Dataset schema (string columns report length ranges; class columns report distinct-value counts):

| Column | Dtype | Stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
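Read as a flat table, each row is one GitHub issue event; the records in the preview below list these fifteen fields in exactly this order, one field per line. As a minimal sketch of loading and sanity-checking such a dump with pandas (the filename `issues.parquet` is hypothetical; substitute the actual export of this dataset):

```python
import pandas as pd

# Hypothetical filename; substitute the actual export of this dataset.
df = pd.read_parquet("issues.parquet")

# The fifteen columns from the schema above, in order.
expected = [
    "Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
    "action", "title", "labels", "body", "index", "text_combine",
    "label", "text", "binary_label",
]
assert list(df.columns) == expected

# `label` has two classes; in the records shown here, label == "process"
# pairs with binary_label == 1 and label == "non_process" with 0.
print(df["label"].value_counts())
print(df.groupby("label")["binary_label"].unique())
```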
15,697
19,848,206,225
IssuesEvent
2022-01-21 09:19:18
ooi-data/CE06ISSM-RID16-07-NUTNRB000-recovered_host-suna_dcl_recovered
https://api.github.com/repos/ooi-data/CE06ISSM-RID16-07-NUTNRB000-recovered_host-suna_dcl_recovered
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T09:19:17.461091. ## Details Flow name: `CE06ISSM-RID16-07-NUTNRB000-recovered_host-suna_dcl_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data, dtype=dtype) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T09:19:17.461091. ## Details Flow name: `CE06ISSM-RID16-07-NUTNRB000-recovered_host-suna_dcl_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data,
dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host suna dcl recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields
fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
1
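In each record, `text_combine` is the title and body joined with " - ", and `text` is a cleaned version of it. The actual preprocessing pipeline is not documented in this dump; the sketch below is a plausible reconstruction of the pattern visible in the preview (lowercasing, URL and bracketed-segment removal, punctuation split, digit-bearing tokens dropped). One visible divergence: the real `text` column preserves emoji such as 🛑, which this simple version drops.

```python
import re

def clean_text(text_combine: str) -> str:
    """Plausible reconstruction of the text_combine -> text cleaning step."""
    text = text_combine.lower()
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"\[[^\]]*\]", " ", text)     # drop [tags], checkboxes, link text
    # Note: the real `text` column keeps emoji (e.g. 🛑); this regex drops them.
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # split on punctuation and markup
    text = re.sub(r"\S*\d\S*", " ", text)       # drop tokens with digits (IDs, versions, dates)
    return re.sub(r"\s+", " ", text).strip()

# Checked against the title of the next record below:
clean_text("runtime expression $[variables.var] renders as is when not found")
# -> 'runtime expression renders as is when not found'
```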
9,413
12,406,993,168
IssuesEvent
2020-05-21 20:12:40
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
runtime expression $[variables.var] renders as is when not found
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
while the doc says it renders empty string. Example pipeline: ```yml steps: - bash: echo '$[variables.var]' ``` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#feedback) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
runtime expression $[variables.var] renders as is when not found - while the doc says it renders empty string. Example pipeline: ```yml steps: - bash: echo '$[variables.var]' ``` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#feedback) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
runtime expression renders as is when not found while the doc says it renders empty string example pipeline yml steps bash echo document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
14,291
17,264,841,937
IssuesEvent
2021-07-22 12:37:56
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] Custom schedule > Incorrect label 'Missed' is shown when participant completed previous run and next run yet to start
Bug P2 Process: Fixed Process: Tested QA Process: Tested dev iOS
Steps: 1. Configure a custom schedule regular/anchor based having multiple runs 2. Let 1st run start from 1PM 07/06 to 2PM 07/06 3. Let 2nd run start from 3PM 07/06 to 4PM 07/06 3. Participant completes the 1st run successfully 4. Let 1st run gets expired 5. Observe the activity 'Label' in between 1st run and 2nd run i.e from 2.01 PM to 2:59 PM Actual: Label 'Missed' is shown Expected: Label 'Start' should be shown (Refer android screenshot) Issue not observed for other scheduling frequencies Issue should be fixed for questionnaires and active tasks Issue should be fixed for Regular and Anchor based custom scheduling iOS: ![iOS](https://user-images.githubusercontent.com/60386291/124558769-94363180-de58-11eb-8ee4-06709a9c1e17.png) Android (for reference): ![Android](https://user-images.githubusercontent.com/60386291/124559186-0f97e300-de59-11eb-8168-4ad0d3e17451.jpg)
3.0
[iOS] Custom schedule > Incorrect label 'Missed' is shown when participant completed previous run and next run yet to start - Steps: 1. Configure a custom schedule regular/anchor based having multiple runs 2. Let 1st run start from 1PM 07/06 to 2PM 07/06 3. Let 2nd run start from 3PM 07/06 to 4PM 07/06 3. Participant completes the 1st run successfully 4. Let 1st run gets expired 5. Observe the activity 'Label' in between 1st run and 2nd run i.e from 2.01 PM to 2:59 PM Actual: Label 'Missed' is shown Expected: Label 'Start' should be shown (Refer android screenshot) Issue not observed for other scheduling frequencies Issue should be fixed for questionnaires and active tasks Issue should be fixed for Regular and Anchor based custom scheduling iOS: ![iOS](https://user-images.githubusercontent.com/60386291/124558769-94363180-de58-11eb-8ee4-06709a9c1e17.png) Android (for reference): ![Android](https://user-images.githubusercontent.com/60386291/124559186-0f97e300-de59-11eb-8168-4ad0d3e17451.jpg)
process
custom schedule incorrect label missed is shown when participant completed previous run and next run yet to start steps configure a custom schedule regular anchor based having multiple runs let run start from to let run start from to participant completes the run successfully let run gets expired observe the activity label in between run and run i e from pm to pm actual label missed is shown expected label start should be shown refer android screenshot issue not observed for other scheduling frequencies issue should be fixed for questionnaires and active tasks issue should be fixed for regular and anchor based custom scheduling ios android for reference
1
5,195
7,973,978,025
IssuesEvent
2018-07-17 02:31:07
pelias/pelias
https://api.github.com/repos/pelias/pelias
closed
handle OA records with no HASH property
processed
it seems like a bunch of the OA records still don't have a `HASH` property, eg: ```bash $ head /data/oa/us/nj/statewide.csv LON,LAT,NUMBER,STREET,UNIT,CITY,DISTRICT,REGION,POSTCODE,ID,HASH -74.4226095,39.3689364,729,LEXINGTON AVE,,,,,,, -74.4147457,39.3689073,29,N VERMONT AVE,,,,,,, -74.4227321,39.3688336,219,N DELAWARE AVE,,,,,,, -74.4220756,39.3648725,901,ATLANTIC AVE,,,,,,, -74.4345249,39.3666268,400,N KENTUCKY AVE,,,,,,, -74.415292,39.3684783,19,IRVING AVE,,,,,,, -74.4330455,39.3606627,1729,ATLANTIC AVE RR,,,,,,, -74.4646897,39.3464363,1,S PLAZA PL,,,,,,, -74.4637452,39.3456241,4601,ATLANTIC AVE,,,,,,, ``` (that's from a fresh download of `openaddr-collected-global.zip` within an hour of writing this). this means that search results don't actually generate GID values correctly for these records, eg: ``` "properties": { "id": "us/nj/statewide:0", "gid": "openaddresses:address:us/nj/statewide:0", "layer": "address", "source": "openaddresses", "source_id": "us/nj/statewide:0", "name": "729 Lexington Ave", "housenumber": "729", "street": "Lexington Ave", ``` cc/ @iandees
1.0
handle OA records with no HASH property - it seems like a bunch of the OA records still don't have a `HASH` property, eg: ```bash $ head /data/oa/us/nj/statewide.csv LON,LAT,NUMBER,STREET,UNIT,CITY,DISTRICT,REGION,POSTCODE,ID,HASH -74.4226095,39.3689364,729,LEXINGTON AVE,,,,,,, -74.4147457,39.3689073,29,N VERMONT AVE,,,,,,, -74.4227321,39.3688336,219,N DELAWARE AVE,,,,,,, -74.4220756,39.3648725,901,ATLANTIC AVE,,,,,,, -74.4345249,39.3666268,400,N KENTUCKY AVE,,,,,,, -74.415292,39.3684783,19,IRVING AVE,,,,,,, -74.4330455,39.3606627,1729,ATLANTIC AVE RR,,,,,,, -74.4646897,39.3464363,1,S PLAZA PL,,,,,,, -74.4637452,39.3456241,4601,ATLANTIC AVE,,,,,,, ``` (that's from a fresh download of `openaddr-collected-global.zip` within an hour of writing this). this means that search results don't actually generate GID values correctly for these records, eg: ``` "properties": { "id": "us/nj/statewide:0", "gid": "openaddresses:address:us/nj/statewide:0", "layer": "address", "source": "openaddresses", "source_id": "us/nj/statewide:0", "name": "729 Lexington Ave", "housenumber": "729", "street": "Lexington Ave", ``` cc/ @iandees
process
handle oa records with no hash property it seems like a bunch of the oa records still don t have a hash property eg bash head data oa us nj statewide csv lon lat number street unit city district region postcode id hash lexington ave n vermont ave n delaware ave atlantic ave n kentucky ave irving ave atlantic ave rr s plaza pl atlantic ave that s from a fresh download of openaddr collected global zip within an hour of writing this this means that search results don t actually generate gid values correctly for these records eg properties id us nj statewide gid openaddresses address us nj statewide layer address source openaddresses source id us nj statewide name lexington ave housenumber street lexington ave cc iandees
1
289,108
31,931,181,359
IssuesEvent
2023-09-19 07:32:27
Trinadh465/linux-4.1.15_CVE-2023-4128
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
opened
CVE-2022-28356 (Medium) detected in linuxlinux-4.6
Mend: dependency security vulnerability
## CVE-2022-28356 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/llc/af_llc.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/llc/af_llc.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel before 5.17.1, a refcount leak bug was found in net/llc/af_llc.c. <p>Publish Date: 2022-04-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-28356>CVE-2022-28356</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-28356">https://www.linuxkernelcves.com/cves/CVE-2022-28356</a></p> <p>Release Date: 2022-04-02</p> <p>Fix Resolution: v4.9.309,v4.14.274,v4.19.237,v5.4.188,v5.10.109,v5.15.32,v5.16.18,v5.17.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-28356 (Medium) detected in linuxlinux-4.6 - ## CVE-2022-28356 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/llc/af_llc.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/llc/af_llc.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel before 5.17.1, a refcount leak bug was found in net/llc/af_llc.c. <p>Publish Date: 2022-04-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-28356>CVE-2022-28356</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-28356">https://www.linuxkernelcves.com/cves/CVE-2022-28356</a></p> <p>Release Date: 2022-04-02</p> <p>Fix Resolution: v4.9.309,v4.14.274,v4.19.237,v5.4.188,v5.10.109,v5.15.32,v5.16.18,v5.17.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net llc af llc c net llc af llc c vulnerability details in the linux kernel before a refcount leak bug was found in net llc af llc c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
12,074
14,739,925,035
IssuesEvent
2021-01-07 08:11:33
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Dev] Open study > Enrollment registry > Pagination issue
Bug P1 Participant manager Process: Dev Process: Tested dev
AR : 2nd page is getting displayed eventhough a study has less than or equal to 10 participant records ER : If the study has less than or equal to 10 participant records, then 2nd page should not be displayed ![pagination2](https://user-images.githubusercontent.com/71445210/103085324-651c4900-4607-11eb-9313-ef45029d98c7.png) ![pagination 1](https://user-images.githubusercontent.com/71445210/103085309-5897f080-4607-11eb-95e1-aee45c19c5fd.png)
2.0
[PM] [Dev] Open study > Enrollment registry > Pagination issue - AR : 2nd page is getting displayed eventhough a study has less than or equal to 10 participant records ER : If the study has less than or equal to 10 participant records, then 2nd page should not be displayed ![pagination2](https://user-images.githubusercontent.com/71445210/103085324-651c4900-4607-11eb-9313-ef45029d98c7.png) ![pagination 1](https://user-images.githubusercontent.com/71445210/103085309-5897f080-4607-11eb-95e1-aee45c19c5fd.png)
process
open study enrollment registry pagination issue ar page is getting displayed eventhough a study has less than or equal to participant records er if the study has less than or equal to participant records then page should not be displayed
1
128,406
5,064,777,928
IssuesEvent
2016-12-23 08:45:27
praekeltfoundation/gem-bbb-indo
https://api.github.com/repos/praekeltfoundation/gem-bbb-indo
closed
Missed Deadline text on Goal view
enhancement priority - medium
Target date copy changes from **"TARGET DATE"** to **"TARGET DATE MISSED"** when a Goal is overdue.
1.0
Missed Deadline text on Goal view - Target date copy changes from **"TARGET DATE"** to **"TARGET DATE MISSED"** when a Goal is overdue.
non_process
missed deadline text on goal view target date copy changes from target date to target date missed when a goal is overdue
0
339,341
30,442,451,905
IssuesEvent
2023-07-15 08:21:11
sonic-net/sonic-mgmt
https://api.github.com/repos/sonic-net/sonic-mgmt
closed
show snmp counters unavailable failing test_ft_snmp_trap_counter
Spytest
Hello, In spytest/apis/system/snmp.py, we have a method called show_snmp_counters (and clear snmp counters) which is being called by test_ft_snmp_trap_counter and test_ft_snmp_basic_counters. show_snmp_counters tries to run "show snmp counters" on the DUT. However, on the DUT there is no "show snmp counters" in klish CLI. What could be missing? The DUT is running : SONiC Software Version: SONiC.master.565-e0781f46 Distribution: Debian 10.6 Kernel: 4.19.0-9-2-amd64 Build commit: e0781f46 Build date: Mon Nov 23 08:32:04 UTC 2020 Built by: johnar@jenkins-worker-12 Platform: x86_64-kvm_x86_64-r0 HwSKU: Force10-S6000 ASIC: vs Serial Number: 000000 Uptime: 03:01:08 up 1 day, 4:44, 1 user, load average: 0.45, 0.81, 0.67 Docker images: REPOSITORY TAG IMAGE ID SIZE docker-gbsyncd-vs latest fc2bcaf7dc91 447MB docker-gbsyncd-vs master.565-e0781f46 fc2bcaf7dc91 447MB docker-syncd-vs latest a599f4b6a7c9 447MB docker-syncd-vs master.565-e0781f46 a599f4b6a7c9 447MB docker-snmp latest 72953533ec0e 477MB docker-snmp master.565-e0781f46 72953533ec0e 477MB docker-teamd latest 0392627a102f 483MB docker-teamd master.565-e0781f46 0392627a102f 483MB docker-nat latest 61210a29630f 486MB docker-nat master.565-e0781f46 61210a29630f 486MB docker-router-advertiser latest dcc0e15c5463 441MB docker-router-advertiser master.565-e0781f46 dcc0e15c5463 441MB docker-platform-monitor latest 053ce63c0b8f 563MB docker-platform-monitor master.565-e0781f46 053ce63c0b8f 563MB docker-lldp latest c68597acb666 514MB docker-lldp master.565-e0781f46 c68597acb666 514MB docker-dhcp-relay latest badb3fb27c4e 448MB docker-dhcp-relay master.565-e0781f46 badb3fb27c4e 448MB docker-database latest d92f5b4cb6e0 441MB docker-database master.565-e0781f46 d92f5b4cb6e0 441MB docker-orchagent latest d9c215c333d9 497MB docker-orchagent master.565-e0781f46 d9c215c333d9 497MB docker-sonic-telemetry latest 26941527084a 511MB docker-sonic-telemetry master.565-e0781f46 26941527084a 511MB docker-sonic-mgmt-framework latest d92d56dac3a5 598MB docker-sonic-mgmt-framework master.565-e0781f46 d92d56dac3a5 598MB docker-fpm-frr latest 6c139d86ef63 500MB docker-fpm-frr master.565-e0781f46 6c139d86ef63 500MB docker-sflow latest 883128578286 484MB docker-sflow master.565-e0781f46 883128578286 484MB
1.0
show snmp counters unavailable failing test_ft_snmp_trap_counter - Hello, In spytest/apis/system/snmp.py, we have a method called show_snmp_counters (and clear snmp counters) which is being called by test_ft_snmp_trap_counter and test_ft_snmp_basic_counters. show_snmp_counters tries to run "show snmp counters" on the DUT. However, on the DUT there is no "show snmp counters" in klish CLI. What could be missing? The DUT is running : SONiC Software Version: SONiC.master.565-e0781f46 Distribution: Debian 10.6 Kernel: 4.19.0-9-2-amd64 Build commit: e0781f46 Build date: Mon Nov 23 08:32:04 UTC 2020 Built by: johnar@jenkins-worker-12 Platform: x86_64-kvm_x86_64-r0 HwSKU: Force10-S6000 ASIC: vs Serial Number: 000000 Uptime: 03:01:08 up 1 day, 4:44, 1 user, load average: 0.45, 0.81, 0.67 Docker images: REPOSITORY TAG IMAGE ID SIZE docker-gbsyncd-vs latest fc2bcaf7dc91 447MB docker-gbsyncd-vs master.565-e0781f46 fc2bcaf7dc91 447MB docker-syncd-vs latest a599f4b6a7c9 447MB docker-syncd-vs master.565-e0781f46 a599f4b6a7c9 447MB docker-snmp latest 72953533ec0e 477MB docker-snmp master.565-e0781f46 72953533ec0e 477MB docker-teamd latest 0392627a102f 483MB docker-teamd master.565-e0781f46 0392627a102f 483MB docker-nat latest 61210a29630f 486MB docker-nat master.565-e0781f46 61210a29630f 486MB docker-router-advertiser latest dcc0e15c5463 441MB docker-router-advertiser master.565-e0781f46 dcc0e15c5463 441MB docker-platform-monitor latest 053ce63c0b8f 563MB docker-platform-monitor master.565-e0781f46 053ce63c0b8f 563MB docker-lldp latest c68597acb666 514MB docker-lldp master.565-e0781f46 c68597acb666 514MB docker-dhcp-relay latest badb3fb27c4e 448MB docker-dhcp-relay master.565-e0781f46 badb3fb27c4e 448MB docker-database latest d92f5b4cb6e0 441MB docker-database master.565-e0781f46 d92f5b4cb6e0 441MB docker-orchagent latest d9c215c333d9 497MB docker-orchagent master.565-e0781f46 d9c215c333d9 497MB docker-sonic-telemetry latest 26941527084a 511MB docker-sonic-telemetry master.565-e0781f46 26941527084a 511MB docker-sonic-mgmt-framework latest d92d56dac3a5 598MB docker-sonic-mgmt-framework master.565-e0781f46 d92d56dac3a5 598MB docker-fpm-frr latest 6c139d86ef63 500MB docker-fpm-frr master.565-e0781f46 6c139d86ef63 500MB docker-sflow latest 883128578286 484MB docker-sflow master.565-e0781f46 883128578286 484MB
non_process
show snmp counters unavailable failing test ft snmp trap counter hello in spytest apis system snmp py we have a method called show snmp counters and clear snmp counters which is being called by test ft snmp trap counter and test ft snmp basic counters show snmp counters tries to run show snmp counters on the dut however on the dut there is no show snmp counters in klish cli what could be missing the dut is running sonic software version sonic master distribution debian kernel build commit build date mon nov utc built by johnar jenkins worker platform kvm hwsku asic vs serial number uptime up day user load average docker images repository tag image id size docker gbsyncd vs latest docker gbsyncd vs master docker syncd vs latest docker syncd vs master docker snmp latest docker snmp master docker teamd latest docker teamd master docker nat latest docker nat master docker router advertiser latest docker router advertiser master docker platform monitor latest docker platform monitor master docker lldp latest docker lldp master docker dhcp relay latest docker dhcp relay master docker database latest docker database master docker orchagent latest docker orchagent master docker sonic telemetry latest docker sonic telemetry master docker sonic mgmt framework latest docker sonic mgmt framework master docker fpm frr latest docker fpm frr master docker sflow latest docker sflow master
0
13,209
15,652,604,302
IssuesEvent
2021-03-23 11:37:20
GoogleCloudPlatform/dotnet-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
closed
Clouddemo: Failed to build, but only transiently.
priority: p3 samples type: process
See CI logs [here](https://source.cloud.google.com/results/invocations/73312485-45fb-41eb-b505-bd2bc8c09b77/targets/github%2Fdotnet-docs-samples%2Fapplications%2Fclouddemo%2Fnetcore/tests). Not much information there unfortunatelly. Might be another manifestation of #1278 @jpassing I understand this might be just a one-of. But it would be great if you could take a quick look. I'm also creating the issue to be able to track this from the start. Even if we close it now as a one-of, we can always reopen later if it continues to happen. Thanks
1.0
Clouddemo: Failed to build, but only transiently. - See CI logs [here](https://source.cloud.google.com/results/invocations/73312485-45fb-41eb-b505-bd2bc8c09b77/targets/github%2Fdotnet-docs-samples%2Fapplications%2Fclouddemo%2Fnetcore/tests). Not much information there unfortunatelly. Might be another manifestation of #1278 @jpassing I understand this might be just a one-of. But it would be great if you could take a quick look. I'm also creating the issue to be able to track this from the start. Even if we close it now as a one-of, we can always reopen later if it continues to happen. Thanks
process
clouddemo failed to build but only transiently see ci logs not much information there unfortunatelly might be another manifestation of jpassing i understand this might be just a one of but it would be great if you could take a quick look i m also creating the issue to be able to track this from the start even if we close it now as a one of we can always reopen later if it continues to happen thanks
1
49,859
13,466,498,350
IssuesEvent
2020-09-09 23:05:38
wrbejar/JavaVulnerableLab
https://api.github.com/repos/wrbejar/JavaVulnerableLab
opened
CVE-2017-3523 (High) detected in mysql-connector-java-5.1.26.jar
security vulnerability
## CVE-2017-3523 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.26.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to dependency file: /tmp/ws-ua_20200909230431_TXNAGL/archiveExtraction_JAXWNN/20200909230432/ws-scm_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerableLab/commit/b2c9e600cc70c6f238a3ff941d0e23ea9e816a2a">b2c9e600cc70c6f238a3ff941d0e23ea9e816a2a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H). <p>Publish Date: 2017-04-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html">https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html</a></p> <p>Release Date: 2017-04-24</p> <p>Fix Resolution: 5.1.41</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"}],"vulnerabilityIdentifier":"CVE-2017-3523","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2017-3523 (High) detected in mysql-connector-java-5.1.26.jar - ## CVE-2017-3523 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.26.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to dependency file: /tmp/ws-ua_20200909230431_TXNAGL/archiveExtraction_JAXWNN/20200909230432/ws-scm_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerableLab/commit/b2c9e600cc70c6f238a3ff941d0e23ea9e816a2a">b2c9e600cc70c6f238a3ff941d0e23ea9e816a2a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H). <p>Publish Date: 2017-04-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html">https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html</a></p> <p>Release Date: 2017-04-24</p> <p>Fix Resolution: 5.1.41</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"}],"vulnerabilityIdentifier":"CVE-2017-3523","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in mysql connector java jar cve high severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file tmp ws ua txnagl archiveextraction jaxwnn ws scm depth javavulnerablelab target javavulnerablelab meta inf maven org cysecurity javavulnerablelab pom xml path to vulnerable library canner repository mysql mysql connector java mysql connector java jar depth javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar canner repository mysql mysql connector java mysql connector java jar depth javavulnerablelab bin target javavulnerablelab web inf lib mysql connector java jar javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar canner repository mysql mysql connector java mysql connector java jar depth javavulnerablelab bin target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar depth javavulnerablelab target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar javavulnerablelab bin target javavulnerablelab web inf lib mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h vulnerabilityurl
0
656,562
21,767,902,765
IssuesEvent
2022-05-13 05:35:31
COS301-SE-2022/Twitter-Summariser
https://api.github.com/repos/COS301-SE-2022/Twitter-Summariser
closed
Create mockups of at least 3 use cases
priority:high status:not-ready scope:frontend role:frontend-engineer role:team-lead role:backend-engineer
List of use cases to mockup: - [x] Explore published reports - [x] Search for topic to generate - [x] Generate report
1.0
Create mockups of at least 3 use cases - List of use cases to mockup: - [x] Explore published reports - [x] Search for topic to generate - [x] Generate report
non_process
create mockups of at least use cases list of use cases to mockup explore published reports search for topic to generate generate report
0
11,167
13,957,694,437
IssuesEvent
2020-10-24 08:11:15
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
RO: Harvesting request
Geoportal Harvesting process RO - Romania
Dear Angelo, Yesterday, during the harvesting process from the National Geoportal, we had an incident on it. Therefore not all metadata files were visible at that time. So, can you please start again a harvesting process for the Romanian Geoportal ? Best regards, Simona Bunea
1.0
RO: Harvesting request - Dear Angelo, Yesterday, during the harvesting process from the National Geoportal, we had an incident on it. Therefore not all metadata files were visible at that time. So, can you please start again a harvesting process for the Romanian Geoportal ? Best regards, Simona Bunea
process
ro harvesting request dear angelo yesterday during the harvesting process from the national geoportal we had an incident on it therefore not all metadata files were visible at that time so can you please start again a harvesting process for the romanian geoportal best regards simona bunea
1
17,789
23,716,078,268
IssuesEvent
2022-08-30 11:54:23
cloudfoundry/korifi
https://api.github.com/repos/cloudfoundry/korifi
opened
[Feature]: Developer can push apps using the top-level `health-check-http-endpoint` field in the manifest
Top-level process config
### Blockers/Dependencies _No response_ ### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the following node app: ```js var http = require('http'); http.createServer(function (request, response) { if (request.url == '/health') { response.writeHead(200, {'Content-Type': 'text/plain'}); response.end('ok'); } else { response.writeHead(500, {'Content-Type': 'text/plain'}); response.end('no'); } }).listen(process.env.PORT); ``` with the following `manifest.yml`: ```yaml --- applications: - name: my-app health-check-http-endpoint: /health ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 3/3 memory usage: 256M start command: npm start state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app health-check-http-endpoint: /wrong processes: - type: web health-check-http-endpoint: /health ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above ### Dev Notes _No response_
1.0
[Feature]: Developer can push apps using the top-level `health-check-http-endpoint` field in the manifest - ### Blockers/Dependencies _No response_ ### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the following node app: ```js var http = require('http'); http.createServer(function (request, response) { if (request.url == '/health') { response.writeHead(200, {'Content-Type': 'text/plain'}); response.end('ok'); } else { response.writeHead(500, {'Content-Type': 'text/plain'}); response.end('no'); } }).listen(process.env.PORT); ``` with the following `manifest.yml`: ```yaml --- applications: - name: my-app health-check-http-endpoint: /health ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 3/3 memory usage: 256M start command: npm start state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app health-check-http-endpoint: /wrong processes: - type: web health-check-http-endpoint: /health ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above ### Dev Notes _No response_
process
developer can push apps using the top level health check http endpoint field in the manifest blockers dependencies no response background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the following node app js var http require http http createserver function request response if request url health response writehead content type text plain response end ok else response writehead content type text plain response end no listen process env port with the following manifest yml yaml applications name my app health check http endpoint health when i cf push then i see the push succeeds with an output similar to this name test requested state started routes test vcap me last uploaded mon aug utc stack buildpacks name version detect output buildpack name nodejs buildpack nodejs nodejs type web sidecars instances memory usage start command npm start state since cpu memory disk details running of of given i have the same app with the following manifest and manifest yml looks like this yaml applications name my app health check http endpoint wrong processes type web health check http endpoint health when i cf push then i see the push succeeds with the same output as above dev notes no response
1
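The acceptance criteria in the record above hinge on a precedence rule: top-level manifest fields apply to the web process unless that process overrides them. A minimal sketch of that rule follows, in Python with plain dicts; the function name, the field set, and the choice to fold top-level values only into the web process are assumptions drawn from the acceptance criteria, not Korifi's actual implementation.

```python
# Sketch of the manifest precedence rule implied by the acceptance criteria:
# top-level process fields apply to the web process unless that process
# overrides them. Illustrative only; not Korifi's actual implementation.
TOP_LEVEL_PROCESS_FIELDS = {
    "health-check-http-endpoint",
    "command",
    "instances",
    "memory",
}

def effective_process_config(app: dict) -> list[dict]:
    """Return per-process configs with top-level fields folded in."""
    top_level = {k: v for k, v in app.items() if k in TOP_LEVEL_PROCESS_FIELDS}
    processes = app.get("processes") or [{"type": "web"}]
    merged = []
    for proc in processes:
        cfg = dict(top_level) if proc.get("type") == "web" else {}
        cfg.update(proc)  # process-level values win over top-level ones
        merged.append(cfg)
    return merged

app = {
    "name": "my-app",
    "health-check-http-endpoint": "/wrong",
    "processes": [{"type": "web", "health-check-http-endpoint": "/health"}],
}
# Matches the second acceptance criterion: the process-level endpoint wins.
assert effective_process_config(app)[0]["health-check-http-endpoint"] == "/health"
```

Whether non-web process types should also inherit top-level fields is left open here, since the acceptance criteria only exercise the web process.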
18,192
24,240,172,304
IssuesEvent
2022-09-27 05:43:35
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
GO:0044662 disruption by virus of host cell membrane
ready multi-species process
From @ValWood in #18692 (this point was not dealt with): * **GO:0044662 disruption by virus of host cell membrane** A process by which a virus has a negative effect on the functioning of a host cellular membrane. I don't understand this one at all: what is a "negative effect on the functioning of a host cellular membrane"? 5 CACAO annotations, but looking at the paper it should probably merge into GO:0044659 cytolysis by virus of host cell. Definition (GO:0044659 GONUTS page): The killing by a virus of a cell by means of the rupture of cell membranes and the loss of cytoplasm. PMID:26728778. It's another way of saying the same thing, i.e. I think most of the high-level "disruption terms" can be removed, as most of the meaningful terms have alternative sensible parents... * There is one child term, **GO:0090680 disruption by virus of host outer membrane** @dsiegele Can the CACAO annotations be re-housed? Thanks, Pascale
1.0
GO:0044662 disruption by virus of host cell membrane - From @ValWood in #18692 (this point was not dealt with): * **GO:0044662 disruption by virus of host cell membrane** A process by which a virus has a negative effect on the functioning of a host cellular membrane. I don't understand this one at all: what is a "negative effect on the functioning of a host cellular membrane"? 5 CACAO annotations, but looking at the paper it should probably merge into GO:0044659 cytolysis by virus of host cell. Definition (GO:0044659 GONUTS page): The killing by a virus of a cell by means of the rupture of cell membranes and the loss of cytoplasm. PMID:26728778. It's another way of saying the same thing, i.e. I think most of the high-level "disruption terms" can be removed, as most of the meaningful terms have alternative sensible parents... * There is one child term, **GO:0090680 disruption by virus of host outer membrane** @dsiegele Can the CACAO annotations be re-housed? Thanks, Pascale
process
go disruption by virus of host cell membrane from valwood in this point was not dealt with go disruption by virus of host cell membrane a process by which a virus has a negative effect on the functioning of a host cellular membrane i don t understand this one at all what is negative effect on the functioning of a host cellular membrane cacao annotations but looking at the paper it should probably merge into go cytolysis by virus of host cell definition go gonuts page the killing by a virus of a cell by means of the rupture of cell membranes and the loss of cytoplasm pmid it s another way of saying the same thing i e i think most of the high level disruption terms can be removed as most of the meaningful terms have alternative sensible parents there is one child term go disruption by virus of host outer membrane dsiegele can the cacao annotations be re housed thanks pascale
1
19,332
25,472,559,240
IssuesEvent
2022-11-25 11:29:00
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[IDP] [PM] Null value is getting displayed in the phone number field in the following scenario
Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
**Pre-condition:** MFA should be disabled in the PM **Steps:** 1. Login to PM 2. Add admin in the application 3. Click on account activation link of added admin 4. Set up account without 'Phone number' 5. Login as added admin 6. Navigate to 'My account' screen and verify **AR:** Null value is displayed in the phone number field in the above scenario **ER:** Phone number field should be empty without any value ![null](https://user-images.githubusercontent.com/86007179/177759020-45e4b60d-0a87-40d6-859d-e9bb412cc181.png)
3.0
[IDP] [PM] Null value is getting displayed in the phone number field in the following scenario - **Pre-condition:** MFA should be disabled in the PM **Steps:** 1. Login to PM 2. Add admin in the application 3. Click on account activation link of added admin 4. Set up account without 'Phone number' 5. Login as added admin 6. Navigate to 'My account' screen and verify **AR:** Null value is displayed in the phone number field in the above scenario **ER:** Phone number field should be empty without any value ![null](https://user-images.githubusercontent.com/86007179/177759020-45e4b60d-0a87-40d6-859d-e9bb412cc181.png)
process
null value is getting displayed in the phone number field in the following scenario pre condition mfa should be disabled in the pm steps login to pm add admin in the application click on account activation link of added admin set up account without phone number login as added admin navigate to my account screen and verify ar null value is displayed in the phone number field in the above scenario er phone number field should be empty without any value
1
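The expected result in this record amounts to a null-coalescing guard at render time. A minimal sketch of that pattern follows; the field and function names are hypothetical, not taken from the Participant Manager codebase.

```python
# Sketch of the render-time guard the expected result calls for: a missing
# phone number should surface as an empty string, never as the literal
# "null". Names here are hypothetical, not the PM codebase's own.
def display_phone(profile: dict) -> str:
    phone = profile.get("phone_number")
    # Covers both an absent key and an explicit null from the backend JSON.
    return phone if phone is not None else ""

assert display_phone({"phone_number": None}) == ""
assert display_phone({}) == ""
assert display_phone({"phone_number": "+1 555 0100"}) == "+1 555 0100"
```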
83,661
24,116,302,282
IssuesEvent
2022-09-20 14:58:08
veracruz-project/veracruz
https://api.github.com/repos/veracruz-project/veracruz
opened
Compilation artifacts are not reproducible due to ever-changing build ID
bug build-process
**Describe the bug** Veracruz binaries (client, server, attestation, runtime manager) contain a build ID (`.note.gnu.build-id` section) in their ELF headers. This build ID is a hash determined by rustc and/or the linker. However, for unknown reasons (timestamp somewhere? source of randomness?), the build ID changes every time the crate is cargo cleaned then rebuilt. This results in binaries that are functionally equivalent but have different overall hashes, which gives different Docker images. **To Reproduce** * Build Veracruz, e.g. `make nitro` * Hash e.g. `veracruz-server`: `sha256sum nitro-host/target/debug/veracruz-server` * Clean: `make clean` * Build Veracruz again: `make nitro` * Hash `veracruz-server`: `sha256sum nitro-host/target/debug/veracruz-server` * The two hashes are different because the build ID is different: `objdump --section=.note.gnu.build-id --full-contents nitro-host/target/release/veracruz-server` **Expected behaviour** It's not clear to me how the build ID is determined, but I would expect it to stay constant between builds. **Solutions** Strip build ID from binaries or fix build ID generation
1.0
Compilation artifacts are not reproducible due to ever-changing build ID - **Describe the bug** Veracruz binaries (client, server, attestation, runtime manager) contain a build ID (`.note.gnu.build-id` section) in their ELF headers. This build ID is a hash determined by rustc and/or the linker. However, for unknown reasons (timestamp somewhere? source of randomness?), the build ID changes every time the crate is cargo cleaned then rebuilt. This results in binaries that are functionally equivalent but have different overall hashes, which gives different Docker images. **To Reproduce** * Build Veracruz, e.g. `make nitro` * Hash e.g. `veracruz-server`: `sha256sum nitro-host/target/debug/veracruz-server` * Clean: `make clean` * Build Veracruz again: `make nitro` * Hash `veracruz-server`: `sha256sum nitro-host/target/debug/veracruz-server` * The two hashes are different because the build ID is different: `objdump --section=.note.gnu.build-id --full-contents nitro-host/target/release/veracruz-server` **Expected behaviour** It's not clear to me how the build ID is determined, but I would expect it to stay constant between builds. **Solutions** Strip build ID from binaries or fix build ID generation
non_process
compilation artifacts are not reproducible due to ever changing build id describe the bug veracruz binaries client server attestation runtime manager contain a build id note gnu build id section in their elf headers this build id is a hash determined by rustc and or the linker however for unknown reasons timestamp somewhere source of randomness the build id changes every time the crate is cargo cleaned then rebuilt this results in binaries that are functionally equivalent but have different overall hashes which gives different docker images to reproduce build veracruz e g make nitro hash e g veracruz server nitro host target debug veracruz server clean make clean build veracruz again make nitro hash veracruz server nitro host target debug veracruz server the two hashes are different because the build id is different objdump section note gnu build id full contents nitro host target release veracruz server expected behaviour it s not clear to me how the build id is determined but i would expect it to stay constant between builds solutions strip build id from binaries or fix build id generation
0
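A quick way to confirm that the build ID is the only difference between two builds is to strip the `.note.gnu.build-id` section before hashing. The sketch below does this with binutils' `objcopy`; the binary paths are placeholders and `objcopy` must be on `PATH`.

```python
# Sketch: compare two builds of the same binary while ignoring the
# .note.gnu.build-id section, using binutils' objcopy. Paths are
# placeholders; requires objcopy on PATH.
import hashlib
import subprocess
import tempfile

def hash_without_build_id(path: str) -> str:
    with tempfile.NamedTemporaryFile(suffix=".stripped") as tmp:
        subprocess.run(
            ["objcopy", "--remove-section=.note.gnu.build-id", path, tmp.name],
            check=True,
        )
        with open(tmp.name, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

h1 = hash_without_build_id("build-a/veracruz-server")
h2 = hash_without_build_id("build-b/veracruz-server")
print("identical modulo build ID" if h1 == h2 else "real differences remain")
```

If stripping turns out to be the chosen fix, one candidate (untested here) is asking the GNU linker not to emit the section at all, e.g. via `RUSTFLAGS="-C link-arg=-Wl,--build-id=none"`.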
12,583
9,694,148,374
IssuesEvent
2019-05-24 18:06:15
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Update to reflect new Public IP option for SQL Database Managed Instance
assigned-to-author azure-analysis-services/svc doc-enhancement triaged
This text should be updated: “ 2 - Azure SQL Database Managed Instance is supported. Because a managed instance runs within Azure VNet with a private IP address, an On-premises Data Gateway is required.” --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ea45d88b-212b-51ae-b902-6efd5873f6c5 * Version Independent ID: 6f86e171-7553-3d01-00fa-e0865d8d817b * Content: [Data sources supported in Azure Analysis Services](https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource#azsqlmanaged) * Content Source: [articles/analysis-services/analysis-services-datasource.md](https://github.com/Microsoft/azure-docs/blob/master/articles/analysis-services/analysis-services-datasource.md) * Service: **azure-analysis-services** * GitHub Login: @Minewiskan * Microsoft Alias: **owend**
1.0
Update to reflect new Public IP option for SQL Database Managed Instance - This text should be updated: “ 2 - Azure SQL Database Managed Instance is supported. Because a managed instance runs within Azure VNet with a private IP address, an On-premises Data Gateway is required.” --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ea45d88b-212b-51ae-b902-6efd5873f6c5 * Version Independent ID: 6f86e171-7553-3d01-00fa-e0865d8d817b * Content: [Data sources supported in Azure Analysis Services](https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource#azsqlmanaged) * Content Source: [articles/analysis-services/analysis-services-datasource.md](https://github.com/Microsoft/azure-docs/blob/master/articles/analysis-services/analysis-services-datasource.md) * Service: **azure-analysis-services** * GitHub Login: @Minewiskan * Microsoft Alias: **owend**
non_process
update to reflect new public ip option for sql database managed instance this text should be updated “ azure sql database managed instance is supported because a managed instance runs within azure vnet with a private ip address an on premises data gateway is required ” document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service azure analysis services github login minewiskan microsoft alias owend
0
132,214
18,545,258,378
IssuesEvent
2021-10-21 21:10:04
mozilla/foundation.mozilla.org
https://api.github.com/repos/mozilla/foundation.mozilla.org
opened
[PNI Website] Design support & reviews for dev work cont. IV
design Buyer's Guide 🛒
Continuing support and reviews from #7584 - support and review PRs for devs
1.0
[PNI Website] Design support & reviews for dev work cont. IV - Continuing support and reviews from #7584 - support and review PRs for devs
non_process
design support reviews for dev work cont iv continuing support and reviews from support and review prs for devs
0
447,343
12,887,664,267
IssuesEvent
2020-07-13 11:39:17
crestic-urca/remotelabz
https://api.github.com/repos/crestic-urca/remotelabz
closed
Unlink security roles and groups (classes)
feature enhancement normal priority
In GitLab by @jhubert on Nov 21, 2018, 17:07 Groups as implemented in previous versions are redundant with Symfony's roles. They shouldn't be overloaded by a custom class. Despite that, we could create a group class which aims to implement teacher's “classes” (or whatever they are called). *(from redmine: issue id 279, created on 2018-11-21)*
1.0
Unlink security roles and groups (classes) - In GitLab by @jhubert on Nov 21, 2018, 17:07 Groups as implemented in previous versions are redundant with Symfony's roles. They shouldn't be overloaded by a custom class. Despite that, we could create a group class which aims to implement teacher's “classes” (or whatever they are called). *(from redmine: issue id 279, created on 2018-11-21)*
non_process
unlink security roles and groups classes in gitlab by jhubert on nov groups as implemented in previous versions are redundant with symfony s roles they shouldn t be overloaded by a custom class despite that we could create a group class which aims to implement teacher s classes or whatever they are called from redmine issue id created on
0
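The design being proposed in this record is a group entity that carries no authorization semantics, leaving roles entirely to the framework. A language-agnostic sketch follows in Python (the project itself is Symfony/PHP); all names are illustrative.

```python
# Illustrative sketch (the project itself is Symfony/PHP): security roles
# stay plain strings handled by the framework, while "classes" become a
# standalone Group entity that merely references its members.
from dataclasses import dataclass, field

@dataclass
class User:
    username: str
    roles: list[str] = field(default_factory=lambda: ["ROLE_USER"])  # framework concern

@dataclass
class Group:
    """A teacher's class; carries no authorization semantics."""
    name: str
    owner: User
    members: list[User] = field(default_factory=list)

teacher = User("jhubert", roles=["ROLE_TEACHER"])
group = Group(name="Networking 101", owner=teacher)
group.members.append(User("student1"))
```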
5,687
8,559,543,068
IssuesEvent
2018-11-08 21:36:25
MichiganDataScienceTeam/googleanalytics
https://api.github.com/repos/MichiganDataScienceTeam/googleanalytics
closed
Preprocess: u'trafficSource.campaign', u'trafficSource.isTrueDirect', u'trafficSource.keyword',
easy preprocessing
Preprocess the following features: u'trafficSource.campaign', u'trafficSource.isTrueDirect', u'trafficSource.keyword' 1. Standardization: [http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling](http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling) 2. Impute missing values: [http://scikit-learn.org/stable/modules/impute.html](http://scikit-learn.org/stable/modules/impute.html) 3. Normalization: [http://scikit-learn.org/stable/modules/preprocessing.html#normalization](http://scikit-learn.org/stable/modules/preprocessing.html#normalization) 4. Encode categorical features (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features) 5. Discretization (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#discretization](http://scikit-learn.org/stable/modules/preprocessing.html#discretization) [http://scikit-learn.org/stable/modules/preprocessing.html](http://scikit-learn.org/stable/modules/preprocessing.html)
1.0
Preprocess: u'trafficSource.campaign', u'trafficSource.isTrueDirect', u'trafficSource.keyword', - Preprocess the following features: u'trafficSource.campaign', u'trafficSource.isTrueDirect', u'trafficSource.keyword' 1. Standardization: [http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling](http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling) 2. Impute missing values: [http://scikit-learn.org/stable/modules/impute.html](http://scikit-learn.org/stable/modules/impute.html) 3. Normalization: [http://scikit-learn.org/stable/modules/preprocessing.html#normalization](http://scikit-learn.org/stable/modules/preprocessing.html#normalization) 4. Encode categorical features (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features) 5. Discretization (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#discretization](http://scikit-learn.org/stable/modules/preprocessing.html#discretization) [http://scikit-learn.org/stable/modules/preprocessing.html](http://scikit-learn.org/stable/modules/preprocessing.html)
process
preprocess u trafficsource campaign u trafficsource istruedirect u trafficsource keyword preprocess the following features u trafficsource campaign u trafficsource istruedirect u trafficsource keyword standardization impute missing values normalization encode categorical features optional discretization optional
1
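The numbered steps in this record map directly onto scikit-learn transformers. Below is a minimal sketch for the three trafficSource columns, assuming a pandas DataFrame; treating `isTrueDirect` as "only present when true, so missing means False" is an assumption about the Google Analytics export, and standardization/normalization are omitted since they apply to numeric columns, not these categorical ones.

```python
# Sketch of the listed preprocessing steps (impute + encode) applied to the
# three trafficSource columns. Assumes a pandas DataFrame; the boolean
# handling of isTrueDirect is an assumption about the GA export format.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "trafficSource.campaign": ["(not set)", np.nan, "summer_sale"],
    "trafficSource.keyword": [np.nan, "shoes", np.nan],
    "trafficSource.isTrueDirect": [True, np.nan, True],
})

# Assumption: isTrueDirect is only exported when true, so missing == False.
df["trafficSource.isTrueDirect"] = (
    df["trafficSource.isTrueDirect"].fillna(False).astype(bool)
)

categorical = ["trafficSource.campaign", "trafficSource.keyword"]
preprocess = ColumnTransformer(
    [("cat", Pipeline([
        ("impute", SimpleImputer(strategy="constant", fill_value="missing")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical)],
    remainder="passthrough",  # keeps the already-clean boolean column
)
X = preprocess.fit_transform(df)
```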
22,472
31,388,003,143
IssuesEvent
2023-08-26 02:00:09
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 25 Aug 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### An All Deep System for Badminton Game Analysis - **Authors:** Po-Yung Chou, Yu-Chun Lo, Bo-Zheng Xie, Cheng-Hung Lin, Yu-Yung Kao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2308.12645 - **Pdf link:** https://arxiv.org/pdf/2308.12645 - **Abstract** The CoachAI Badminton 2023 Track1 initiative aims to automatically detect events within badminton match videos. Detecting small objects, especially the shuttlecock, is of great importance and demands high precision within the challenge. Such detection is crucial for tasks like hit count, hitting time, and hitting location. However, even after revising the well-regarded shuttlecock detecting model, TrackNet, our object detection models still fall short of the desired accuracy. To address this issue, we've implemented various deep learning methods to tackle the problems arising from noisy detected data, leveraging diverse data types to improve precision. In this report, we detail the detection model modifications we've made and our approach to the 11 tasks. Notably, our system garnered a score of 0.78 out of 1.0 in the challenge. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### AdVerb: Visually Guided Audio Dereverberation - **Authors:** Sanjoy Chowdhury, Sreyan Ghosh, Subhrajyoti Dasgupta, Anton Ratnarajah, Utkarsh Tyagi, Dinesh Manocha - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Sound (cs.SD); Audio and Speech Processing (eess.AS) - **Arxiv link:** https://arxiv.org/abs/2308.12370 - **Pdf link:** https://arxiv.org/pdf/2308.12370 - **Abstract** We present AdVerb, a novel audio-visual dereverberation framework that uses visual cues in addition to the reverberant sound to estimate clean audio. Although audio-only dereverberation is a well-studied problem, our approach incorporates the complementary visual modality to perform audio dereverberation. Given an image of the environment where the reverberated sound signal has been recorded, AdVerb employs a novel geometry-aware cross-modal transformer architecture that captures scene geometry and audio-visual cross-modal relationship to generate a complex ideal ratio mask, which, when applied to the reverberant audio, predicts the clean sound. The effectiveness of our method is demonstrated through extensive quantitative and qualitative evaluations. Our approach significantly outperforms traditional audio-only and audio-visual baselines on three downstream tasks: speech enhancement, speech recognition, and speaker verification, with relative improvements in the range of 18% - 82% on the LibriSpeech test-clean set. We also achieve highly satisfactory RT60 error scores on the AVSpeech dataset. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### SCP: Spherical-Coordinate-based Learned Point Cloud Compression - **Authors:** Ao Luo, Linxin Song, Keisuke Nonaka, Kyohei Unno, Heming Sun, Masayuki Goto, Jiro Katto - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2308.12535 - **Pdf link:** https://arxiv.org/pdf/2308.12535 - **Abstract** In recent years, the task of learned point cloud compression has gained prominence.
An important type of point cloud, the spinning LiDAR point cloud, is generated by spinning LiDAR on vehicles. This process results in numerous circular shapes and azimuthal angle invariance features within the point clouds. However, these two features have been largely overlooked by previous methodologies. In this paper, we introduce a model-agnostic method called Spherical-Coordinate-based learned Point cloud compression (SCP), designed to leverage the aforementioned features fully. Additionally, we propose a multi-level Octree for SCP to mitigate the reconstruction error for distant areas within the Spherical-coordinate-based Octree. SCP exhibits excellent universality, making it applicable to various learned point cloud compression techniques. Experimental results demonstrate that SCP surpasses previous state-of-the-art methods by up to 29.14% in point-to-point PSNR BD-Rate. ### DLIP: Distilling Language-Image Pre-training - **Authors:** Huafeng Kuang, Jie Wu, Xiawu Zheng, Ming Li, Xuefeng Xiao, Rui Wang, Min Zheng, Rongrong Ji - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2308.12956 - **Pdf link:** https://arxiv.org/pdf/2308.12956 - **Abstract** Vision-Language Pre-training (VLP) shows remarkable progress with the assistance of extremely heavy parameters, which challenges deployment in real applications. Knowledge distillation is well recognized as the essential procedure in model compression. However, existing knowledge distillation techniques lack an in-depth investigation and analysis of VLP, and practical guidelines for VLP-oriented distillation are still not yet explored. In this paper, we present DLIP, a simple yet efficient Distilling Language-Image Pre-training framework, through which we investigate how to distill a light VLP model. Specifically, we dissect the model distillation from multiple dimensions, such as the architecture characteristics of different modules and the information transfer of different modalities. We conduct comprehensive experiments and provide insights on distilling a light but performant VLP model. Experimental results reveal that DLIP can achieve a state-of-the-art accuracy/efficiency trade-off across diverse cross-modal tasks, e.g., image-text retrieval, image captioning and visual question answering. For example, DLIP compresses BLIP by 1.9x, from 213M to 108M parameters, while achieving comparable or better performance. Furthermore, DLIP succeeds in retaining more than 95% of the performance with 22.4% parameters and 24.8% FLOPs compared to the teacher model and accelerates inference speed by 2.7x. ## Keyword: RAW ### Toward American Sign Language Processing in the Real World: Data, Tasks, and Methods - **Authors:** Bowen Shi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2308.12419 - **Pdf link:** https://arxiv.org/pdf/2308.12419 - **Abstract** Sign language, which conveys meaning through gestures, is the chief means of communication among deaf people. Recognizing sign language in natural settings presents significant challenges due to factors such as lighting, background clutter, and variations in signer characteristics. In this thesis, I study automatic sign language processing in the wild, using signing videos collected from the Internet. This thesis contributes new datasets, tasks, and methods. 
Most chapters of this thesis address tasks related to fingerspelling, an important component of sign language and yet has not been studied widely by prior work. I present three new large-scale ASL datasets in the wild: ChicagoFSWild, ChicagoFSWild+, and OpenASL. Using ChicagoFSWild and ChicagoFSWild+, I address fingerspelling recognition, which consists of transcribing fingerspelling sequences into text. I propose an end-to-end approach based on iterative attention that allows recognition from a raw video without explicit hand detection. I further show that using a Conformer-based network jointly modeling handshape and mouthing can bring performance close to that of humans. Next, I propose two tasks for building real-world fingerspelling-based applications: fingerspelling detection and search. For fingerspelling detection, I introduce a suite of evaluation metrics and a new detection model via multi-task training. To address the problem of searching for fingerspelled keywords in raw sign language videos, we propose a novel method that jointly localizes and matches fingerspelling segments to text. Finally, I will describe a benchmark for large-vocabulary open-domain sign language translation based on OpenASL. To address the challenges of sign language translation in realistic settings, we propose a set of techniques including sign search as a pretext task for pre-training and fusion of mouthing and handshape features. ### SieveNet: Selecting Point-Based Features for Mesh Networks - **Authors:** Shengchao Yuan, Yishun Dou, Rui Shi, Bingbing Ni, Zhong Zheng - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2308.12530 - **Pdf link:** https://arxiv.org/pdf/2308.12530 - **Abstract** Meshes are widely used in 3D computer vision and graphics, but their irregular topology poses challenges in applying them to existing neural network architectures. Recent advances in mesh neural networks turn to remeshing and push the boundary of pioneer methods that solely take the raw meshes as input. Although the remeshing offers a regular topology that significantly facilitates the design of mesh network architectures, features extracted from such remeshed proxies may struggle to retain the underlying geometry faithfully, limiting the subsequent neural network's capacity. To address this issue, we propose SieveNet, a novel paradigm that takes into account both the regular topology and the exact geometry. Specifically, this method utilizes structured mesh topology from remeshing and accurate geometric information from distortion-aware point sampling on the surface of the original mesh. Furthermore, our method eliminates the need for hand-crafted feature engineering and can leverage off-the-shelf network architectures such as the vision transformer. Comprehensive experimental results on classification and segmentation tasks well demonstrate the effectiveness and superiority of our method. ## Keyword: raw image There is no result
2.0
New submissions for Fri, 25 Aug 23 - ## Keyword: events ### An All Deep System for Badminton Game Analysis - **Authors:** Po-Yung Chou, Yu-Chun Lo, Bo-Zheng Xie, Cheng-Hung Lin, Yu-Yung Kao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2308.12645 - **Pdf link:** https://arxiv.org/pdf/2308.12645 - **Abstract** The CoachAI Badminton 2023 Track1 initiative aims to automatically detect events within badminton match videos. Detecting small objects, especially the shuttlecock, is of great importance and demands high precision within the challenge. Such detection is crucial for tasks like hit count, hitting time, and hitting location. However, even after revising the well-regarded shuttlecock detecting model, TrackNet, our object detection models still fall short of the desired accuracy. To address this issue, we've implemented various deep learning methods to tackle the problems arising from noisy detected data, leveraging diverse data types to improve precision. In this report, we detail the detection model modifications we've made and our approach to the 11 tasks. Notably, our system garnered a score of 0.78 out of 1.0 in the challenge. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### AdVerb: Visually Guided Audio Dereverberation - **Authors:** Sanjoy Chowdhury, Sreyan Ghosh, Subhrajyoti Dasgupta, Anton Ratnarajah, Utkarsh Tyagi, Dinesh Manocha - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Sound (cs.SD); Audio and Speech Processing (eess.AS) - **Arxiv link:** https://arxiv.org/abs/2308.12370 - **Pdf link:** https://arxiv.org/pdf/2308.12370 - **Abstract** We present AdVerb, a novel audio-visual dereverberation framework that uses visual cues in addition to the reverberant sound to estimate clean audio. Although audio-only dereverberation is a well-studied problem, our approach incorporates the complementary visual modality to perform audio dereverberation. Given an image of the environment where the reverberated sound signal has been recorded, AdVerb employs a novel geometry-aware cross-modal transformer architecture that captures scene geometry and audio-visual cross-modal relationship to generate a complex ideal ratio mask, which, when applied to the reverberant audio, predicts the clean sound. The effectiveness of our method is demonstrated through extensive quantitative and qualitative evaluations. Our approach significantly outperforms traditional audio-only and audio-visual baselines on three downstream tasks: speech enhancement, speech recognition, and speaker verification, with relative improvements in the range of 18% - 82% on the LibriSpeech test-clean set. We also achieve highly satisfactory RT60 error scores on the AVSpeech dataset.
## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### SCP: Spherical-Coordinate-based Learned Point Cloud Compression - **Authors:** Ao Luo, Linxin Song, Keisuke Nonaka, Kyohei Unno, Heming Sun, Masayuki Goto, Jiro Katto - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2308.12535 - **Pdf link:** https://arxiv.org/pdf/2308.12535 - **Abstract** In recent years, the task of learned point cloud compression has gained prominence. An important type of point cloud, the spinning LiDAR point cloud, is generated by spinning LiDAR on vehicles. This process results in numerous circular shapes and azimuthal angle invariance features within the point clouds. However, these two features have been largely overlooked by previous methodologies. In this paper, we introduce a model-agnostic method called Spherical-Coordinate-based learned Point cloud compression (SCP), designed to leverage the aforementioned features fully. Additionally, we propose a multi-level Octree for SCP to mitigate the reconstruction error for distant areas within the Spherical-coordinate-based Octree. SCP exhibits excellent universality, making it applicable to various learned point cloud compression techniques. Experimental results demonstrate that SCP surpasses previous state-of-the-art methods by up to 29.14% in point-to-point PSNR BD-Rate. ### DLIP: Distilling Language-Image Pre-training - **Authors:** Huafeng Kuang, Jie Wu, Xiawu Zheng, Ming Li, Xuefeng Xiao, Rui Wang, Min Zheng, Rongrong Ji - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2308.12956 - **Pdf link:** https://arxiv.org/pdf/2308.12956 - **Abstract** Vision-Language Pre-training (VLP) shows remarkable progress with the assistance of extremely heavy parameters, which challenges deployment in real applications. Knowledge distillation is well recognized as the essential procedure in model compression. However, existing knowledge distillation techniques lack an in-depth investigation and analysis of VLP, and practical guidelines for VLP-oriented distillation are still not yet explored. In this paper, we present DLIP, a simple yet efficient Distilling Language-Image Pre-training framework, through which we investigate how to distill a light VLP model. Specifically, we dissect the model distillation from multiple dimensions, such as the architecture characteristics of different modules and the information transfer of different modalities. We conduct comprehensive experiments and provide insights on distilling a light but performant VLP model. Experimental results reveal that DLIP can achieve a state-of-the-art accuracy/efficiency trade-off across diverse cross-modal tasks, e.g., image-text retrieval, image captioning and visual question answering. For example, DLIP compresses BLIP by 1.9x, from 213M to 108M parameters, while achieving comparable or better performance. Furthermore, DLIP succeeds in retaining more than 95% of the performance with 22.4% parameters and 24.8% FLOPs compared to the teacher model and accelerates inference speed by 2.7x. 
## Keyword: RAW ### Toward American Sign Language Processing in the Real World: Data, Tasks, and Methods - **Authors:** Bowen Shi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2308.12419 - **Pdf link:** https://arxiv.org/pdf/2308.12419 - **Abstract** Sign language, which conveys meaning through gestures, is the chief means of communication among deaf people. Recognizing sign language in natural settings presents significant challenges due to factors such as lighting, background clutter, and variations in signer characteristics. In this thesis, I study automatic sign language processing in the wild, using signing videos collected from the Internet. This thesis contributes new datasets, tasks, and methods. Most chapters of this thesis address tasks related to fingerspelling, an important component of sign language and yet has not been studied widely by prior work. I present three new large-scale ASL datasets in the wild: ChicagoFSWild, ChicagoFSWild+, and OpenASL. Using ChicagoFSWild and ChicagoFSWild+, I address fingerspelling recognition, which consists of transcribing fingerspelling sequences into text. I propose an end-to-end approach based on iterative attention that allows recognition from a raw video without explicit hand detection. I further show that using a Conformer-based network jointly modeling handshape and mouthing can bring performance close to that of humans. Next, I propose two tasks for building real-world fingerspelling-based applications: fingerspelling detection and search. For fingerspelling detection, I introduce a suite of evaluation metrics and a new detection model via multi-task training. To address the problem of searching for fingerspelled keywords in raw sign language videos, we propose a novel method that jointly localizes and matches fingerspelling segments to text. Finally, I will describe a benchmark for large-vocabulary open-domain sign language translation based on OpenASL. To address the challenges of sign language translation in realistic settings, we propose a set of techniques including sign search as a pretext task for pre-training and fusion of mouthing and handshape features. ### SieveNet: Selecting Point-Based Features for Mesh Networks - **Authors:** Shengchao Yuan, Yishun Dou, Rui Shi, Bingbing Ni, Zhong Zheng - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2308.12530 - **Pdf link:** https://arxiv.org/pdf/2308.12530 - **Abstract** Meshes are widely used in 3D computer vision and graphics, but their irregular topology poses challenges in applying them to existing neural network architectures. Recent advances in mesh neural networks turn to remeshing and push the boundary of pioneer methods that solely take the raw meshes as input. Although the remeshing offers a regular topology that significantly facilitates the design of mesh network architectures, features extracted from such remeshed proxies may struggle to retain the underlying geometry faithfully, limiting the subsequent neural network's capacity. To address this issue, we propose SieveNet, a novel paradigm that takes into account both the regular topology and the exact geometry. Specifically, this method utilizes structured mesh topology from remeshing and accurate geometric information from distortion-aware point sampling on the surface of the original mesh. 
Furthermore, our method eliminates the need for hand-crafted feature engineering and can leverage off-the-shelf network architectures such as the vision transformer. Comprehensive experimental results on classification and segmentation tasks well demonstrate the effectiveness and superiority of our method. ## Keyword: raw image There is no result
process
new submissions for fri aug keyword events an all deep system for badminton game analysis authors po yung chou yu chun lo bo zheng xie cheng hung lin yu yung kao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the coachai badminton initiative aims to automatically detect events within badminton match videos detecting small objects especially the shuttlecock is of great importance and demands high precision within the challenge such detection is crucial for tasks like hit count hitting time and hitting location however even after revising the well regarded shuttlecock detecting model tracknet our object detection models still fall short of the desired accuracy to address this issue we ve implemented various deep learning methods to tackle the problems arising from noisy detected data leveraging diverse data types to improve precision in this report we detail the detection model modifications we ve made and our approach to the tasks notably our system garnered a score of out of in the challenge keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp adverb visually guided audio dereverberation authors sanjoy chowdhury sreyan ghosh subhrajyoti dasgupta anton ratnarajah utkarsh tyagi dinesh manocha subjects computer vision and pattern recognition cs cv multimedia cs mm sound cs sd audio and speech processing eess as arxiv link pdf link abstract we present adverb a novel audio visual dereverberation framework that uses visual cues in addition to the reverberant sound to estimate clean audio although audio only dereverberation is a well studied problem our approach incorporates the complementary visual modality to perform audio dereverberation given an image of the environment where the reverberated sound signal has been recorded adverb employs a novel geometry aware cross modal transformer architecture that captures scene geometry and audio visual cross modal relationship to generate a complex ideal ratio mask which when applied to the reverberant audio predicts the clean sound the effectiveness of our method is demonstrated through extensive quantitative and qualitative evaluations our approach significantly outperforms traditional audio only and audio visual baselines on three downstream tasks speech enhancement speech recognition and speaker verification with relative improvements in the range of on the librispeech test clean set we also achieve highly satisfactory error scores on the avspeech dataset keyword image signal processing there is no result keyword image signal process there is no result keyword compression scp spherical coordinate based learned point cloud compression authors ao luo linxin song keisuke nonaka kyohei unno heming sun masayuki goto jiro katto subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract in recent years the task of learned point cloud compression has gained prominence an important type of point cloud the spinning lidar point cloud is generated by spinning lidar on vehicles this process results in numerous circular shapes and azimuthal angle invariance features within the point clouds however these two features have been largely overlooked by previous methodologies in this paper we introduce a model agnostic method called spherical coordinate based learned point cloud compression scp designed to leverage the
aforementioned features fully additionally we propose a multi level octree for scp to mitigate the reconstruction error for distant areas within the spherical coordinate based octree scp exhibits excellent universality making it applicable to various learned point cloud compression techniques experimental results demonstrate that scp surpasses previous state of the art methods by up to in point to point psnr bd rate dlip distilling language image pre training authors huafeng kuang jie wu xiawu zheng ming li xuefeng xiao rui wang min zheng rongrong ji subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract vision language pre training vlp shows remarkable progress with the assistance of extremely heavy parameters which challenges deployment in real applications knowledge distillation is well recognized as the essential procedure in model compression however existing knowledge distillation techniques lack an in depth investigation and analysis of vlp and practical guidelines for vlp oriented distillation are still not yet explored in this paper we present dlip a simple yet efficient distilling language image pre training framework through which we investigate how to distill a light vlp model specifically we dissect the model distillation from multiple dimensions such as the architecture characteristics of different modules and the information transfer of different modalities we conduct comprehensive experiments and provide insights on distilling a light but performant vlp model experimental results reveal that dlip can achieve a state of the art accuracy efficiency trade off across diverse cross modal tasks e g image text retrieval image captioning and visual question answering for example dlip compresses blip by from to parameters while achieving comparable or better performance furthermore dlip succeeds in retaining more than of the performance with parameters and flops compared to the teacher model and accelerates inference speed by keyword raw toward american sign language processing in the real world data tasks and methods authors bowen shi subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract sign language which conveys meaning through gestures is the chief means of communication among deaf people recognizing sign language in natural settings presents significant challenges due to factors such as lighting background clutter and variations in signer characteristics in this thesis i study automatic sign language processing in the wild using signing videos collected from the internet this thesis contributes new datasets tasks and methods most chapters of this thesis address tasks related to fingerspelling an important component of sign language and yet has not been studied widely by prior work i present three new large scale asl datasets in the wild chicagofswild chicagofswild and openasl using chicagofswild and chicagofswild i address fingerspelling recognition which consists of transcribing fingerspelling sequences into text i propose an end to end approach based on iterative attention that allows recognition from a raw video without explicit hand detection i further show that using a conformer based network jointly modeling handshape and mouthing can bring performance close to that of humans next i propose two tasks for building real world fingerspelling based applications fingerspelling detection and search for fingerspelling detection i 
introduce a suite of evaluation metrics and a new detection model via multi task training to address the problem of searching for fingerspelled keywords in raw sign language videos we propose a novel method that jointly localizes and matches fingerspelling segments to text finally i will describe a benchmark for large vocabulary open domain sign language translation based on openasl to address the challenges of sign language translation in realistic settings we propose a set of techniques including sign search as a pretext task for pre training and fusion of mouthing and handshape features sievenet selecting point based features for mesh networks authors shengchao yuan yishun dou rui shi bingbing ni zhong zheng subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract meshes are widely used in computer vision and graphics but their irregular topology poses challenges in applying them to existing neural network architectures recent advances in mesh neural networks turn to remeshing and push the boundary of pioneer methods that solely take the raw meshes as input although the remeshing offers a regular topology that significantly facilitates the design of mesh network architectures features extracted from such remeshed proxies may struggle to retain the underlying geometry faithfully limiting the subsequent neural network s capacity to address this issue we propose sievenet a novel paradigm that takes into account both the regular topology and the exact geometry specifically this method utilizes structured mesh topology from remeshing and accurate geometric information from distortion aware point sampling on the surface of the original mesh furthermore our method eliminates the need for hand crafted feature engineering and can leverage off the shelf network architectures such as the vision transformer comprehensive experimental results on classification and segmentation tasks well demonstrate the effectiveness and superiority of our method keyword raw image there is no result
1
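The SCP abstract in this record rests on re-expressing Cartesian LiDAR points in spherical coordinates so that the spinning-scan regularities (circular shapes, azimuthal invariance) become axis-aligned. A numpy sketch of that conversion follows; it is illustrative only, not the authors' code.

```python
# Sketch of the Cartesian -> spherical conversion at the heart of the SCP
# abstract above (range, azimuth, elevation). Illustrative only; not the
# authors' implementation.
import numpy as np

def cartesian_to_spherical(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) array of x, y, z -> (N, 3) array of r, azimuth, elevation."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)  # angle in the horizontal scan plane
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    return np.stack([r, azimuth, elevation], axis=1)

pts = np.random.default_rng(0).normal(size=(4, 3))
print(cartesian_to_spherical(pts))
```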
222,262
24,694,965,898
IssuesEvent
2022-10-19 11:23:35
kaidisn/netflix_conductor_fork
https://api.github.com/repos/kaidisn/netflix_conductor_fork
opened
CVE-2022-40160 (Medium) detected in commons-jxpath-1.3.jar
security vulnerability
## CVE-2022-40160 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-jxpath-1.3.jar</b></p></summary> <p>A Java-based implementation of XPath 1.0 that, in addition to XML processing, can inspect/modify Java object graphs (the library's explicit purpose) and even mixed Java/XML structures.</p> <p>Path to dependency file: /test-harness/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-jxpath/commons-jxpath/1.3/c22d7d0f0f40eb7059a23cfa61773a416768b137/commons-jxpath-1.3.jar</p> <p> Dependency Hierarchy: - conductor-redis-persistence-1.0 (Root Library) - dyno-queues-redis-2.0.13.jar - dyno-jedis-1.7.2-rc2.jar - dyno-contrib-1.7.2-rc2.jar - eureka-client-1.8.6.jar - netflix-eventbus-0.3.0.jar - netflix-infix-0.3.0.jar - :x: **commons-jxpath-1.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kaidisn/netflix_conductor_fork/commit/e5f3a784765077c7776dd541a3c94011c256b35b">e5f3a784765077c7776dd541a3c94011c256b35b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Those using JXPath to interpret XPath may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. This effect may support a denial of service attack. <p>Publish Date: 2022-10-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40160>CVE-2022-40160</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
True
CVE-2022-40160 (Medium) detected in commons-jxpath-1.3.jar - ## CVE-2022-40160 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-jxpath-1.3.jar</b></p></summary> <p>A Java-based implementation of XPath 1.0 that, in addition to XML processing, can inspect/modify Java object graphs (the library's explicit purpose) and even mixed Java/XML structures.</p> <p>Path to dependency file: /test-harness/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-jxpath/commons-jxpath/1.3/c22d7d0f0f40eb7059a23cfa61773a416768b137/commons-jxpath-1.3.jar</p> <p> Dependency Hierarchy: - conductor-redis-persistence-1.0 (Root Library) - dyno-queues-redis-2.0.13.jar - dyno-jedis-1.7.2-rc2.jar - dyno-contrib-1.7.2-rc2.jar - eureka-client-1.8.6.jar - netflix-eventbus-0.3.0.jar - netflix-infix-0.3.0.jar - :x: **commons-jxpath-1.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kaidisn/netflix_conductor_fork/commit/e5f3a784765077c7776dd541a3c94011c256b35b">e5f3a784765077c7776dd541a3c94011c256b35b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Those using JXPath to interpret XPath may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. This effect may support a denial of service attack. <p>Publish Date: 2022-10-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40160>CVE-2022-40160</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
non_process
cve medium detected in commons jxpath jar cve medium severity vulnerability vulnerable library commons jxpath jar a java based implementation of xpath that in addition to xml processing can inspect modify java object graphs the library s explicit purpose and even mixed java xml structures path to dependency file test harness build gradle path to vulnerable library home wss scanner gradle caches modules files commons jxpath commons jxpath commons jxpath jar dependency hierarchy conductor redis persistence root library dyno queues redis jar dyno jedis jar dyno contrib jar eureka client jar netflix eventbus jar netflix infix jar x commons jxpath jar vulnerable library found in head commit a href found in base branch master vulnerability details those using jxpath to interpret xpath may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow this effect may support a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
0
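Until a patched commons-jxpath is available, one defensive option for callers that compile user-supplied XPath is to bound expression size and nesting before parsing. A hedged sketch of such a pre-parse guard follows, in Python for illustration; the thresholds are arbitrary and the real guard would live in Java in front of JXPath.

```python
# Pre-parse guard sketch for user-supplied XPath: reject oversized or
# deeply nested expressions before they ever reach the vulnerable parser.
# Thresholds are arbitrary illustrations, not values from the advisory.
MAX_LENGTH = 512
MAX_NESTING = 32

def is_safe_xpath(expr: str) -> bool:
    if len(expr) > MAX_LENGTH:
        return False
    depth = max_depth = 0
    for ch in expr:
        if ch in "([":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch in ")]":
            depth = max(depth - 1, 0)
    return max_depth <= MAX_NESTING

assert is_safe_xpath("/app/users[name='alice']")
assert not is_safe_xpath("(" * 100 + ")" * 100)
```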
72,362
24,074,341,728
IssuesEvent
2022-09-18 15:47:10
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[πŸ› Bug]: The TAB page switch in Internet Explorer does not take effect, and no error is reported
I-defect needs-triaging
### What happened? The TAB page switch in Internet Explorer does not take effect, and no error is reported. this is my code: ` public void TestMethod1() { InternetExplorerDriverService service = InternetExplorerDriverService.CreateDefaultService(@"F:\IEDriverServer_Win32_4.3.0"); service.LogFile = "./iedriver.log"; service.LoggingLevel = InternetExplorerDriverLogLevel.Trace; WebDriver webDriver = new InternetExplorerDriver(service); webDriver.Navigate().GoToUrl("http://bing.com"); var firstHandler = webDriver.CurrentWindowHandle; webDriver.SwitchTo().NewWindow(WindowType.Tab); webDriver.Navigate().GoToUrl("http://www.bing.com"); webDriver.SwitchTo().Window(firstHandler); // Does not go to the initial window handle. webDriver.SwitchTo().Window(firstHandler); var newWindowHandle = webDriver.CurrentWindowHandle; foreach (string window in webDriver.WindowHandles) { if (newWindowHandle != window) { webDriver.SwitchTo().Window(window); break; } } Thread.Sleep(5000); webDriver.Quit(); } [iedriver.log](https://github.com/SeleniumHQ/selenium/files/9594247/iedriver.log) ` ### How can we reproduce the issue? ```shell As above ``` ### Relevant log output ```shell The log file is attached ``` ### Operating System win 10 ### Selenium version Selenium4.4 ### What are the browser(s) and version(s) where you see this issue? IE11 ### What are the browser driver(s) and version(s) where you see this issue? IEDriver 4.3 ### Are you using Selenium Grid? no
1.0
[πŸ› Bug]: The TAB page switch in Internet Explorer does not take effect, and no error is reported - ### What happened? The TAB page switch in Internet Explorer does not take effect, and no error is reported. this is my code: ` public void TestMethod1() { InternetExplorerDriverService service = InternetExplorerDriverService.CreateDefaultService(@"F:\IEDriverServer_Win32_4.3.0"); service.LogFile = "./iedriver.log"; service.LoggingLevel = InternetExplorerDriverLogLevel.Trace; WebDriver webDriver = new InternetExplorerDriver(service); webDriver.Navigate().GoToUrl("http://bing.com"); var firstHandler = webDriver.CurrentWindowHandle; webDriver.SwitchTo().NewWindow(WindowType.Tab); webDriver.Navigate().GoToUrl("http://www.bing.com"); webDriver.SwitchTo().Window(firstHandler); // Does not go to the initial window handle. webDriver.SwitchTo().Window(firstHandler); var newWindowHandle = webDriver.CurrentWindowHandle; foreach (string window in webDriver.WindowHandles) { if (newWindowHandle != window) { webDriver.SwitchTo().Window(window); break; } } Thread.Sleep(5000); webDriver.Quit(); } [iedriver.log](https://github.com/SeleniumHQ/selenium/files/9594247/iedriver.log) ` ### How can we reproduce the issue? ```shell As above ``` ### Relevant log output ```shell The log file is attached ``` ### Operating System win 10 ### Selenium version Selenium4.4 ### What are the browser(s) and version(s) where you see this issue? IE11 ### What are the browser driver(s) and version(s) where you see this issue? IEDriver 4.3 ### Are you using Selenium Grid? no
non_process
the tab page switch in internet explorer does not take effect and no error is reported what happened the tab page switch in internet explorer does not take effect and no error is reported this is my code public void internetexplorerdriverservice service internetexplorerdriverservice createdefaultservice f iedriverserver service logfile iedriver log service logginglevel internetexplorerdriverloglevel trace webdriver webdriver new internetexplorerdriver service webdriver navigate gotourl var firsthandler webdriver currentwindowhandle webdriver switchto newwindow windowtype tab webdriver navigate gotourl webdriver switchto window firsthandler does not go to the initial window handle webdriver switchto window firsthandler var newwindowhandle webdriver currentwindowhandle foreach string window in webdriver windowhandles if newwindowhandle window webdriver switchto window window break thread sleep webdriver quit how can we reproduce the issue shell as above relevant log output shell the log file is attached operating system win selenium version what are the browser s and version s where you see this issue what are the browser driver s and version s where you see this issue iedriver are you using selenium grid no
0
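For comparison with the C# repro in the record above, here is a minimal Python sketch of the same flow using the Selenium 4 bindings (it assumes a local IEDriverServer on PATH; the URLs mirror the report). With a correctly behaving driver, the final switch should focus the original tab:
```python
from selenium import webdriver

driver = webdriver.Ie()  # assumes IEDriverServer is on PATH
driver.get("http://bing.com")
first_handle = driver.current_window_handle

# Open a new tab and load a page in it (Selenium 4 API).
driver.switch_to.new_window("tab")
driver.get("http://www.bing.com")

# Switching back should focus the original tab; per the report,
# IEDriver accepted the call but the visible tab did not change.
driver.switch_to.window(first_handle)
assert driver.current_window_handle == first_handle

driver.quit()
```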
229,856
25,384,450,776
IssuesEvent
2022-11-21 20:32:05
jenkins-infra/repository-permissions-updater
https://api.github.com/repos/jenkins-infra/repository-permissions-updater
closed
Forking gerrit-trigger-plugin
needs clarification proposed-for-close hosting-request needs-fix security-audit-skip
### Repository URL https://github.com/criteo-forks/gerrit-trigger-plugin ### New Repository Name criteo-gerrit-trigger-plugin ### Description ## Criteo fork for gerrit-trigger-plugin ### GitHub users to have commit permission @emmanuelguerin @teikitel @zeralight @gmonceyron @bhou-crto ### Jenkins project users to have release permission @emmanuelguerin @teikitel @zeralight @gmonceyron @bhou-crto ### Issue tracker GitHub issues
True
Forking gerrit-trigger-plugin - ### Repository URL https://github.com/criteo-forks/gerrit-trigger-plugin ### New Repository Name criteo-gerrit-trigger-plugin ### Description ## Criteo fork for gerrit-trigger-plugin ### GitHub users to have commit permission @emmanuelguerin @teikitel @zeralight @gmonceyron @bhou-crto ### Jenkins project users to have release permission @emmanuelguerin @teikitel @zeralight @gmonceyron @bhou-crto ### Issue tracker GitHub issues
non_process
forking gerrit trigger plugin repository url new repository name criteo gerrit trigger plugin description criteo fork for gerrit trigger plugin github users to have commit permission emmanuelguerin teikitel zeralight gmonceyron bhou crto jenkins project users to have release permission emmanuelguerin teikitel zeralight gmonceyron bhou crto issue tracker github issues
0
2,356
5,165,426,604
IssuesEvent
2017-01-17 13:41:59
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
The page resource detection works incorrectly
AREA: server SYSTEM: resource processing TYPE: bug
A browser sends the `accept` header with the page MIME type when I click on a link. For example: `Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8` But we don't consider the `content-type` header from the response. We process the resource as a page even if the `content-type` header equals `application/pdf` or `text/plain` etc. See https://github.com/DevExpress/testcafe/issues/1130
1.0
The page resource detection works incorrectly - A browser sends the `accept` header with the page MIME type when I click on a link. For example: `Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8` But we don't consider the `content-type` header from the response. We process the resource as a page even if the `content-type` header equals `application/pdf` or `text/plain` etc. See https://github.com/DevExpress/testcafe/issues/1130
process
the page resource detection works incorrectly a browser sends the accept header with the page mime type when i click on a link for example accept text html application xhtml xml application xml q image webp q but we don t consider the content type header from the response we process the resource as a page even if the content type header equals application pdf or text plain etc see
1
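A hedged sketch of the fix the record above asks for: treat a response as a page only when the request's Accept header suggests a page and the response Content-Type is an HTML-like type. The type list is an illustrative assumption, not hammerhead's actual one.
```python
PAGE_CONTENT_TYPES = {"text/html", "application/xhtml+xml"}  # illustrative

def is_page_response(accept_header: str, content_type_header: str) -> bool:
    # The browser asking for a page is necessary but not sufficient;
    # the server must also answer with an HTML-like content type.
    client_expects_page = "text/html" in accept_header
    mime = content_type_header.split(";")[0].strip().lower()
    return client_expects_page and mime in PAGE_CONTENT_TYPES

# application/pdf is no longer misclassified as a page:
assert not is_page_response(
    "text/html,application/xhtml+xml,application/xml;q=0.9", "application/pdf"
)
```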
2,406
5,193,196,735
IssuesEvent
2017-01-22 16:56:58
raphym/Simulation_of_message_routing_by_intelligent_agents
https://api.github.com/repos/raphym/Simulation_of_message_routing_by_intelligent_agents
opened
Node change
being processed
To check whether a specific Node is a quorum, I have to add a boolean isBackbone with its set and get functions.
1.0
Node change - To check whether a specific Node is a quorum, I have to add a boolean isBackbone with its set and get functions.
process
node change to check whether a specific node is a quorum i have to add a boolean isbackbone with its set and get functions
1
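A minimal sketch of the change described in the record above, transposed to Python (the project itself is presumably not Python; the names follow the issue text):
```python
class Node:
    def __init__(self) -> None:
        self._is_backbone = False  # whether this node is a quorum node

    def set_is_backbone(self, value: bool) -> None:
        self._is_backbone = value

    def get_is_backbone(self) -> bool:
        return self._is_backbone
```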
88,033
10,563,399,317
IssuesEvent
2019-10-04 20:52:05
boto/boto3
https://api.github.com/repos/boto/boto3
closed
ECS.client.create_service() Incorrectly Requiring loadBalancers param
api-documentation api-question
Attempting to run ECS.client.create_service() without a load balancer. Per the documentation, loadBalancers parameter is not required: https://boto3.readthedocs.org/en/latest/reference/services/ecs.html#ECS.Client.create_service When executed via boto3, I'm seeing the following error: ``` (Error code 400) File "/usr/local/lib/python3.5/site-packages/botocore/client.py", line 310, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.5/site-packages/botocore/client.py", line 407, in _make_api_call raise ClientError(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (InvalidParameterException) when calling the CreateService operation: A role was passed, but no load balancers were present. ``` The aws_cli tool does not require the loadBalancers parameter.
1.0
ECS.client.create_service() Incorrectly Requiring loadBalancers param - Attempting to run ECS.client.create_service() without a load balancer. Per the documentation, loadBalancers parameter is not required: https://boto3.readthedocs.org/en/latest/reference/services/ecs.html#ECS.Client.create_service When executed via boto3, I'm seeing the following error: ``` (Error code 400) File "/usr/local/lib/python3.5/site-packages/botocore/client.py", line 310, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.5/site-packages/botocore/client.py", line 407, in _make_api_call raise ClientError(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (InvalidParameterException) when calling the CreateService operation: A role was passed, but no load balancers were present. ``` The aws_cli tool does not require the loadBalancers parameter.
non_process
ecs client create service incorrectly requiring loadbalancers param attempting to run ecs client create service without a load balancer per the documentation loadbalancers parameter is not required when executed via i m seeing the following error error code file usr local lib site packages botocore client py line in api call return self make api call operation name kwargs file usr local lib site packages botocore client py line in make api call raise clienterror parsed response operation name botocore exceptions clienterror an error occurred invalidparameterexception when calling the createservice operation a role was passed but no load balancers were present the aws cli tool does not require the loadbalancers parameter
0
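The error in the record above fires when the request carries a role but no load balancers. A minimal boto3 sketch that avoids the 400 by omitting both the role and the loadBalancers parameters (the cluster, service, and task-definition names are placeholders):
```python
import boto3

ecs = boto3.client("ecs")

# No loadBalancers and, crucially, no role: passing a role without
# load balancers is what triggers InvalidParameterException.
response = ecs.create_service(
    cluster="my-cluster",        # placeholder
    serviceName="my-service",    # placeholder
    taskDefinition="my-task:1",  # placeholder
    desiredCount=1,
)
print(response["service"]["serviceArn"])
```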
17,339
23,160,290,659
IssuesEvent
2022-07-29 16:56:52
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Conref to topic element is not resolved when publishing to HTML
bug priority/medium preprocess/conref
## Steps to Reproduce I publish the DITA map from this zip file [sample.zip](https://github.com/dita-ot/dita-ot/files/9198045/sample.zip) to HTML and the conref to the topic element is not resolved in the output. In the console I have the following messages: ` [conref] Loading stylesheet file:/D:/kitSite/Oxygen%20XML%20Editor%2024.1/frameworks/dita/DITA-OT3.x/plugins/org.dita.base/xsl/preprocess/conref.xsl [conref] Processing file:/C:/Users/cosmin_duna/Desktop/toDelete/sample/temp/html5/topic.dita [conref] An empty sequence is not allowed as the value of variable $first [conref] Failed to transform document: Failed to transform document: An empty sequence is not allowed as the value of variable $first` ## Possible Solution I looked into the "plugin:org.dita.base:xsl/preprocess/conrefImpl.xsl" file and the problem seems to be in the template that matches '*[@conref:orig-id]'. In my case the "orig-id" variable is the id of a topic, and the content (the 'content' variable) where this id is searched contains a list of elements that skips the topic elements. So, it seems to work if I change the "first" variable to this: `<xsl:variable name="first" select="($topic//@conref:orig-id[. = $orig-id])[1]" as="attribute()"/>`
1.0
Conref to topic element is not resolved when publishing to HTML - ## Steps to Reproduce I publish the DITA map from this zip file [sample.zip](https://github.com/dita-ot/dita-ot/files/9198045/sample.zip) to HTML and the conref to the topic element is not resolved in the output. In the console I have the following messages: ` [conref] Loading stylesheet file:/D:/kitSite/Oxygen%20XML%20Editor%2024.1/frameworks/dita/DITA-OT3.x/plugins/org.dita.base/xsl/preprocess/conref.xsl [conref] Processing file:/C:/Users/cosmin_duna/Desktop/toDelete/sample/temp/html5/topic.dita [conref] An empty sequence is not allowed as the value of variable $first [conref] Failed to transform document: Failed to transform document: An empty sequence is not allowed as the value of variable $first` ## Possible Solution I looked into the "plugin:org.dita.base:xsl/preprocess/conrefImpl.xsl" file and the problem seems to be in the template that matches '*[@conref:orig-id]'. In my case the "orig-id" variable is the id of a topic, and the content (the 'content' variable) where this id is searched contains a list of elements that skips the topic elements. So, it seems to work if I change the "first" variable to this: `<xsl:variable name="first" select="($topic//@conref:orig-id[. = $orig-id])[1]" as="attribute()"/>`
process
conref to topic element is not resolved when publishing to html steps to reproduce i publish the dita map from this zip file to html and the conref to the topic element is not resolved in the output in the console i have the following messages loading stylesheet file d kitsite oxygen frameworks dita dita x plugins org dita base xsl preprocess conref xsl processing file c users cosmin duna desktop todelete sample temp topic dita an empty sequence is not allowed as the value of variable first failed to transform document failed to transform document an empty sequence is not allowed as the value of variable first possible solution i looked into the plugin org dita base xsl preprocess conrefimpl xsl file and the problem seems to be in the template that matches in my case the orig id variable is the id of a topic and the content the content variable where this id is searched contains a list of elements that skips the topic elements so it seems to work if i change the first variable to this
1
16,234
20,781,149,084
IssuesEvent
2022-03-16 14:53:30
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
CockroachDB + migrate: figure out source and impact of slowness
process/candidate team/schema topic: cockroachdb
From first hand experience, using `prisma migrate dev` with cockroachdb is slow. - Does it happen on all versions/deployments of cockroachdb? The prisma test docker image has a bunch of custom settings. - Where does the slowness come from? - Do we still consider it an acceptable user experience? - If not, use findings from previous steps to come up with potential improvements. - As a last resort, figure out ways to provide more frequent feedback on command progress
1.0
CockroachDB + migrate: figure out source and impact of slowness - From first hand experience, using `prisma migrate dev` with cockroachdb is slow. - Does it happen on all versions/deployments of cockroachdb? The prisma test docker image has a bunch of custom settings. - Where does the slowness come from? - Do we still consider it an acceptable user experience? - If not, use findings from previous steps to come up with potential improvements. - As a last resort, figure out ways to provide more frequent feedback on command progress
process
cockroachdb migrate figure out source and impact of slowness from first hand experience using prisma migrate dev with cockroachdb is slow does it happen on all versions deployments of cockroachdb the prisma test docker image has a bunch of custom settings where does the slowness come from do we still consider it an acceptable user experience if not use findings from previous steps to come up with potential improvements as a last resort figure out ways to provide more frequent feedback on command progress
1
99,857
21,046,956,570
IssuesEvent
2022-03-31 16:54:39
microsoft/vscode-pylint
https://api.github.com/repos/microsoft/vscode-pylint
closed
Update to pylint 2.13.x
code health
We released pylint `2.13.0` last week: https://pylint.pycqa.org/en/latest/whatsnew/2.13.html Besides some new features and bugfixes, it also includes major improvements for error ranges. I.e. errors on function and classes are no longer emitted for the whole object, instead only for the first line. https://pypi.org/project/pylint/ https://pypi.org/project/astroid/
1.0
Update to pylint 2.13.x - We released pylint `2.13.0` last week: https://pylint.pycqa.org/en/latest/whatsnew/2.13.html Besides some new features and bugfixes, it also includes major improvements for error ranges. I.e. errors on function and classes are no longer emitted for the whole object, instead only for the first line. https://pypi.org/project/pylint/ https://pypi.org/project/astroid/
non_process
update to pylint x we released pylint last week besides some new features and bugfixes it also includes major improvements for error ranges i e errors on function and classes are no longer emitted for the whole object instead only for the first line
0
119,299
4,768,104,030
IssuesEvent
2016-10-26 08:00:17
PowerlineApp/powerline-mobile
https://api.github.com/repos/PowerlineApp/powerline-mobile
closed
Create Leader Content: Show only groups where user can create leader content
enhancement P2 - Medium Priority
"Make a leader content" option in top right corner in Main screen should be visible only when following conditions are met: * when some particual group is selected in Activity newsfeed, and the user has right to create leader content there (i.e. is leader or owner) * or, when no group is selected in Activity newsfeed, and there exists at least one group where the user is allowed to create leader content. Currently there exists `GET /api/groups/is-manager/{id}` api call which can tell if the current user is manager of a particual group or not, however this is not effective approach, because I would rather know that information in bulk for all the groups the user is member of (because in create form detail one can pick from list of groups and it would be innefective to make http request for each group to find out if user can create there poll or not). Therefore my suggestion is to enrich `GET /api/v2/user/groups` so that it also provides information whether user is manager/owner of the group. I already make request to `GET /api/v2/user/groups` to get the list of groups to display in activity newsfeed filter so I will get it almost for free.
1.0
Create Leader Content: Show only groups where user can create leader content - The "Make a leader content" option in the top right corner of the Main screen should be visible only when the following conditions are met: * when some particular group is selected in the Activity newsfeed, and the user has the right to create leader content there (i.e. is leader or owner) * or, when no group is selected in the Activity newsfeed, and there exists at least one group where the user is allowed to create leader content. Currently there exists a `GET /api/groups/is-manager/{id}` API call which can tell if the current user is a manager of a particular group or not; however, this is not an effective approach, because I would rather know that information in bulk for all the groups the user is a member of (because in the create form detail one can pick from a list of groups, and it would be ineffective to make an HTTP request for each group to find out whether the user can create a poll there or not). Therefore my suggestion is to enrich `GET /api/v2/user/groups` so that it also provides information on whether the user is a manager/owner of the group. I already make a request to `GET /api/v2/user/groups` to get the list of groups to display in the Activity newsfeed filter, so I will get it almost for free.
non_process
create leader content show only groups where user can create leader content make a leader content option in top right corner in main screen should be visible only when following conditions are met when some particular group is selected in activity newsfeed and the user has right to create leader content there i e is leader or owner or when no group is selected in activity newsfeed and there exists at least one group where the user is allowed to create leader content currently there exists get api groups is manager id api call which can tell if the current user is manager of a particular group or not however this is not effective approach because i would rather know that information in bulk for all the groups the user is member of because in create form detail one can pick from list of groups and it would be ineffective to make http request for each group to find out if user can create there poll or not therefore my suggestion is to enrich get api user groups so that it also provides information whether user is manager owner of the group i already make request to get api user groups to get the list of groups to display in activity newsfeed filter so i will get it almost for free
0
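A hedged sketch of how a client could consume the enrichment proposed in the record above. The user_role field and the role names are hypothetical: they illustrate the suggested bulk flag, not an existing Powerline API, and the response is assumed to be a JSON array of group objects.
```python
import requests

def groups_allowing_leader_content(api_base: str, token: str) -> list:
    """Return the groups where the user may create leader content.

    Assumes GET /api/v2/user/groups is enriched with a hypothetical
    'user_role' field; 'owner' and 'manager' are assumed role names.
    """
    resp = requests.get(
        f"{api_base}/api/v2/user/groups",
        headers={"Authorization": f"Bearer {token}"},  # auth scheme assumed
        timeout=10,
    )
    resp.raise_for_status()
    return [g for g in resp.json() if g.get("user_role") in ("owner", "manager")]
```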
40,938
12,800,504,758
IssuesEvent
2020-07-02 17:13:50
zcash/ZcashLightClientKit
https://api.github.com/repos/zcash/ZcashLightClientKit
closed
Null bytes in strings effectively truncate the string from librustzcash's point of view
security-finding security-mvp2
The relevant code: ``` static func isValidShieldedAddress(_ address: String) throws -> Bool { guard zcashlc_is_valid_shielded_address([CChar](address.utf8CString)) else { if let error = lastError() { throw error } return false } return true } ``` It's possible for `address` to contain null characters, which are included in `address.utf8CString`. The rust layer will attempt to build a `CStr` out of a pointer to the first character, which will look until it finds the first null byte, so it will think the string ends earlier compared to what the Swift code thinks. I think it would be possible to make a QR code with an invalid address, but one that starts with a valid address then a null byte, like this: ``` "zs1vsfpknuh5afwe2wl7v0uwlx2n87n57r905472zym8zq20a8k56mvvu6g0x2g9atgqlkru5w7cxa\0something else that makes the address invalid" ``` When the iOS app code scans the QR code and checks `isValidAddress()`, it will return true and think it's a valid address because the rust layer only sees up until that null byte. Demo code: ``` import Foundation import Glibc let s = "Hel\0lo!" let bytes = s.utf8CString print(bytes) // [72, 101, 108, 0, 108, 111, 33, 0] bytes.withUnsafeBufferPointer { ptr in print(strlen(ptr.baseAddress!)) // 3 } ```
True
Null bytes in strings effectively truncate the string from librustzcash's point of view - The relevant code: ``` static func isValidShieldedAddress(_ address: String) throws -> Bool { guard zcashlc_is_valid_shielded_address([CChar](address.utf8CString)) else { if let error = lastError() { throw error } return false } return true } ``` It's possible for `address` to contain null characters, which are included in `address.utf8CString`. The rust layer will attempt to build a `CStr` out of a pointer to the first character, which will look until it finds the first null byte, so it will think the string ends earlier compared to what the Swift code thinks. I think it would be possible to make a QR code with an invalid address, but one that starts with a valid address then a null byte, like this: ``` "zs1vsfpknuh5afwe2wl7v0uwlx2n87n57r905472zym8zq20a8k56mvvu6g0x2g9atgqlkru5w7cxa\0something else that makes the address invalid" ``` When the iOS app code scans the QR code and checks `isValidAddress()`, it will return true and think it's a valid address because the rust layer only sees up until that null byte. Demo code: ``` import Foundation import Glibc let s = "Hel\0lo!" let bytes = s.utf8CString print(bytes) // [72, 101, 108, 0, 108, 111, 33, 0] bytes.withUnsafeBufferPointer { ptr in print(strlen(ptr.baseAddress!)) // 3 } ```
non_process
null bytes in strings effectively truncate the string from librustzcash s point of view the relevant code static func isvalidshieldedaddress address string throws bool guard zcashlc is valid shielded address address else if let error lasterror throw error return false return true it s possible for address to contain null characters which are included in address the rust layer will attempt to build a cstr out of a pointer to the first character which will look until it finds the first null byte so it will think the string ends earlier compared to what the swift code thinks i think it would be possible to make a qr code with an invalid address but one that starts with a valid address then a null byte like this else that makes the address invalid when the ios app code scans the qr code and checks isvalidaddress it will return true and think it s a valid address because the rust layer only sees up until that null byte demo code import foundation import glibc let s hel let bytes s print bytes bytes withunsafebufferpointer ptr in print strlen ptr baseaddress
0
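The truncation in the record above can also be demonstrated from Python via libc's strlen, mirroring the Swift demo (a POSIX system with a loadable libc is assumed): the C view of the string stops at the first null byte.
```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))  # POSIX assumed
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

s = "Hel\0lo!".encode("utf-8")
print(len(s))          # 7 -- what the caller thinks it sent
print(libc.strlen(s))  # 3 -- what C code building a C string will see

# A defensive pre-check before crossing an FFI boundary:
def has_interior_null(address: str) -> bool:
    return "\0" in address
```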
106,290
4,269,485,005
IssuesEvent
2016-07-13 00:41:08
regan-sarwas/Observer
https://api.github.com/repos/regan-sarwas/Observer
opened
Slashes in survey name
arena:office priority:high type:bug
Slashes in a survey name result in a poz file with an invalid filename. Either restrict the survey name, or translate the poz name to something acceptable. Also check other suspect characters.
1.0
Slashes in survey name - Slashes in a survey name result in a poz file with an invalid filename. Either restrict the survey name, or translate the poz name to something acceptable. Also check other suspect characters.
non_process
slashes in survey name slashes in a survey name result in a poz file with an invalid filename either restrict the survey name or translate the poz name to something acceptable also check other suspect characters
0
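A minimal sketch of the "translate the name" option from the record above; the unsafe character set and the replacement character are illustrative assumptions, not the app's actual rules.
```python
import re

# Slashes plus other characters that commonly break file names
# on major filesystems (illustrative set).
_UNSAFE = re.compile(r'[/\\:*?"<>|\x00]')

def survey_name_to_filename(name: str, extension: str = ".poz") -> str:
    cleaned = _UNSAFE.sub("_", name).strip().strip(".")
    return (cleaned or "survey") + extension

print(survey_name_to_filename("North/South Transect"))  # North_South Transect.poz
```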
46,101
6,008,419,897
IssuesEvent
2017-06-06 07:47:32
geetsisbac/E55Q3NOULHJ77TLQU57Z4QQM
https://api.github.com/repos/geetsisbac/E55Q3NOULHJ77TLQU57Z4QQM
reopened
hx93C+k8j73elKo5uflDwGQoa9w+NioNu4iwCgtAHiwMNMgDfbvO3DGBFZscRA4NwcobaOXE9B7jAlQYDbWnEZskA0/tG/w44h8+jcwEggP9ebFeY+n6vNXJTqKqAQW45NNSf/TpnVOoS4/q+lzdw0UhzsruqbEFLoJ1HMYEf9k=
design
JDfEyiZe7tvY6LutYB+KEU89iDbXvtNRrvpSCxfsSehgRXC1oiRQZ81wuXUkZz3FGhN980898U+5sBe+cp8BM6pkD1wZSG3sFuzws0Pmu7aVcys9nrneXbFttzMX2nZyLSt4zWUhmUy+fel620jJooNyyAn06xaoHnWAqCZuGER67lYpirrQ0cHIFZZDweX9M3iZwEcv9CBhocYaCf+VOHwktIpSNNIScjoaBCLloq2rpUlRqISYdJfN+Wzo5V7b2i63+pjzIzNckQZTIZg5Dj6wUH5x52gSvERUQCBwlXkDpqfGLqMvI0JVsEvq6PR+xv/yxw7+rc1hzNM5IlHOD5c8eWJA+nVVRn562zw9GJ2jqV7cXkOaycAMy6C2CwakUFMKys+rQMWomDVI1He+dilEDBS1XkMfULEgI5Z4uwz8Xxp7Ceprk+lf6dljc48G6JkVb1Hg3zlfzHhr5hgv5ASU0c4vU2q8E+0fvDFBEgHIkbNwDuPIsnhbEZH96BYV74T3co7J4oX3+OC+/SUPANlc5811DG02M8jWCE11JcOln9IH9hOJICnoyHTH8LcWkm+NF7scg/BLvY+3TBcBW2u2Q2gn/6Cd/no7t2MQpl18HEMmHthMFahblNvJCqOqGebSgQSklJlQII0UTL2MxtO9hThdid+FvoZbe/WE5XUJ7Ths+rtyLzoKR/5prW11Jua53oj2L1ktUs473MtmbhRFnq5y8y8fF6xraFuxbltuT+WW2oFaGlxeivyj1+cl
1.0
hx93C+k8j73elKo5uflDwGQoa9w+NioNu4iwCgtAHiwMNMgDfbvO3DGBFZscRA4NwcobaOXE9B7jAlQYDbWnEZskA0/tG/w44h8+jcwEggP9ebFeY+n6vNXJTqKqAQW45NNSf/TpnVOoS4/q+lzdw0UhzsruqbEFLoJ1HMYEf9k= - JDfEyiZe7tvY6LutYB+KEU89iDbXvtNRrvpSCxfsSehgRXC1oiRQZ81wuXUkZz3FGhN980898U+5sBe+cp8BM6pkD1wZSG3sFuzws0Pmu7aVcys9nrneXbFttzMX2nZyLSt4zWUhmUy+fel620jJooNyyAn06xaoHnWAqCZuGER67lYpirrQ0cHIFZZDweX9M3iZwEcv9CBhocYaCf+VOHwktIpSNNIScjoaBCLloq2rpUlRqISYdJfN+Wzo5V7b2i63+pjzIzNckQZTIZg5Dj6wUH5x52gSvERUQCBwlXkDpqfGLqMvI0JVsEvq6PR+xv/yxw7+rc1hzNM5IlHOD5c8eWJA+nVVRn562zw9GJ2jqV7cXkOaycAMy6C2CwakUFMKys+rQMWomDVI1He+dilEDBS1XkMfULEgI5Z4uwz8Xxp7Ceprk+lf6dljc48G6JkVb1Hg3zlfzHhr5hgv5ASU0c4vU2q8E+0fvDFBEgHIkbNwDuPIsnhbEZH96BYV74T3co7J4oX3+OC+/SUPANlc5811DG02M8jWCE11JcOln9IH9hOJICnoyHTH8LcWkm+NF7scg/BLvY+3TBcBW2u2Q2gn/6Cd/no7t2MQpl18HEMmHthMFahblNvJCqOqGebSgQSklJlQII0UTL2MxtO9hThdid+FvoZbe/WE5XUJ7Ths+rtyLzoKR/5prW11Jua53oj2L1ktUs473MtmbhRFnq5y8y8fF6xraFuxbltuT+WW2oFaGlxeivyj1+cl
non_process
tg q xv oc blvy fvozbe rtylzokr cl
0
14,756
18,040,261,151
IssuesEvent
2021-09-18 00:32:51
Leviatan-Analytics/LA-data-processing
https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
closed
Discuss curation techniques utility with LOL analyst [1]
Data Processing Week 3 Sprint 4
Discuss with the analyst different ways to enhance the recognition when player names overlap based on the previous research.
1.0
Discuss curation techniques utility with LOL analyst [1] - Discuss with the analyst different ways to enhance the recognition when player names overlap based on the previous research.
process
discuss curation techniques utility with lol analyst discuss with the analyst different ways to enhance the recognition when player names overlap based on the previous research
1
9,963
13,001,915,657
IssuesEvent
2020-07-24 01:22:08
LD4P/discovery
https://api.github.com/repos/LD4P/discovery
closed
Identify needs/tasks for updating the indexing process and content itself
Data: sources and linkages Indexing process and content
Are there changes to the index itself that we would like to experiment and try out?
1.0
Identify needs/tasks for updating the indexing process and content itself - Are there changes to the index itself that we would like to experiment and try out?
process
identify needs tasks for updating the indexing process and content itself are there changes to the index itself that we would like to experiment and try out
1
8,106
11,300,414,888
IssuesEvent
2020-01-17 13:34:08
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
opened
Unclear Introspection error message: Error parsing attribute "@id": Fields that are marked as id must be required.
process/candidate topic: introspection
Introspecting SQLite: ``` CREATE TABLE "playlist_track" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, "id" INTEGER PRIMARY KEY AUTOINCREMENT, FOREIGN KEY("PlaylistId") REFERENCES "playlists"("PlaylistId") ON DELETE NO ACTION ON UPDATE NO ACTION, FOREIGN KEY("TrackId") REFERENCES "tracks"("TrackId") ON DELETE NO ACTION ON UPDATE NO ACTION ) ``` Creates error message: ``` ERROR Oops, an unexpected error occured! Schema parsing error: Error parsing attribute "@id": Fields that are marked as id must be required. --> schema.prisma:126 | 125 | TrackId tracks 126 | id Int? @id | ``` Error message does not include name of table/model for context.
1.0
Unclear Introspection error message: Error parsing attribute "@id": Fields that are marked as id must be required. - Introspecting SQLite: ``` CREATE TABLE "playlist_track" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, "id" INTEGER PRIMARY KEY AUTOINCREMENT, FOREIGN KEY("PlaylistId") REFERENCES "playlists"("PlaylistId") ON DELETE NO ACTION ON UPDATE NO ACTION, FOREIGN KEY("TrackId") REFERENCES "tracks"("TrackId") ON DELETE NO ACTION ON UPDATE NO ACTION ) ``` Creates error message: ``` ERROR Oops, an unexpected error occured! Schema parsing error: Error parsing attribute "@id": Fields that are marked as id must be required. --> schema.prisma:126 | 125 | TrackId tracks 126 | id Int? @id | ``` Error message does not include name of table/model for context.
process
unclear introspection error message error parsing attribute id fields that are marked as id must be required introspecting sqlite create table playlist track playlistid integer not null trackid integer not null id integer primary key autoincrement foreign key playlistid references playlists playlistid on delete no action on update no action foreign key trackid references tracks trackid on delete no action on update no action creates error message error oops an unexpected error occured schema parsing error error parsing attribute id fields that are marked as id must be required schema prisma trackid tracks id int id error message does not include name of table model for context
1
10,729
13,531,299,305
IssuesEvent
2020-09-15 21:22:33
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
internal error with self relation query
bug/2-confirmed engines/query engine kind/bug process/next-milestone team/engines topic: broken query
<!-- Thanks for helping us improve Prisma! πŸ™ Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description self relation query doesn't work ## How to reproduce ```graphql query { findManyPost(where: { morePosts: { some: { title: { equals: "my" } } } }) { id title userId postId } } ``` results in ``` { "errors": [ { "error": "Error occurred during query execution:\nConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: \"ERROR\", parsed_severity: Some(Error), code: SqlState(\"42601\"), message: \"syntax error at or near \\\")\\\"\", detail: None, hint: None, position: Some(Original(259)), where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some(\"scan.l\"), line: Some(1149), routine: Some(\"scanner_yyerror\") }) }) })", "user_facing_error": { "is_panic": false, "message": "Error occurred during query execution:\nConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: \"ERROR\", parsed_severity: Some(Error), code: SqlState(\"42601\"), message: \"syntax error at or near \\\")\\\"\", detail: None, hint: None, position: Some(Original(259)), where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some(\"scan.l\"), line: Some(1149), routine: Some(\"scanner_yyerror\") }) }) })", "backtrace": null } } ] } ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Prisma information ```prisma generator photon { provider = "prisma-client-js" } datasource db { provider = "postgresql" url = env("DB") } model User { id String @default(cuid()) @id posts Post[] } model Post { id String @default(cuid()) @id title String morePosts Post[] @relation("PostToPost") User User? @relation(fields: [userId], references: [id]) userId String? Post Post? @relation("PostToPost", fields: [postId], references: [id]) postId String? } ``` ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> postgres - Node.js version: <!--[Run `node -v` to see your Node.js version]--> - Prisma version: <!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]--> ``` @prisma/cli : 2.5.0 Current platform : darwin Query Engine : query-engine 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/query-engine-darwin) Migration Engine : migration-engine-cli 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/migration-engine-darwin) Introspection Engine : introspection-core 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/introspection-engine-darwin) Format Binary : prisma-fmt 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/prisma-fmt-darwin) Studio : 0.259.0 ``` originally reported at https://github.com/prisma/prisma-client-go/issues/206
1.0
internal error with self relation query - <!-- Thanks for helping us improve Prisma! πŸ™ Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description self relation query doesn't work ## How to reproduce ```graphql query { findManyPost(where: { morePosts: { some: { title: { equals: "my" } } } }) { id title userId postId } } ``` results in ``` { "errors": [ { "error": "Error occurred during query execution:\nConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: \"ERROR\", parsed_severity: Some(Error), code: SqlState(\"42601\"), message: \"syntax error at or near \\\")\\\"\", detail: None, hint: None, position: Some(Original(259)), where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some(\"scan.l\"), line: Some(1149), routine: Some(\"scanner_yyerror\") }) }) })", "user_facing_error": { "is_panic": false, "message": "Error occurred during query execution:\nConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: \"ERROR\", parsed_severity: Some(Error), code: SqlState(\"42601\"), message: \"syntax error at or near \\\")\\\"\", detail: None, hint: None, position: Some(Original(259)), where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some(\"scan.l\"), line: Some(1149), routine: Some(\"scanner_yyerror\") }) }) })", "backtrace": null } } ] } ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Prisma information ```prisma generator photon { provider = "prisma-client-js" } datasource db { provider = "postgresql" url = env("DB") } model User { id String @default(cuid()) @id posts Post[] } model Post { id String @default(cuid()) @id title String morePosts Post[] @relation("PostToPost") User User? @relation(fields: [userId], references: [id]) userId String? Post Post? @relation("PostToPost", fields: [postId], references: [id]) postId String? } ``` ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> postgres - Node.js version: <!--[Run `node -v` to see your Node.js version]--> - Prisma version: <!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]--> ``` @prisma/cli : 2.5.0 Current platform : darwin Query Engine : query-engine 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/query-engine-darwin) Migration Engine : migration-engine-cli 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/migration-engine-darwin) Introspection Engine : introspection-core 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/introspection-engine-darwin) Format Binary : prisma-fmt 9a670138b1db276001d785a2adcba1584c869d24 (at node_modules/@prisma/cli/prisma-fmt-darwin) Studio : 0.259.0 ``` originally reported at https://github.com/prisma/prisma-client-go/issues/206
process
internal error with self relation query thanks for helping us improve prisma πŸ™ please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description self relation query doesn t work how to reproduce graphql query findmanypost where moreposts some title equals my id title userid postid results in errors error error occurred during query execution nconnectorerror connectorerror user facing error none kind queryerror error kind db cause some dberror severity error parsed severity some error code sqlstate message syntax error at or near detail none hint none position some original where none schema none table none column none datatype none constraint none file some scan l line some routine some scanner yyerror user facing error is panic false message error occurred during query execution nconnectorerror connectorerror user facing error none kind queryerror error kind db cause some dberror severity error parsed severity some error code sqlstate message syntax error at or near detail none hint none position some original where none schema none table none column none datatype none constraint none file some scan l line some routine some scanner yyerror backtrace null expected behavior prisma information prisma generator photon provider prisma client js datasource db provider postgresql url env db model user id string default cuid id posts post model post id string default cuid id title string moreposts post relation posttopost user user relation fields references userid string post post relation posttopost fields references postid string environment setup os database postgres node js version prisma version prisma cli current platform darwin query engine query engine at node modules prisma cli query engine darwin migration engine migration engine cli at node modules prisma cli migration engine darwin introspection engine introspection core at node modules prisma cli introspection engine darwin format binary prisma fmt at node modules prisma cli prisma fmt darwin studio originally reported at
1
17,474
23,298,432,224
IssuesEvent
2022-08-07 00:15:07
mdsreq-fga-unb/2022.1-GDS
https://api.github.com/repos/mdsreq-fga-unb/2022.1-GDS
closed
Development Process - Activities
Processo de Desenvolvimento
**Description** In several activities there is a column named **feedback** which has the values **seconds**, **minutes**, and **days**. What is this? Is it when the feedback will be received? Does this information make sense? What will it be used for?
1.0
Development Process - Activities - **Description** In several activities there is a column named **feedback** which has the values **seconds**, **minutes**, and **days**. What is this? Is it when the feedback will be received? Does this information make sense? What will it be used for?
process
development process activities description in several activities there is a column named feedback which has the values seconds minutes and days what is this is it when the feedback will be received does this information make sense what will it be used for
1
569,696
17,015,829,381
IssuesEvent
2021-07-02 11:54:28
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
opened
Usability - Users can't finish/exit creating a geometry
Component: potlatch2 Priority: major Type: enhancement
**[Submitted to the original trac issue database at 8.34pm, Wednesday, 1st February 2012]** Through my experience with newbie users, most of them have problems when it comes to adding features. Specifically, they quickly understand they can simply click on the map to create a feature. What they have problems with is that most click once to create a point, and then move on to add attributes. They don't understand that they first need to finish the 'adding geometry' step by clicking on the same point again before moving on to adding attributes. So they create a line in the end while trying to edit attributes. When they are creating ways, the same problem happens: they can't finish the adding geometry step. This is a usability issue for which I do not have a clear answer, but some intuitive action for 'finishing editing geometry' should be found and applied consistently for editing points, lines, or closed ways.
1.0
Usability - Users can't finish/exit creating a geometry - **[Submitted to the original trac issue database at 8.34pm, Wednesday, 1st February 2012]** Through my experience with newbie users, most of them have problems when it comes to adding features. Specifically, they quickly understand they can simply click on the map to create a feature. What they have problems with is that most click once to create a point, and then move on to add attributes. They don't understand that they first need to finish the 'adding geometry' step by clicking on the same point again before moving on to adding attributes. So they create a line in the end while trying to edit attributes. When they are creating ways, the same problem happens: they can't finish the adding geometry step. This is a usability issue for which I do not have a clear answer, but some intuitive action for 'finishing editing geometry' should be found and applied consistently for editing points, lines, or closed ways.
non_process
usability users can t finish exit creating a geometry through my experience with newbie users most of them have problems when it comes to adding features specifically they quickly understand they can simply click on the map to create a feature what they have problems with is that most click once to create a point and then move on to add attributes they don t understand that they first need to finish the adding geometry step by clicking on the same point again before moving on to adding attributes so they create a line in the end while trying to edit attributes when they are creating ways the same problem happens they can t finish the adding geometry step this is a usability issue for which i do not have a clear answer but some intuitive action for finishing editing geometry should be found and applied consistently for editing points lines or closed ways
0
89,079
11,195,438,148
IssuesEvent
2020-01-03 06:27:14
jackfirth/rebellion
https://api.github.com/repos/jackfirth/rebellion
closed
Support pattern matching on type instances
enhancement needs api design pattern matching
The `rebellion/type` libraries should cooperate with the `racket/match` library so that pattern matching on instances of custom types is easy.
1.0
Support pattern matching on type instances - The `rebellion/type` libraries should cooperate with the `racket/match` library so that pattern matching on instances of custom types is easy.
non_process
support pattern matching on type instances the rebellion type libraries should cooperate with the racket match library so that pattern matching on instances of custom types is easy
0
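For readers who don't know Racket, the goal in the record above is analogous to making user-defined classes work with structural pattern matching; a conceptual Python 3.10+ analog (not rebellion's API) looks like this:
```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int  # dataclasses provide __match_args__ automatically

def describe(p: Point) -> str:
    match p:
        case Point(x=0, y=0):
            return "origin"
        case Point(x=0, y=y):
            return f"on the y-axis at {y}"
        case Point(x=x, y=0):
            return f"on the x-axis at {x}"
        case Point():
            return "somewhere else"

print(describe(Point(0, 3)))  # on the y-axis at 3
```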
15,505
19,703,264,677
IssuesEvent
2022-01-12 18:52:10
googleapis/java-appengine-admin
https://api.github.com/repos/googleapis/java-appengine-admin
opened
Your .repo-metadata.json file has a problem πŸ€’
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan πŸ“ˆ: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'appengine-admin' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem πŸ€’ - You have a problem with your .repo-metadata.json file: Result of scan πŸ“ˆ: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'appengine-admin' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem πŸ€’ you have a problem with your repo metadata json file result of scan πŸ“ˆ release level must be equal to one of the allowed values in repo metadata json api shortname appengine admin invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
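A hedged sketch of the kind of check behind the lint message in the record above; the allowed release_level values and the api_shortname pattern are assumptions, not the automation's real rules.
```python
import json
import re

ALLOWED_RELEASE_LEVELS = {"stable", "preview"}           # assumed values
API_SHORTNAME_PATTERN = re.compile(r"^[a-z][a-z0-9]*$")  # assumed: no hyphens

def lint_repo_metadata(path: str = ".repo-metadata.json") -> list:
    with open(path) as f:
        meta = json.load(f)
    problems = []
    if meta.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be one of the allowed values")
    if not API_SHORTNAME_PATTERN.match(meta.get("api_shortname", "")):
        problems.append(f"api_shortname {meta.get('api_shortname')!r} invalid")
    return problems
```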
674,541
23,055,981,135
IssuesEvent
2022-07-25 04:39:05
space-wizards/space-station-14
https://api.github.com/repos/space-wizards/space-station-14
reopened
Inconsistent usage of "Centcom" and "Centcomm"
Priority: 2-Before Release Issue: Needs Cleanup Difficulty: 1-Easy Beginner Friendly
## Description Inconsistent usage of "Centcom" and "Centcomm". Which one is true one? Capitalization-wise as well? `grep -iR "centcom"` reveals both being in use. Seeing: - Centcom - Centcomm - CentCom - CentComm **Reproduction** Run grep on repo.
1.0
Inconsistent usage of "Centcom" and "Centcomm" - ## Description Inconsistent usage of "Centcom" and "Centcomm". Which one is true one? Capitalization-wise as well? `grep -iR "centcom"` reveals both being in use. Seeing: - Centcom - Centcomm - CentCom - CentComm **Reproduction** Run grep on repo.
non_process
inconsistent usage of centcom and centcomm description inconsistent usage of centcom and centcomm which one is true one capitalization wise as well grep ir centcom reveals both being in use seeing centcom centcomm centcom centcomm reproduction run grep on repo
0
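A small stand-in for the grep command in the record above, counting each spelling variant across the repo so the dominant one can be chosen:
```python
import pathlib
import re
from collections import Counter

VARIANT = re.compile(r"centcomm?", re.IGNORECASE)

def count_variants(root: str = ".") -> Counter:
    counts: Counter = Counter()
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        counts.update(m.group(0) for m in VARIANT.finditer(text))
    return counts

print(count_variants())  # e.g. Counter({'CentComm': 120, 'Centcom': 87, ...})
```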
308,937
26,639,492,878
IssuesEvent
2023-01-25 02:20:02
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
Test profiles feature for shipping
testplan-item
Refs: https://github.com/microsoft/vscode/issues/116740 - [x] macOS @rebornix - [x] linux @lramos15 - [x] windows @karthiknadig Complexity: 4 [Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23172051%0A%0A&assignees=sandy081) --- Use unreleased 1.75.0 stable build from our builds page for testing. As we are planning to ship profiles to stable as GA feature, please do a smoke and bit of exploratory testing of profiles feature. Try to use all possible functionalities of profiles feature and make sure they are working as expected. You can access all profiles functionality from global activity menu (gear). Provide any UX suggestions you come across while using the feature.
1.0
Test profiles feature for shipping - Refs: https://github.com/microsoft/vscode/issues/116740 - [x] macOS @rebornix - [x] linux @lramos15 - [x] windows @karthiknadig Complexity: 4 [Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23172051%0A%0A&assignees=sandy081) --- Use unreleased 1.75.0 stable build from our builds page for testing. As we are planning to ship profiles to stable as GA feature, please do a smoke and bit of exploratory testing of profiles feature. Try to use all possible functionalities of profiles feature and make sure they are working as expected. You can access all profiles functionality from global activity menu (gear). Provide any UX suggestions you come across while using the feature.
non_process
test profiles feature for shipping refs macos rebornix linux windows karthiknadig complexity use unreleased stable build from our builds page for testing as we are planning to ship profiles to stable as ga feature please do a smoke and bit of exploratory testing of profiles feature try to use all possible functionalities of profiles feature and make sure they are working as expected you can access all profiles functionality from global activity menu gear provide any ux suggestions you come across while using the feature
0
516,792
14,987,882,437
IssuesEvent
2021-01-28 23:54:17
NCEAS/metacatui
https://api.github.com/repos/NCEAS/metacatui
opened
Support editing map options on portal data pages
Portal Editor Priority: Low enhancement portals
Allow users to edit the supported map options in the portal builder: | `optionName` | type | description | | --- | --- | --- | | mapZoomLevel | Integer | A zoom level to use for a map | | mapCenterLatitude | Latitude | The geographic latitude on which the map should be centered | | mapCenterLongitude | Longitude | The geographic longitude on which the map should be centered | | mapShapeHue | Three digit hue code | A hue to use for shape and/or marker colors on the map, represented by a 3-digit hue number. | Mock-ups for this feature are available [here](https://invis.io/3WUGZ3Y82MZ) (along with other designs for the portal builder data page). <img width="1131" alt="Screen Shot 2021-01-28 at 18 51 47" src="https://user-images.githubusercontent.com/26600641/106213067-eaac8f00-6199-11eb-92af-edd87ef05a1f.png">
1.0
Support editing map options on portal data pages - Allow users to edit the supported map options in the portal builder: | `optionName` | type | description | | --- | --- | --- | | mapZoomLevel | Integer | A zoom level to use for a map | | mapCenterLatitude | Latitude | The geographic latitude on which the map should be centered | | mapCenterLongitude | Longitude | The geographic longitude on which the map should be centered | | mapShapeHue | Three digit hue code | A hue to use for shape and/or marker colors on the map, represented by a 3-digit hue number. | Mock-ups for this feature are available [here](https://invis.io/3WUGZ3Y82MZ) (along with other designs for the portal builder data page). <img width="1131" alt="Screen Shot 2021-01-28 at 18 51 47" src="https://user-images.githubusercontent.com/26600641/106213067-eaac8f00-6199-11eb-92af-edd87ef05a1f.png">
non_process
support editing map options on portal data pages allow users to edit the supported map options in the portal builder optionname type description mapzoomlevel integer a zoom level to use for a map mapcenterlatitude latitude the geographic latitude on which the map should be centered mapcenterlongitude longitude the geographic longitude on which the map should be centered mapshapehue three digit hue code a hue to use for shape and or marker colors on the map represented by a digit hue number mock ups for this feature are available along with other designs for the portal builder data page img width alt screen shot at src
0
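An illustrative set of values for the four documented options (hypothetical numbers, shown as a plain mapping rather than MetacatUI's actual config format):
```python
map_options = {
    "mapZoomLevel": 6,             # integer zoom level
    "mapCenterLatitude": 61.2,     # degrees latitude (hypothetical)
    "mapCenterLongitude": -149.9,  # degrees longitude (hypothetical)
    "mapShapeHue": 205,            # 3-digit hue for shapes/markers
}
```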
19,994
26,467,673,605
IssuesEvent
2023-01-17 02:32:52
vivianafu/dt-ui
https://api.github.com/repos/vivianafu/dt-ui
closed
Tooltip issues
bug processing
Nothing renders with the following: ```tsx <Tooltip label="Tooltip works">Tooltip</Tooltip> ``` Either 1. Make it work 2. Limit children types
1.0
Tooltip issues - Nothing renders with the following: ```tsx <Tooltip label="Tooltip works">Tooltip</Tooltip> ``` Either 1. Make it work 2. Limit children types
process
tooltip issues nothing renders with the following tsx tooltip either make it work limit children types
1
105,860
16,661,207,296
IssuesEvent
2021-06-06 11:01:26
SpyrexDE/NetChat
https://api.github.com/repos/SpyrexDE/NetChat
opened
Insecure RSA key exchange
security
At the moment an intercepting MITM could manipulate the RSA key exchange to gain full control of the traffic, fully breaking the encryption. To fix that, we'd need a TLSv1.3-wrapped socket to the server that uses a certificate, for example from Let's Encrypt.
True
Insecure RSA key exchange - At the moment an intercepting MITM could manipulate the RSA key exchange to gain full control of the traffic, fully breaking the encryption. To fix that, we'd need a TLSv1.3-wrapped socket to the server that uses a certificate, for example from Let's Encrypt.
non_process
insecure rsa key exchange at the moment an intercepting mitm could manipulate the rsa key exchange to gain full control of the traffic fully breaking the encryption to fix that we d need a wrapped socket to the server that uses a certificate for example from let s encrypt
0
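A minimal client-side sketch of the fix proposed in the record above: wrap the socket in TLS 1.3 with certificate verification so an interceptor cannot tamper with the key exchange. The host and port are placeholders; the server would present a certificate from, e.g., Let's Encrypt.
```python
import socket
import ssl

HOST, PORT = "chat.example.org", 8443  # placeholders

context = ssl.create_default_context()  # verifies the certificate chain
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((HOST, PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        # Hostname and certificate are verified before any app data flows,
        # so a MITM cannot substitute its own key material.
        tls.sendall(b"hello\n")
```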
263,244
19,904,477,263
IssuesEvent
2022-01-25 11:18:59
lava-nc/lava
https://api.github.com/repos/lava-nc/lava
opened
Accessing Non-lava type attributes process interface defined in process models
documentation
<!-- - Before submitting an issue please refer to https://lava-nc.org/developer_guide.html#how-to-contribute-to-lava - Please make sure you are posting an issue pertaining to the github.com/lava-nc/lava, for issues with lava libraries please file in appropriate library repository, for example github.com/lava-nc/lava-dl/issues - Please do not submit support requests or "How to" questions here, use discussions Q&A https://github.com/lava-nc/lava/discussions/categories/q-a - ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> <!-- Insert one sentence issue objective here, can be copied to PR. --> Objective of issue: <!-- Lava Version bug found in or Lava Version feature is targeting--> **Lava version:** - [ ] **0.3.0** (feature release) - [ ] **0.2.1** (bug fixes) - [x] **0.2.0** (current version) - [ ] **0.1.2** **I'm submitting a ...** <!-- (check one with "x") --> - [ ] bug report - [ ] feature request - [x] documentation request <!-- Please do not submit support requests or "How to" questions here, use discussions Q&A https://github.com/lava-nc/lava/discussions/categories/q-a --> **Current behavior:** <!-- Describe the bug or why a new feature is needed, can be copied to PR --> - From the documentation now, it is unclear whether a non-Lava variable defined in the process interfaces can be accessed in its process model (or whether this is allowable behavior). Also not very obvious is what different types of Vars can be created. For example, creating `Vars()` when the shape of the attribute is not fixed (like a user defined list) or string. A nice addition for a new developer would be a short document showing examples of initializing variables of different types and clarifying the above issue. **Expected behavior:** <!-- Describe how the bug or new feature should work, can be copied to PR --> - **Steps to reproduce:** <!-- If a bug, explain the steps to reproduce the issue --> - **Related code:** <!-- If you are able to illustrate the bug or feature request with a code example, please provide a sample application via one of the following means: A sample application via GitHub StackBlitz (https://stackblitz.com) Plunker (http://plnkr.co/edit/cpeRJs?p=preview) Replit (https://replit.com/languages/python3) --> ``` insert short code snippets here ``` **Other information:** <!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. --> ``` insert the output from lava debug here ```
1.0
Accessing Non-lava type attributes process interface defined in process models - <!-- - Before submitting an issue please refer to https://lava-nc.org/developer_guide.html#how-to-contribute-to-lava - Please make sure you are posting an issue pertaining to the github.com/lava-nc/lava, for issues with lava libraries please file in appropriate library repository, for example github.com/lava-nc/lava-dl/issues - Please do not submit support requests or "How to" questions here, use discussions Q&A https://github.com/lava-nc/lava/discussions/categories/q-a - ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> <!-- Insert one sentence issue objective here, can be copied to PR. --> Objective of issue: <!-- Lava Version bug found in or Lava Version feature is targeting--> **Lava version:** - [ ] **0.3.0** (feature release) - [ ] **0.2.1** (bug fixes) - [x] **0.2.0** (current version) - [ ] **0.1.2** **I'm submitting a ...** <!-- (check one with "x") --> - [ ] bug report - [ ] feature request - [x] documentation request <!-- Please do not submit support requests or "How to" questions here, use discussions Q&A https://github.com/lava-nc/lava/discussions/categories/q-a --> **Current behavior:** <!-- Describe the bug or why a new feature is needed, can be copied to PR --> - From the documentation now, it is unclear whether a non-Lava variable defined in the process interfaces can be accessed in its process model (or whether this is allowable behavior). Also not very obvious is what different types of Vars can be created. For example, creating `Vars()` when the shape of the attribute is not fixed (like a user defined list) or string. A nice addition for a new developer would be a short document showing examples of initializing variables of different types and clarifying the above issue. **Expected behavior:** <!-- Describe how the bug or new feature should work, can be copied to PR --> - **Steps to reproduce:** <!-- If a bug, explain the steps to reproduce the issue --> - **Related code:** <!-- If you are able to illustrate the bug or feature request with a code example, please provide a sample application via one of the following means: A sample application via GitHub StackBlitz (https://stackblitz.com) Plunker (http://plnkr.co/edit/cpeRJs?p=preview) Replit (https://replit.com/languages/python3) --> ``` insert short code snippets here ``` **Other information:** <!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. --> ``` insert the output from lava debug here ```
non_process
accessing non lava type attributes process interface defined in process models before submitting an issue please refer to please make sure you are posting an issue pertaining to the github com lava nc lava for issues with lava libraries please file in appropriate library repository for example github com lava nc lava dl issues please do not submit support requests or how to questions here use discussions q a issues missing important information may be closed without investigation objective of issue lava version feature release bug fixes current version i m submitting a bug report feature request documentation request current behavior from the documentation now it is unclear whether a non lava variable defined in the process interfaces can be accessed in its process model or whether this is allowable behavior also not very obvious is what different types of vars can be created for example creating vars when the shape of the attribute is not fixed like a user defined list or string a nice addition for a new developer would be a short document showing examples of initializing variables of different types and clarifying the above issue expected behavior steps to reproduce related code if you are able to illustrate the bug or feature request with a code example please provide a sample application via one of the following means a sample application via github stackblitz plunker replit insert short code snippets here other information insert the output from lava debug here
0
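The lava-nc documentation request above is concrete enough to sketch. Below is a minimal example, assuming the 0.2-era module paths for `AbstractProcess` and `Var`; the class `MyProcess` and all of its attribute names are invented for illustration and are not part of lava.

```python
import numpy as np

from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.variable import Var


class MyProcess(AbstractProcess):
    """Toy Process contrasting Var state with plain Python attributes."""

    def __init__(self, weights: np.ndarray):
        super().__init__()
        # Fixed-shape numeric state: the documented Var use case.
        self.u = Var(shape=(10,), init=0)
        self.weights = Var(shape=weights.shape, init=weights)
        # Plain Python attribute on the Process interface. It is not a Var,
        # so whether a ProcessModel may read it is exactly the behavior the
        # issue asks the documentation to clarify.
        self.tag_names = ["layer0", "layer1"]
```

Strings and variable-length lists have no natural `shape`, which is why the issue asks how, or whether, such attributes should be wrapped in a `Var` at all.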
20,954
27,815,705,460
IssuesEvent
2023-03-18 17:02:06
cse442-at-ub/project_s23-atomic
https://api.github.com/repos/cse442-at-ub/project_s23-atomic
closed
Set up the App with React
Processing Task Sprint 2
**Task Test** *Test 1* 1. Go to our repo: https://github.com/cse442-at-ub/project_s23-atomic/ 2. Go to the Spring2-reactApp branch 3. Clone it 4. Run it locally (localhost:3000) 5. Verify the app is up and running
1.0
Set up the App with React - **Task Test** *Test 1* 1. Go to our repo: https://github.com/cse442-at-ub/project_s23-atomic/ 2. Go to the Spring2-reactApp branch 3. Clone it 4. Run it locally (localhost:3000) 5. Verify the app is up and running
process
set up the app with react task test test go to our repo go to the reactapp branch clone it run it locally localhost verify app is up and running
1
557
3,020,242,780
IssuesEvent
2015-07-31 05:59:50
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
On the main portal, create a new menu item "Application status" ("Бтатус заявок") and bring it to life
hi priority In process of testing test
- [x] 0) Add a new item "Applications" ("Заявки") to the main menu (the name of the new module is "order") - [x] 1) Text under the heading: "На Ρ†Ρ–ΠΉ сторінці Π’ΠΈ ΠΌΠΎΠΆΠ΅Ρ‚Π΅ пСрСглянути статус ΠΏΠΎ своїй заявці Π·Π° Ρ—Ρ— Π½ΠΎΠΌΠ΅Ρ€ΠΎΠΌ." ("On this page you can view the status of your application by its number.") - [x] 2) Elements (below): Application number: [input field] [ΠŸΠ΅Ρ€Π΅Π³Π»ΡΠ½ΡƒΡ‚ΠΈ] (button) - [x] 3) After clicking "ΠŸΠ΅Ρ€Π΅Π³Π»ΡΠ½ΡƒΡ‚ΠΈ" ("View"), show below the field: - [x] 3.1) Application type: %Ρ‚ΠΈΠΏ% (hardcode the type as "Услуга" / "Service") - [x] 3.2) a table with two columns, "Час" ("Time") and "Бтатус" ("Status"), and a single row whose cells contain the data received from the service. - [x] 4) look it up via the getHistoryEvent_Service service, passing the value from the "input field" as the "sID" parameter. The service is implemented under issue: https://github.com/e-government-ua/i/issues/493
1.0
On the main portal, create a new menu item "Application status" ("Бтатус заявок") and bring it to life - - [x] 0) Add a new item "Applications" ("Заявки") to the main menu (the name of the new module is "order") - [x] 1) Text under the heading: "На Ρ†Ρ–ΠΉ сторінці Π’ΠΈ ΠΌΠΎΠΆΠ΅Ρ‚Π΅ пСрСглянути статус ΠΏΠΎ своїй заявці Π·Π° Ρ—Ρ— Π½ΠΎΠΌΠ΅Ρ€ΠΎΠΌ." ("On this page you can view the status of your application by its number.") - [x] 2) Elements (below): Application number: [input field] [ΠŸΠ΅Ρ€Π΅Π³Π»ΡΠ½ΡƒΡ‚ΠΈ] (button) - [x] 3) After clicking "ΠŸΠ΅Ρ€Π΅Π³Π»ΡΠ½ΡƒΡ‚ΠΈ" ("View"), show below the field: - [x] 3.1) Application type: %Ρ‚ΠΈΠΏ% (hardcode the type as "Услуга" / "Service") - [x] 3.2) a table with two columns, "Час" ("Time") and "Бтатус" ("Status"), and a single row whose cells contain the data received from the service. - [x] 4) look it up via the getHistoryEvent_Service service, passing the value from the "input field" as the "sID" parameter. The service is implemented under issue: https://github.com/e-government-ua/i/issues/493
process
on the main portal create a new menu item application status and bring it to life add a new item applications to the main menu the name of the new module is order text under the heading on this page you can view the status of your application by its number elements below application number button after clicking view show below the field application type type hardcode the type as service a table with two columns time and status and a single row whose cells contain the data received from the service look it up via the gethistoryevent service service passing the sid parameter the value from the input field the service is implemented under issue
1
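Step 4 of the e-government-ua checklist above names the service and its parameter but no concrete endpoint. The snippet below is a hypothetical client call, assuming `getHistoryEvent_Service` is reachable over HTTP GET; the base URL is a placeholder invented here, not taken from the issue.

```python
import requests

BASE_URL = "https://example.gov.ua/wf/service"  # placeholder, not from the issue


def fetch_application_status(application_number: str) -> dict:
    """Pass the value from the input field to the service as the sID parameter."""
    resp = requests.get(
        f"{BASE_URL}/getHistoryEvent_Service",
        params={"sID": application_number},
        timeout=10,
    )
    resp.raise_for_status()
    # The page then renders the returned time/status data in the two-column table.
    return resp.json()
```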
17,780
23,708,996,296
IssuesEvent
2022-08-30 05:55:25
pycaret/pycaret
https://api.github.com/repos/pycaret/pycaret
closed
[BUG]:ValueError: 'fill_value'=constant is invalid. Expected a numerical value when imputing numerical data
bug preprocessing
### pycaret version checks - [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues). - [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret. - [ ] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master). ### Issue Description This error is thrown in Setup when Preprocess = True and fix_imbalance_method=smote and fix_imbalance=True for classification ### Reproducible Example ```python No `object` columns are present in the dataframe. All are int/float ``` ### Expected Behavior No Error ### Actual Results ```python-traceback ---> 79 fix_imbalance_method=smote,fix_imbalance=True) 10 frames /usr/local/lib/python3.7/dist-packages/sklearn/impute/_base.py in fit(self, X, y) 338 "'fill_value'={0} is invalid. Expected a " 339 "numerical value when imputing numerical " --> 340 "data".format(fill_value) 341 ) 342 ValueError: 'fill_value'=constant is invalid. Expected a numerical value when imputing numerical data ``` ### Installed Versions <details> 3.0.0rc </details>
1.0
[BUG]:ValueError: 'fill_value'=constant is invalid. Expected a numerical value when imputing numerical data - ### pycaret version checks - [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues). - [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret. - [ ] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master). ### Issue Description This error is thrown in Setup when Preprocess = True and fix_imbalance_method=smote and fix_imbalance=True for classification ### Reproducible Example ```python No `object` columns are present in the dataframe. All are int/float ``` ### Expected Behavior No Error ### Actual Results ```python-traceback ---> 79 fix_imbalance_method=smote,fix_imbalance=True) 10 frames /usr/local/lib/python3.7/dist-packages/sklearn/impute/_base.py in fit(self, X, y) 338 "'fill_value'={0} is invalid. Expected a " 339 "numerical value when imputing numerical " --> 340 "data".format(fill_value) 341 ) 342 ValueError: 'fill_value'=constant is invalid. Expected a numerical value when imputing numerical data ``` ### Installed Versions <details> 3.0.0rc </details>
process
valueerror fill value constant is invalid expected a numerical value when imputing numerical data pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the master branch of pycaret pip install u git issue description this error is thrown in setup when preprocess true and fix imbalance method smote and fix imbalance true for classification reproducible example python no object columns are present in the dataframe all are int float expected behavior no error actual results python traceback fix imbalance method smote fix imbalance true frames usr local lib dist packages sklearn impute base py in fit self x y fill value is invalid expected a numerical value when imputing numerical data format fill value valueerror fill value constant is invalid expected a numerical value when imputing numerical data installed versions
1
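The pycaret report above names the failing `setup()` flags but omits a full script. Here is a minimal reproduction sketch, assuming a purely numeric dataframe as the reporter describes; the column names are invented.

```python
import numpy as np
import pandas as pd
from pycaret.classification import setup

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "f1": rng.normal(size=200),
    "f2": rng.normal(size=200),
    # Imbalanced 0/1 target (~10% positives) so SMOTE has work to do.
    "target": (rng.random(200) < 0.1).astype(int),
})

# On 3.0.0rc this reportedly raised:
#   ValueError: 'fill_value'=constant is invalid. Expected a numerical value
#   when imputing numerical data
exp = setup(data=df, target="target", preprocess=True,
            fix_imbalance=True, fix_imbalance_method="smote")
```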
14,737
18,006,280,646
IssuesEvent
2021-09-16 00:17:55
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
closed
Improve the PBL derivation logic in PB2NC.
type: enhancement requestor: NOAA/EMC priority: high alert: NEED ACCOUNT KEY requestor: DTC/SRW required: FOR OFFICIAL RELEASE MET: PreProcessing Tools (Point)
## Describe the Enhancement ## This issue is the result of this METplus Discussion dtcenter/METplus#1140. There was a lot of back and forth on that discussion thread, but in the end, two necessary changes were identified. This issue is to make the following 2 changes to the PBL derivation logic in PB2NC: (1) In MET version 10.0.0, PB2NC replaces any computed PBL values > 5000 with a value of 5000. Change this behavior by defining the upper limit as 10,000 and replacing any values > 10,000 with BAD DATA. Note that these bad data observation values should not be written to the output. (2) Check the input level data for the PBL derivation algorithm and omit any levels that contain non-physical values, namely a pressure value of 0. For an example, see this Discussion [comment](https://github.com/dtcenter/METplus/discussions/1140#discussioncomment-1292156) which includes the following bad input level: ``` DEBUG 8: compute_pbl() input to calpbl_: 0 0.00000 0.00000 36893488147419103232.00000 36893488147419103232.00000 -2.29688 1.30000 ``` ### Time Estimate ### 1 day. ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. No sub-issues. ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required: @hsoh-u - [x] Select **scientist(s)** or **no scientist** required: @PerryShafran-NOAA ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) No impacts. ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
1.0
Improve the PBL derivation logic in PB2NC. - ## Describe the Enhancement ## This issue is the result of this METplus Discussion dtcenter/METplus#1140. There was a lot of back and forth on that discussion thread, but in the end, two necessary changes were identified. This issue is to make the following 2 changes to the PBL derivation logic in PB2NC: (1) In MET version 10.0.0, PB2NC replaces any computed PBL values > 5000 with a value of 5000. Change this behavior by defining the upper limit as 10,000 and replacing any values > 10,000 with BAD DATA. Note that these bad data observation values should not be written to the output. (2) Check the input level data for the PBL derivation algorithm and omit any levels that contain non-physical values, namely a pressure value of 0. For an example, see this Discussion [comment](https://github.com/dtcenter/METplus/discussions/1140#discussioncomment-1292156) which includes the following bad input level: ``` DEBUG 8: compute_pbl() input to calpbl_: 0 0.00000 0.00000 36893488147419103232.00000 36893488147419103232.00000 -2.29688 1.30000 ``` ### Time Estimate ### 1 day. ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. No sub-issues. ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required: @hsoh-u - [x] Select **scientist(s)** or **no scientist** required: @PerryShafran-NOAA ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) No impacts. ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
process
improve the pbl derivation logic in describe the enhancement this issue is the result of this metplus discussion dtcenter metplus there was a lot of back and forth on that discussion thread but in the end two necessary changes were identified this issue is to make the following changes to the pbl derivation logic in in met version replaces any computed pbl values with a value of change this behavior by defining the upper limit as and replace any values with bad data note that these bad data observation values should not be written to the output check the input level data for the pbl derivation algorithm and omit any levels that contain non physical values namely a pressure value of for an example see this discussion which includes the following bad input level debug compute pbl input to calpbl time estimate day sub issues consider breaking the enhancement down into sub issues no sub issues relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required hsoh u select scientist s or no scientist required perryshafran noaa labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components no impacts enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
1
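The two changes requested above are simple enough to state as code. The sketch below is written in Python only for brevity; PB2NC implements the PBL derivation in MET's own C++/Fortran code path, and the sentinel value and level layout here are assumptions.

```python
BAD_DATA = -9999.0         # assumed sentinel; MET defines its own bad-data flag
PBL_UPPER_LIMIT = 10000.0  # raised from the 5000 cap used in MET 10.0.0


def clamp_pbl(value: float) -> float:
    """Change (1): computed PBL values above the limit become bad data,
    and such observations must not be written to the output."""
    return BAD_DATA if value > PBL_UPPER_LIMIT else value


def usable_levels(levels: list[dict]) -> list[dict]:
    """Change (2): drop input levels with a non-physical pressure of 0
    before they reach the calpbl_ derivation."""
    return [lvl for lvl in levels if lvl["pressure"] > 0.0]
```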
12,662
15,033,848,360
IssuesEvent
2021-02-02 12:04:16
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Processing error for the https://qa.xreach.app/ page
AREA: client AREA: server FREQUENCY: level 1 SYSTEM: script processing TYPE: bug support center
Steps to reproduce: * open the https://qa.xreach.app/ page in the playground * white page will be displayed instead of origin page * open Chrome DevTools and see the script error ![image](https://user-images.githubusercontent.com/4133518/106244524-11aba500-621c-11eb-8c0e-642ccb5cbd31.png)
1.0
Processing error for the https://qa.xreach.app/ page - Steps to reproduce: * open the https://qa.xreach.app/ page in the playground * white page will be displayed instead of origin page * open Chrome DevTools and see the script error ![image](https://user-images.githubusercontent.com/4133518/106244524-11aba500-621c-11eb-8c0e-642ccb5cbd31.png)
process
processing error for the page steps to reproduce open the page in the playground white page will be displayed instead of origin page open chrome devtools and see the script error
1
5,856
8,680,735,723
IssuesEvent
2018-12-01 13:38:04
pwittchen/ReactiveNetwork
https://api.github.com/repos/pwittchen/ReactiveNetwork
closed
Release 3.0.1
release process
**Release notes**: - Fixed unserialized access to subject on Marshmallow - https://github.com/pwittchen/ReactiveNetwork/commit/91b676e726071b2d77b1e26a35dbcafa0fac7b32 by @aperfilyev **Things to do**: - [x] prepare release notes - [x] update JavaDoc on `gh-pages` (not needed in this release) - [x] update documentation on `gh-pages` (not needed in this release) - [x] bump library version - [x] upload archives to Maven Central - [x] close and release artifact on Maven Central - [x] update `CHANGELOG.md` - [x] bump library version in `README.md` after Maven Sync - [x] update docs on `gh-pages` after updating `README.md` - [x] create new GitHub release
1.0
Release 3.0.1 - **Release notes**: - Fixed unserialized access to subject on Marshmallow - https://github.com/pwittchen/ReactiveNetwork/commit/91b676e726071b2d77b1e26a35dbcafa0fac7b32 by @aperfilyev **Things to do**: - [x] prepare release notes - [x] update JavaDoc on `gh-pages` (not needed in this release) - [x] update documentation on `gh-pages` (not needed in this release) - [x] bump library version - [x] upload archives to Maven Central - [x] close and release artifact on Maven Central - [x] update `CHANGELOG.md` - [x] bump library version in `README.md` after Maven Sync - [x] update docs on `gh-pages` after updating `README.md` - [x] create new GitHub release
process
release release notes fixed unserialized access to subject on marshmallow by aperfilyev things to do prepare release notes update javadoc on gh pages not needed in this release update documentation on gh pages not needed in this release bump library version upload archives to maven central close and release artifact on maven central update changelog md bump library version in readme md after maven sync update docs on gh pages after updating readme md create new github release
1
18,059
24,068,060,046
IssuesEvent
2022-09-17 19:37:31
COS301-SE-2022/Pure-LoRa-Tracking
https://api.github.com/repos/COS301-SE-2022/Pure-LoRa-Tracking
closed
(processing/DB): CRON for averaging
(system) Server (bus) processing
The CRON job checks that the resulting data points are approximately close together in time, so that averaging them is beneficial
1.0
(processing/DB): CRON for averaging - The CRON job checks that the resulting data points are approximately close together in time, so that averaging them is beneficial
process
processing db cron for averaging cron checks the resulting data is approximately close together to allow averaging to be beneficial
1
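To make the acceptance criterion above concrete, here is one way a scheduled job could test that a batch of readings is close enough together to average. The 30-second tolerance and the timestamp-spread test are assumptions for illustration, not taken from the Pure-LoRa codebase.

```python
from datetime import datetime, timedelta

MAX_SPREAD = timedelta(seconds=30)  # assumed tolerance


def close_enough(timestamps: list[datetime]) -> bool:
    """True when every reading falls inside one MAX_SPREAD window,
    i.e. averaging the batch is meaningful."""
    return bool(timestamps) and max(timestamps) - min(timestamps) <= MAX_SPREAD
```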
230,218
18,519,047,677
IssuesEvent
2021-10-20 13:22:15
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: follower-reads/survival=region/locality=regional/reads=bounded-staleness failed
C-test-failure O-robot O-roachtest T-kv branch-release-21.2
roachtest.follower-reads/survival=region/locality=regional/reads=bounded-staleness [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3431395&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3431395&tab=artifacts#/follower-reads/survival=region/locality=regional/reads=bounded-staleness) on release-21.2 @ [8d491ced731e13da2377cda9961f577d7487d6a0](https://github.com/cockroachdb/cockroach/commits/8d491ced731e13da2377cda9961f577d7487d6a0): ``` The test failed on branch=release-21.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/follower-reads/survival=region/locality=regional/reads=bounded-staleness/run_1 follower_reads.go:657,follower_reads.go:319,follower_reads.go:57,test_runner.go:777: too many intervals with more than 2 nodes with low follower read ratios: 5 intervals > 4 threshold. Bad intervals: interval 09:39:50-09:40:00: n1 ratio: 0.229 n2 ratio: 0.000 n3 ratio: 0.132 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 interval 09:40:30-09:40:40: n1 ratio: 0.342 n2 ratio: 0.163 n3 ratio: 0.795 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 interval 09:41:10-09:41:20: n1 ratio: 0.187 n2 ratio: 0.000 n3 ratio: 0.812 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 0.994 interval 09:41:30-09:41:40: n1 ratio: 0.000 n2 ratio: 0.000 n3 ratio: 0.812 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 interval 09:42:00-09:42:10: n1 ratio: 0.000 n2 ratio: 0.000 n3 ratio: 0.767 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #70011 roachtest: follower-reads/survival=region/locality=regional/reads=bounded-staleness failed [C-test-failure O-roachtest O-robot branch-master release-blocker] </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*follower-reads/survival=region/locality=regional/reads=bounded-staleness.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Epic CRDB-10559
2.0
roachtest: follower-reads/survival=region/locality=regional/reads=bounded-staleness failed - roachtest.follower-reads/survival=region/locality=regional/reads=bounded-staleness [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3431395&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3431395&tab=artifacts#/follower-reads/survival=region/locality=regional/reads=bounded-staleness) on release-21.2 @ [8d491ced731e13da2377cda9961f577d7487d6a0](https://github.com/cockroachdb/cockroach/commits/8d491ced731e13da2377cda9961f577d7487d6a0): ``` The test failed on branch=release-21.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/follower-reads/survival=region/locality=regional/reads=bounded-staleness/run_1 follower_reads.go:657,follower_reads.go:319,follower_reads.go:57,test_runner.go:777: too many intervals with more than 2 nodes with low follower read ratios: 5 intervals > 4 threshold. Bad intervals: interval 09:39:50-09:40:00: n1 ratio: 0.229 n2 ratio: 0.000 n3 ratio: 0.132 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 interval 09:40:30-09:40:40: n1 ratio: 0.342 n2 ratio: 0.163 n3 ratio: 0.795 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 interval 09:41:10-09:41:20: n1 ratio: 0.187 n2 ratio: 0.000 n3 ratio: 0.812 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 0.994 interval 09:41:30-09:41:40: n1 ratio: 0.000 n2 ratio: 0.000 n3 ratio: 0.812 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 interval 09:42:00-09:42:10: n1 ratio: 0.000 n2 ratio: 0.000 n3 ratio: 0.767 n4 ratio: 1.000 n5 ratio: 1.000 n6 ratio: 1.000 ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #70011 roachtest: follower-reads/survival=region/locality=regional/reads=bounded-staleness failed [C-test-failure O-roachtest O-robot branch-master release-blocker] </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*follower-reads/survival=region/locality=regional/reads=bounded-staleness.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Epic CRDB-10559
non_process
roachtest follower reads survival region locality regional reads bounded staleness failed roachtest follower reads survival region locality regional reads bounded staleness with on release the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts follower reads survival region locality regional reads bounded staleness run follower reads go follower reads go follower reads go test runner go too many intervals with more than nodes with low follower read ratios intervals threshold bad intervals interval ratio ratio ratio ratio ratio ratio interval ratio ratio ratio ratio ratio ratio interval ratio ratio ratio ratio ratio ratio interval ratio ratio ratio ratio ratio ratio interval ratio ratio ratio ratio ratio ratio reproduce see same failure on other branches roachtest follower reads survival region locality regional reads bounded staleness failed cc cockroachdb kv triage epic crdb
0
228,905
18,270,207,982
IssuesEvent
2021-10-04 13:07:11
vgstation-coders/vgstation13
https://api.github.com/repos/vgstation-coders/vgstation13
closed
TEG interface displays wrong pressures for the hot side except right after being opened
Bug / Fix Needs Moar Testing UI
The title isn't quite right, but it's already too long. For some reason, the TEG interface is correct right after you click the TEG, but the inlet and outlet pressures for `circ1` are backwards every time it updates without you clicking it. This happens despite the same proc being called to update the interface in both cases.
1.0
TEG interface displays wrong pressures for the hot side except right after being opened - The title isn't quite right, but it's already too long. For some reason, the TEG interface is correct right after you click the TEG, but the inlet and outlet pressures for `circ1` are backwards every time it updates without you clicking it. This happens despite the same proc being called to update the interface in both cases.
non_process
teg interface displays wrong pressures for the hot side except right after being opened the title isn t quite right but it s already too long for some reason the teg interface is correct right after you click the teg but the inlet and outlet pressures for are backwards every time it updates without you clicking it this happens despite the same proc being called to update the interface in both cases
0
18,599
24,573,536,140
IssuesEvent
2022-10-13 10:29:02
TerjeTL/TEP4545_Project
https://api.github.com/repos/TerjeTL/TEP4545_Project
opened
Post Processing
post processing
- Extract relevant quantities - Grid Convergence Index - Prepare data/plots
1.0
Post Processing - - Extract relevant quantities - Grid Convergence Index - Prepare data/plots
process
post processing extract relevant quantities grid convergence index prepare data plots
1
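For reference, the Grid Convergence Index named in this post-processing checklist is usually computed in the standard Roache form; the notation below is the common textbook convention, not anything defined in the issue itself:

```latex
\mathrm{GCI}_{\text{fine}} = \frac{F_s \,\lvert \varepsilon \rvert}{r^{p} - 1},
\qquad
\varepsilon = \frac{f_2 - f_1}{f_1}
```

where f_1 and f_2 are the fine- and coarse-grid solutions, r is the grid refinement ratio, p the observed order of accuracy, and F_s a safety factor (commonly 1.25 for three-grid studies).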
139,273
31,390,394,543
IssuesEvent
2023-08-26 09:03:51
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
Option "Browser Page Title" not working
Feature No Code Attached Yet
### Steps to reproduce the issue I create an article and add a title into the field "Browser Page Title" in the tab "Options" - e.g. "xxx my SEO optimized title-text xxx". Then I create a menu-item for this article. Here (in the menu-item) I do NOT add text into the field "Browser Page Title" in the tab "Page Display"; the field is left empty. ### Expected result The title-tag (<title></title>) in the code of the frontend should show the title which I have set in the article, in the field "Browser Page Title" (= xxx my SEO optimized title-text xxx). ### Actual result It shows the wrong text in the title-tag (<title></title>): it shows the "MENU title" (= the wording of the textlink in the menu) instead of the "Browser Page Title" (= xxx my SEO optimized title-text xxx). ### System information (as much as possible) Joomla! 4.3.3 with YOOtheme Version 4.0.10 ### Additional comments I refer to the issue [#28468 ](https://github.com/joomla/joomla-cms/issues/28468). A comment there says that the issue is solved: https://github.com/joomla/joomla-cms/pull/39249. But it isn't...
1.0
Option "Browser Page Title" not working - ### Steps to reproduce the issue I create an article and add a title into the field "Browser Page Title" in the tab "Options" - e.g. "xxx my SEO optimized title-text xxx". Then I create a menu-item for this article. Here (in the menu-item) I do NOT add text into the field "Browser Page Title" in the tab "Page Display", the field is still empty. ### Expected result The title-tag (<title></title>) in the code of the frontend should show the title which I have set into the article, into the field "Browser Page Title" (= xxx my SEO optimized title-text xxx). ### Actual result It shows the wront text in the title-tag (<title></title>): It shows the "MENU title" (= the wording of the textlink in the menu) instead of the "Browser Page Title" (= xxx my SEO optimized title-text xxx). ### System information (as much as possible) Joomla! 4.3.3 with YOOtheme Version 4.0.10 ### Additional comments I refer to the issue [#28468 ](https://github.com/joomla/joomla-cms/issues/28468) . a comment here says that the issue is solved: https://github.com/joomla/joomla-cms/pull/39249. but it isn't...
non_process
option browser page title not working steps to reproduce the issue i create an article and add a title into the field browser page title in the tab options e g xxx my seo optimized title text xxx then i create a menu item for this article here in the menu item i do not add text into the field browser page title in the tab page display the field is still empty expected result the title tag in the code of the frontend should show the title which i have set into the article into the field browser page title xxx my seo optimized title text xxx actual result it shows the wrong text in the title tag it shows the menu title the wording of the textlink in the menu instead of the browser page title xxx my seo optimized title text xxx system information as much as possible joomla with yootheme version additional comments i refer to the issue a comment here says that the issue is solved but it isn t
0
198,195
22,617,978,921
IssuesEvent
2022-06-30 01:29:12
thomasklwong/profile
https://api.github.com/repos/thomasklwong/profile
opened
CVE-2022-2217 (High) detected in parse-url-5.0.1.tgz
security vulnerability
## CVE-2022-2217 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-5.0.1.tgz</b></p></summary> <p>An advanced url parser supporting git urls too.</p> <p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz">https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz</a></p> <p>Path to dependency file: /profile/package.json</p> <p>Path to vulnerable library: /node_modules/parse-url/package.json,/node_modules/parse-url/package.json</p> <p> Dependency Hierarchy: - gatsby-2.15.28.tgz (Root Library) - gatsby-telemetry-1.1.28.tgz - git-up-4.0.1.tgz - :x: **parse-url-5.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/thomasklwong/profile/commit/0f7d50c0f4becdce82432f48c2159390fbe2e662">0f7d50c0f4becdce82432f48c2159390fbe2e662</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site Scripting (XSS) - Generic in GitHub repository ionicabizau/parse-url prior to 7.0.0. <p>Publish Date: 2022-06-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2217>CVE-2022-2217</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/4e046c63-b1ca-4bcc-b418-29796918a71b/">https://huntr.dev/bounties/4e046c63-b1ca-4bcc-b418-29796918a71b/</a></p> <p>Release Date: 2022-06-27</p> <p>Fix Resolution: parse-url - 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-2217 (High) detected in parse-url-5.0.1.tgz - ## CVE-2022-2217 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-5.0.1.tgz</b></p></summary> <p>An advanced url parser supporting git urls too.</p> <p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz">https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz</a></p> <p>Path to dependency file: /profile/package.json</p> <p>Path to vulnerable library: /node_modules/parse-url/package.json,/node_modules/parse-url/package.json</p> <p> Dependency Hierarchy: - gatsby-2.15.28.tgz (Root Library) - gatsby-telemetry-1.1.28.tgz - git-up-4.0.1.tgz - :x: **parse-url-5.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/thomasklwong/profile/commit/0f7d50c0f4becdce82432f48c2159390fbe2e662">0f7d50c0f4becdce82432f48c2159390fbe2e662</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site Scripting (XSS) - Generic in GitHub repository ionicabizau/parse-url prior to 7.0.0. <p>Publish Date: 2022-06-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2217>CVE-2022-2217</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/4e046c63-b1ca-4bcc-b418-29796918a71b/">https://huntr.dev/bounties/4e046c63-b1ca-4bcc-b418-29796918a71b/</a></p> <p>Release Date: 2022-06-27</p> <p>Fix Resolution: parse-url - 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in parse url tgz cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file profile package json path to vulnerable library node modules parse url package json node modules parse url package json dependency hierarchy gatsby tgz root library gatsby telemetry tgz git up tgz x parse url tgz vulnerable library found in head commit a href vulnerability details cross site scripting xss generic in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url step up your open source security game with mend
0
600
3,074,319,780
IssuesEvent
2015-08-20 06:13:21
mitchellh/packer
https://api.github.com/repos/mitchellh/packer
closed
Running packer push from Windows command line results in "authentication failed"
post-processor/atlas question
I am following the tutorial on the Atlas website on how to build Vagrant boxes with Packer and Atlas. Even after setting the ATLAS_TOKEN environment variable and verifying it successfully, the `packer push` command does not authenticate. I have also tried adding the `-token` flag without success. I am running this on a Windows 8 machine. I have had the same results with versions 0.8.2 and 0.8.5.
1.0
Running packer push from Windows command line results in "authentication failed" - I am following the tutorial on the Atlas website on how to build Vagrant boxes with Packer and Atlas. Even after setting the ATLAS_TOKEN environment variable and verifying it successfully, the `packer push` command does not authenticate. I have also tried adding the `-token` flag without success. I am running this on a Windows 8 machine. I have had the same results with versions 0.8.2 and 0.8.5.
process
running packer push from windows command line results in authentication failed i am following the tutorial on the atlas website on how to build vagrant boxes with packer and atlas having set the atlas token environment variable and successfully verifying the packer push command does not authenticate i have also tried adding the token flag without success i am running this in a windows machine i have had the same results with versions and
1
22,004
30,506,249,087
IssuesEvent
2023-07-18 17:06:37
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
Crop with retouch not working correctly in darkroom view
reproduce: confirmed scope: image processing bug: pending
Due to the position of crop and retouch in the pipe it should be possible to use cropped areas of the image as source spots for retouching. In this somewhat contrived example, I observe the following (see video below) 1. Reset the history for a clean image 2. Use the retouch module to clone a feature from the top of the image into a lower part of the image 3. Crop out the source feature 4. Retouch no longer works as expected (feature no longer present in lower part of the image) 5. Exit to the lighttable view 6. Retouch is applied correctly in the thumbnail (cloned feature is present) 7. Go back into the darkroom view 8. Retouch is again applied incorrectly (cloned feature not present) I have tried exporting and the retouch module is again applied as expected, so this appears to be an issue in the darkroom view only. See the following illustrative video: https://github.com/darktable-org/darktable/assets/9555491/4d99c261-e007-4223-88d7-fe0985743e0b I have observed the same issue in both master and the 4.4.1 stable release
1.0
Crop with retouch not working correctly in darkroom view - Due to the position of crop and retouch in the pipe it should be possible to use cropped areas of the image as source spots for retouching. In this somewhat contrived example, I observe the following (see video below) 1. Reset the history for a clean image 2. Use the retouch module to clone a feature from the top of the image into a lower part of the image 3. Crop out the source feature 4. Retouch no longer works as expected (feature no longer present in lower part of the image) 5. Exit to the lighttable view 6. Retouch is applied correctly in the thumbnail (cloned feature is present) 7. Go back into the darkroom view 8. Retouch is again applied incorrectly (cloned feature not present) I have tried exporting and the retouch module is again applied as expected, so this appears to be an issue in the darkroom view only. See the following illustrative video: https://github.com/darktable-org/darktable/assets/9555491/4d99c261-e007-4223-88d7-fe0985743e0b I have observed the same issue in both master and the 4.4.1 stable release
process
crop with retouch not working correctly in darkroom view due to the position of crop and retouch in the pipe it should be possible to use cropped areas of the image as source spots for retouching in this somewhat contrived example i observe the following see video below reset the history for a clean image use the retouch module to clone a feature from the top of the image into a lower part of the image crop out the source feature retouch no longer works as expected feature no longer present in lower part of the image exit to the lighttable view retouch is applied correctly in the thumbnail cloned feature is present go back into the darkroom view retouch is again applied incorrectly cloned feature not present i have tried exporting and the retouch module is again applied as expected so this appears to be an issue in the darkroom view only see the following illustrative video i have observed the same issue in both master and the stable release
1
25
2,490,666,174
IssuesEvent
2015-01-02 18:16:13
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Clarify Record.getValue() behaviour in Javadoc
C: Documentation P: Medium T: Enhancement
`Record.getValue(Field<T>)` will only consider the argument `Field`'s table name in the case of an ambiguity between columns. This behaviour should be well documented in the Javadoc
1.0
Clarify Record.getValue() behaviour in Javadoc - `Record.getValue(Field<T>)` will only consider the argument `Field`'s table name in the case of an ambiguity between columns. This behaviour should be well documented in the Javadoc
non_process
clarify record getvalue behaviour in javadoc record getvalue field will only consider the argument field s table name in the case of an ambiguity between columns this behaviour should be well documented in the javadoc
0
74,544
9,791,581,541
IssuesEvent
2019-06-10 15:21:10
gentoo/elivepatch-server
https://api.github.com/repos/gentoo/elivepatch-server
closed
Document the REST API
D-Medium T-Documentation enhancement
- [x] Make a list of the current calls - [x] Write an `API.md` - [x] Reference it in the wiki and the `README.md`
1.0
Document the REST API - - [x] Make a list of the current calls - [x] Write an `API.md` - [x] Reference it in the wiki and the `README.md`
non_process
document the rest api make a list of the current calls write an api md reference it in the wiki and the readme md
0
17,740
23,655,905,059
IssuesEvent
2022-08-26 11:11:36
MPMG-DCC-UFMG/C01
https://api.github.com/repos/MPMG-DCC-UFMG/C01
opened
Selecting options grouped in a select with optgroup
[0] Development [1] Improvement [3] Dynamic Processing
## Expected Behavior Use Dynamic Processing steps to make a selection in a `<select>` whose options are grouped by means of `<optgroup>` tags. ## Current Behavior The "Para cada" ("For each") step together with "OpΓ§Γ΅es" ("Options") does not identify the grouped options, and the "Selecionar" ("Select") step could not be applied to make the selection. ## Steps to reproduce the situation 1. Create a collector with base url https://pt.barroso.mg.gov.br/Despesa_Total_Pessoal 2. Check the "Processamento DinΓ’mico" ("Dynamic Processing") option 3. Try to apply the currently available steps to select one of the options of the "RelatΓ³rio" field ## Screenshots Illustrative image of the appearance of a select whose options are grouped by means of `<optgroup>` tags. ![image.png](https://images.zenhubusercontent.com/6164879d09cd97647a92c1ce/fcbc237f-4035-4ce2-bc49-1d66931ef028) <br> HTML code corresponding to the structure above: ![image.png](https://images.zenhubusercontent.com/6164879d09cd97647a92c1ce/2cf86ab0-c409-4663-bb1d-4ba0a6f2e1bb)
1.0
Selecting options grouped in a select with optgroup - ## Expected Behavior Use Dynamic Processing steps to make a selection in a `<select>` whose options are grouped by means of `<optgroup>` tags. ## Current Behavior The "Para cada" ("For each") step together with "OpΓ§Γ΅es" ("Options") does not identify the grouped options, and the "Selecionar" ("Select") step could not be applied to make the selection. ## Steps to reproduce the situation 1. Create a collector with base url https://pt.barroso.mg.gov.br/Despesa_Total_Pessoal 2. Check the "Processamento DinΓ’mico" ("Dynamic Processing") option 3. Try to apply the currently available steps to select one of the options of the "RelatΓ³rio" field ## Screenshots Illustrative image of the appearance of a select whose options are grouped by means of `<optgroup>` tags. ![image.png](https://images.zenhubusercontent.com/6164879d09cd97647a92c1ce/fcbc237f-4035-4ce2-bc49-1d66931ef028) <br> HTML code corresponding to the structure above: ![image.png](https://images.zenhubusercontent.com/6164879d09cd97647a92c1ce/2cf86ab0-c409-4663-bb1d-4ba0a6f2e1bb)
process
seleΓ§Γ£o de opΓ§Γ΅es agrupadas em select com optgroup comportamento esperado utilizar passos do processameto dinΓ’mico para fazer a seleΓ§Γ£o em um cujas opΓ§Γ΅es se encontram agrupadas por meio de tags do tipo comportamento atual o passo para cada junto ao opΓ§Γ΅es nΓ£o identifica as opΓ§Γ΅es agrupadas e o passo selecionar nΓ£o pΓ΄de ser aplicado para fazer a seleΓ§Γ£o passos para reproduzir a situaΓ§Γ£o criar um coletor com url base marcar a opΓ§Γ£o de processamento dinΓ’mico tentar aplicar os passos atualmente disponΓ­veis para seleΓ§Γ£o de uma das opΓ§Γ΅es do campo relatΓ³rio screenshots imagem ilustrativa da aparΓͺncia de um select cujas opΓ§Γ΅es se encontram agrupadas por meio de tags do tipo cΓ³digo html referente Γ  estrutura acima
1
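Regarding the C01 issue above: plain Selenium already matches options nested inside `<optgroup>` tags, which suggests a possible implementation path for the missing step. Whether the platform's dynamic-processing steps wrap Selenium is an assumption here, and the option text passed to `select_by_visible_text` is invented.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

driver = webdriver.Chrome()
driver.get("https://pt.barroso.mg.gov.br/Despesa_Total_Pessoal")

# Select.select_by_visible_text walks every <option> descendant of the
# <select>, so options grouped under <optgroup> tags are matched as well.
report_field = Select(driver.find_element(By.CSS_SELECTOR, "select"))
report_field.select_by_visible_text("Despesa Total com Pessoal")  # assumed text
driver.quit()
```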
17,568
23,383,519,168
IssuesEvent
2022-08-11 11:48:14
inmanta/inmanta
https://api.github.com/repos/inmanta/inmanta
opened
web-console installed via RPM contains changelog directories
bug process
```[rocky@iso5rc web-console]$ find /usr/share/inmanta/web-console/ -type d /usr/share/inmanta/web-console/ /usr/share/inmanta/web-console/1.10.0 /usr/share/inmanta/web-console/1.11.0 /usr/share/inmanta/web-console/1.6.0 /usr/share/inmanta/web-console/1.7.0 /usr/share/inmanta/web-console/1.8.0 /usr/share/inmanta/web-console/1.9.0 /usr/share/inmanta/web-console/1.9.1 /usr/share/inmanta/web-console/fonts /usr/share/inmanta/web-console/unreleased ``` This was tested on an iso5-next host but the issue is assumed to exist for all versions of both OSS and iso.
1.0
web-console installed via RPM contains changelog directories - ```[rocky@iso5rc web-console]$ find /usr/share/inmanta/web-console/ -type d /usr/share/inmanta/web-console/ /usr/share/inmanta/web-console/1.10.0 /usr/share/inmanta/web-console/1.11.0 /usr/share/inmanta/web-console/1.6.0 /usr/share/inmanta/web-console/1.7.0 /usr/share/inmanta/web-console/1.8.0 /usr/share/inmanta/web-console/1.9.0 /usr/share/inmanta/web-console/1.9.1 /usr/share/inmanta/web-console/fonts /usr/share/inmanta/web-console/unreleased ``` This was tested on an iso5-next host but the issue is assumed to exist for all versions of both OSS and iso.
process
web console installed via rpm contains changelog directories find usr share inmanta web console type d usr share inmanta web console usr share inmanta web console usr share inmanta web console usr share inmanta web console usr share inmanta web console usr share inmanta web console usr share inmanta web console usr share inmanta web console usr share inmanta web console fonts usr share inmanta web console unreleased this was tested on an next host but the issue is assumed to exist for all versions of both oss and iso
1
86,618
24,905,673,016
IssuesEvent
2022-10-29 07:52:14
aceade/tdntg
https://api.github.com/repos/aceade/tdntg
opened
Build WebGL upon commit
enhancement build
Rather than build and commit to the repo, I should have an automated build that handles this for me. - https://game.ci/docs/github/getting-started
1.0
Build WebGL upon commit - Rather than build and commit to the repo, I should have an automated build that handles this for me. - https://game.ci/docs/github/getting-started
non_process
build webgl upon commit rather than build and commit to the repo i should have an automated build that handles this for me
0
521,418
15,109,334,167
IssuesEvent
2021-02-08 17:42:45
bcgov/entity
https://api.github.com/repos/bcgov/entity
closed
CP193 - Dawson Co-operative Union - Passcode issues
ENTITY OPS Priority1
#### ServiceNow incident: INC0075879 #### Contact information Staff Name: Julie Harding Staff Email:julie.harding@gov.bc.ca #### Description Email from IT Ops: > Please direct this to the coops online team. Can you please generate the passcode for Dawson Co-operative Union CP193. The client called in advising they need the passcode. **Can you please check existing account name Dawson Co-operative Union that was created by Deanna Larson to ensure the coop passcode has not already been used. I think this account has been created, but likely that no coop has been affiliated yet.** #### Tasks - [x] When ticket has been created, Steven to post the ticket in RocketChat '#Operations Tasks' channel - [x] Add **entity** or **relationships** label to zenhub ticket - [x] Add 'Priority1' label to zenhub ticket - [x] Assign zenhub ticket to milestone: current, and place in pipeline: sprint backlog - [x] Reply All to IT Ops email and provide zenhub ticket number opened and which team it was assigned to - [ ] Dev/BAs to complete work & close zenhub ticket - [ ] Author of zenhub ticket to mark ServiceNow ticket as resolved or ask IT Ops to do so
1.0
CP193 - Dawson Co-operative Union - Passcode issues - #### ServiceNow incident: INC0075879 #### Contact information Staff Name: Julie Harding Staff Email:julie.harding@gov.bc.ca #### Description Email from IT Ops: > Please direct this to the coops online team. Can you please generate the passcode for Dawson Co-operative Union CP193. The client called in advising they need the passcode. **Can you please check existing account name Dawson Co-operative Union that was created by Deanna Larson to ensure the coop passcode has not already been used. I think this account has been created, but likely that no coop has been affiliated yet.** #### Tasks - [x] When ticket has been created, Steven to post the ticket in RocketChat '#Operations Tasks' channel - [x] Add **entity** or **relationships** label to zenhub ticket - [x] Add 'Priority1' label to zenhub ticket - [x] Assign zenhub ticket to milestone: current, and place in pipeline: sprint backlog - [x] Reply All to IT Ops email and provide zenhub ticket number opened and which team it was assigned to - [ ] Dev/BAs to complete work & close zenhub ticket - [ ] Author of zenhub ticket to mark ServiceNow ticket as resolved or ask IT Ops to do so
non_process
dawson co operative union passcode issues servicenow incident contact information staff name julie harding staff email julie harding gov bc ca description email from it ops please direct this to the coops online team can you please generate the passcode for dawson co operative union the client called in advising they need the passcode can you please check existing account name dawson co operative union that was created by deanna larson to ensure the coop passcode has not already been used i think this account has been created but likely that no coop has been affiliated yet tasks when ticket has been created steven to post the ticket in rocketchat operations tasks channel add entity or relationships label to zenhub ticket add label to zenhub ticket assign zenhub ticket to milestone current and place in pipeline sprint backlog reply all to it ops email and provide zenhub ticket number opened and which team it was assigned to dev bas to complete work close zenhub ticket author of zenhub ticket to mark servicenow ticket as resolved or ask it ops to do so
0
185,136
15,012,184,203
IssuesEvent
2021-02-01 00:44:37
CSU-Booking-Platform/application
https://api.github.com/repos/CSU-Booking-Platform/application
closed
Update quality measurements stats
documentation task
<!--- Provide a general summary of the issue in the Title above --> ### What needs to be done Just add lines marking the sprint deadlines and add the end-of-sprint metrics in a nice table ### Out of scope <!-- What is out of scope for this task -->
1.0
Update quality measurements stats - <!--- Provide a general summary of the issue in the Title above --> ### What needs to be done Just add lines marking the sprint deadlines and add the end-of-sprint metrics in a nice table ### Out of scope <!-- What is out of scope for this task -->
non_process
update quality measurements stats what needs to be done just add lines over sprint deadlines and add end of sprint metrics in a nice table out of scope
0
13,009
15,367,122,493
IssuesEvent
2021-03-02 02:34:59
MetaMask/metamask-extension
https://api.github.com/repos/MetaMask/metamask-extension
closed
pull-ws sub dependency resolution leads to a 404
L09-process
Output of `yarn setup`: ``` $ yarn install && yarn patch-package && yarn allow-scripts warning ../package.json: No license field [1/5] πŸ” Validating package.json... [2/5] πŸ” Resolving packages... [3/5] 🚚 Fetching packages... error An unexpected error occurred: "https://codeload.github.com/hugomrdias/pull-ws/tar.gz/8e2ce0bb3b1cd6804828316e937fff8e0bef6225: Request failed \"404 Not Found\"". info If you think this is a bug, please open a bug report with the information provided in "/Users/ryanlanese/Projects/metamask-extension/yarn-error.log". ``` This points to a forked repo of `pull-ws` that no longer exists: https://github.com/hugomrdias/pull-ws https://web.archive.org/web/20210124144255/https://github.com/hugomrdias/pull-ws It should resolve to the latest of https://github.com/pull-stream/pull-ws
1.0
pull-ws sub dependency resolution leads to a 404 - Output of `yarn setup`: ``` $ yarn install && yarn patch-package && yarn allow-scripts warning ../package.json: No license field [1/5] πŸ” Validating package.json... [2/5] πŸ” Resolving packages... [3/5] 🚚 Fetching packages... error An unexpected error occurred: "https://codeload.github.com/hugomrdias/pull-ws/tar.gz/8e2ce0bb3b1cd6804828316e937fff8e0bef6225: Request failed \"404 Not Found\"". info If you think this is a bug, please open a bug report with the information provided in "/Users/ryanlanese/Projects/metamask-extension/yarn-error.log". ``` This points to a forked repo of `pull-ws` that no longer exists: https://github.com/hugomrdias/pull-ws https://web.archive.org/web/20210124144255/https://github.com/hugomrdias/pull-ws It should resolve to the latest of https://github.com/pull-stream/pull-ws
process
pull ws sub dependency resolution leads to a output of yarn setup yarn install yarn patch package yarn allow scripts warning package json no license field πŸ” validating package json πŸ” resolving packages 🚚 fetching packages error an unexpected error occurred request failed not found info if you think this is a bug please open a bug report with the information provided in users ryanlanese projects metamask extension yarn error log this points to a forked repo of pull ws that no longer exists it should resolve to the latest of
1
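The pull-ws failure in the record above is a general hazard: a yarn.lock entry pinned to a commit tarball in a fork can start returning 404 once the fork is deleted. A minimal sketch, assuming a standard yarn.lock in the working directory, that probes each pinned codeload URL so dead forks surface before an install fails (the filename and URL pattern are assumptions, not taken from the report):

```python
import re
import urllib.request


def check_pinned_tarballs(lockfile_path: str = "yarn.lock") -> None:
    """Probe every codeload.github.com tarball pinned in the lockfile
    and print the HTTP status, so deleted forks show up as 404s."""
    with open(lockfile_path) as fh:
        text = fh.read()
    urls = set(re.findall(r'https://codeload\.github\.com/[^\s"]+/tar\.gz/[0-9a-f]+', text))
    for url in sorted(urls):
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code  # 404 here means the fork or commit is gone
        print(status, url)
```

A HEAD request keeps the check cheap; swapping to GET is the safer choice if the host rejects HEAD.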
22,324
30,889,351,981
IssuesEvent
2023-08-04 02:35:52
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
pih 1.48035 has 2 GuardDog issues
guarddog typosquatting silent-process-execution
https://pypi.org/project/pih https://inspector.pypi.io/project/pih ```{ "dependency": "pih", "version": "1.48035", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pid, pip", "silent-process-execution": [ { "location": "pih-1.48035/pih/tools.py:778", "code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpoymqjjen/pih" } }```
1.0
pih 1.48035 has 2 GuardDog issues - https://pypi.org/project/pih https://inspector.pypi.io/project/pih ```{ "dependency": "pih", "version": "1.48035", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pid, pip", "silent-process-execution": [ { "location": "pih-1.48035/pih/tools.py:778", "code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpoymqjjen/pih" } }```
process
pih has guarddog issues dependency pih version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pid pip silent process execution location pih pih tools py code result subprocess run command stdin subprocess devnull stdout subprocess devnull stderr subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpoymqjjen pih
1
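For context on the silent-process-execution finding in the record above: the flagged call discards all three standard streams, so the child process leaves no observable trace. A hedged sketch of the flagged pattern next to a more auditable variant (the child command here is illustrative, not pih's actual binary):

```python
import subprocess
import sys

cmd = [sys.executable, "-c", "print('hello from a child process')"]

# The pattern GuardDog flags: all three standard streams are
# redirected to /dev/null, so the child runs with no visible output.
subprocess.run(cmd, stdin=subprocess.DEVNULL,
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# A more auditable alternative: capture the streams so the caller
# can log or inspect what the child actually did.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode, result.stdout.strip())
```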
7,313
10,450,973,882
IssuesEvent
2019-09-19 11:50:21
GetTerminus/ngx-tools
https://api.github.com/repos/GetTerminus/ngx-tools
opened
Date utilities and date mocking
Focus: utility Goal: Process Improvement Type: feature
Now that the core products are writing more code and tests concerning dates, we should look into exposing some helpers for them. 1. Transform datetime into UTC: https://github.com/GetTerminus/engage-gui/blob/master/src/app/utilities/date-utils.ts#L1-L10 1. Transform date into UTC: https://github.com/GetTerminus/engage-gui/blob/master/src/app/utilities/date-utils.ts#L12-L18 1. Utility to mock date for testing: https://github.com/GetTerminus/engage-gui/blob/master/src/setup-jest.ts#L15-L23 --- - [ ] Add utilities - [ ] Add tests - [ ] Update docs
1.0
Date utilities and date mocking - Now that the core products are writing more code and tests concerning dates, we should look into exposing some helpers for them. 1. Transform datetime into UTC: https://github.com/GetTerminus/engage-gui/blob/master/src/app/utilities/date-utils.ts#L1-L10 1. Transform date into UTC: https://github.com/GetTerminus/engage-gui/blob/master/src/app/utilities/date-utils.ts#L12-L18 1. Utility to mock date for testing: https://github.com/GetTerminus/engage-gui/blob/master/src/setup-jest.ts#L15-L23 --- - [ ] Add utilities - [ ] Add tests - [ ] Update docs
process
date utilities and date mocking now that the core products are writing more code and tests concerning dates we should look into exposing some helpers for them transform datetime into utc transform date into utc utility to mock date for testing add utilities add tests update docs
1
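The helpers requested in the record above are small enough to sketch. The originals are TypeScript; this is a hedged Python rendering of the same three ideas (UTC-normalize a datetime, UTC-normalize a date, and freeze "now" in tests), with illustrative names only:

```python
from datetime import date, datetime, time, timezone
from unittest import mock


def datetime_to_utc(dt: datetime) -> datetime:
    """Normalize a datetime to UTC; naive values are assumed local."""
    if dt.tzinfo is None:
        dt = dt.astimezone()              # attach the local zone
    return dt.astimezone(timezone.utc)


def date_to_utc(d: date) -> datetime:
    """Interpret a calendar date as local midnight, then convert to UTC."""
    return datetime_to_utc(datetime.combine(d, time.min))


def current_time() -> datetime:
    """A seam the code under test calls instead of datetime.now()."""
    return datetime.now(timezone.utc)


# Mocking the date for a test: patch the seam, not the stdlib class.
frozen = datetime(2019, 9, 19, tzinfo=timezone.utc)
with mock.patch(f"{__name__}.current_time", return_value=frozen):
    assert current_time() == frozen

print(datetime_to_utc(datetime.now()), date_to_utc(date.today()))
```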
22,450
6,246,171,977
IssuesEvent
2017-07-13 02:50:31
xceedsoftware/wpftoolkit
https://api.github.com/repos/xceedsoftware/wpftoolkit
closed
Incorrect assembly revision number
CodePlex
<b>doomer[CodePlex]</b> <br />Errors while loading WPFToolkit.Extended.dll from 19 Sep 2012: === Pre-bind state information === ... LOG: DisplayName = WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4 ... ... LOG: Post-policy reference: WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4 ... LOG: Assembly Name is: WPFToolkit.Extended, Version=1.7.4644.13122, Culture=neutral, PublicKeyToken=3e4669d2f30244f4 WRN: Comparing the assembly name resulted in the mismatch: Revision Number ERR: The assembly reference did not match the assembly definition found. ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.
1.0
Incorrect assembly revision number - <b>doomer[CodePlex]</b> <br />Errors while loading WPFToolkit.Extended.dll from 19 Sep 2012: === Pre-bind state information === ... LOG: DisplayName = WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4 ... ... LOG: Post-policy reference: WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4 ... LOG: Assembly Name is: WPFToolkit.Extended, Version=1.7.4644.13122, Culture=neutral, PublicKeyToken=3e4669d2f30244f4 WRN: Comparing the assembly name resulted in the mismatch: Revision Number ERR: The assembly reference did not match the assembly definition found. ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.
non_process
incorrect assembly revision number doomer errors while loading wpftoolkit extended dll from sep pre bind state information log displayname wpftoolkit extended version culture neutral publickeytoken log post policy reference wpftoolkit extended version culture neutral publickeytoken log assembly name is wpftoolkit extended version culture neutral publickeytoken wrn comparing the assembly name resulted in the mismatch revision number err the assembly reference did not match the assembly definition found err failed to complete setup of assembly hr probing terminated
0
15,806
19,989,451,972
IssuesEvent
2022-01-31 03:20:02
chellimiller/sassy-design-tokens
https://api.github.com/repos/chellimiller/sassy-design-tokens
opened
Add Sass unit tests
process
The `_tokens.scss` file should have unit tests. This will probably be with [`sass-true`](https://www.npmjs.com/package/sass-true).
1.0
Add Sass unit tests - The `_tokens.scss` file should have unit tests. This will probably be with [`sass-true`](https://www.npmjs.com/package/sass-true).
process
add sass unit tests the tokens scss file should have unit tests this will probably be with
1
114,926
9,769,653,451
IssuesEvent
2019-06-06 09:04:47
LIBCAS/INDIHU-Exhibition
https://api.github.com/repos/LIBCAS/INDIHU-Exhibition
closed
Image cropping (chapter introduction, scratch card)
waiting for test
Is it possible to somehow influence the cropping of the background image in the chapter introduction, or to turn this option off? Currently the program crops images automatically, which is not always appropriate. For example, in this case: File preview: ![vys2](https://user-images.githubusercontent.com/42067492/57700765-e86b6f80-765a-11e9-92c3-b1adb0d541e3.jpg) Chapter introduction preview: ![vys1](https://user-images.githubusercontent.com/42067492/57700802-f620f500-765a-11e9-81f2-e594a168a6d2.jpg)
1.0
Image cropping (chapter introduction, scratch card) - Is it possible to somehow influence the cropping of the background image in the chapter introduction, or to turn this option off? Currently the program crops images automatically, which is not always appropriate. For example, in this case: File preview: ![vys2](https://user-images.githubusercontent.com/42067492/57700765-e86b6f80-765a-11e9-92c3-b1adb0d541e3.jpg) Chapter introduction preview: ![vys1](https://user-images.githubusercontent.com/42067492/57700802-f620f500-765a-11e9-81f2-e594a168a6d2.jpg)
non_process
image cropping chapter introduction scratch card is it possible to somehow influence the cropping of the background image in the chapter introduction or to turn this option off currently the program crops images automatically which is not always appropriate for example in this case file preview chapter introduction preview
0
2,185
4,322,161,212
IssuesEvent
2016-07-25 13:12:40
jhipster/generator-jhipster
https://api.github.com/repos/jhipster/generator-jhipster
closed
Add Kubernetes support
feature request in-progress microservice
I'm looking into adding Kubernetes support w/ @PierreBesson and @jdubois. This is a tracking issue. :D
1.0
Add Kubernetes support - I'm looking into adding Kubernetes support w/ @PierreBesson and @jdubois. This is a tracking issue. :D
non_process
add kubernetes support i m looking into adding kubernetes support w pierrebesson and jdubois this is a tracking issue d
0
12,956
15,339,442,984
IssuesEvent
2021-02-27 02:01:31
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Check that our wrappers don't have enumerable internal properties and methods
AREA: client STATE: Stale SYSTEM: client side processing TYPE: enhancement
- [ ] `AttributesWrapper` - [ ] `LocationWrapper` - [ ] `StorageWrapper` - [ ] etc
1.0
Check that our wrappers don't have enumerable internal properties and methods - - [ ] `AttributesWrapper` - [ ] `LocationWrapper` - [ ] `StorageWrapper` - [ ] etc
process
check that our wrappers don t have enumerable internal properties and methods attributeswrapper locationwrapper storagewrapper etc
1
14,392
17,403,960,013
IssuesEvent
2021-08-03 01:21:18
CodeForPittsburgh/food-access-map-data
https://api.github.com/repos/CodeForPittsburgh/food-access-map-data
opened
Test public view/no auth on googlesheets4 package
data processing
This one's for @conorotompkins. We found the google account-level authorization in the prep_Just_Harvest_Google_Sheets.R script (not the exact title) isn't working in our workflow, so we need to test switching to a public (anyone with a link can view) google sheet workbook and see if a no-auth approach will work. Conor will switch the link to https://docs.google.com/spreadsheets/d/1LT1lssZFVcUH-07a9XhbzalpvV_mrSb3dNOd3ln20xQ/ and use the deauth() function in googlesheets4 as needed. https://googlesheets4.tidyverse.org/articles/articles/auth.html
1.0
Test public view/no auth on googlesheets4 package - This one's for @conorotompkins. We found the google account-level authorization in the prep_Just_Harvest_Google_Sheets.R script (not the exact title) isn't working in our workflow, so we need to test switching to a public (anyone with a link can view) google sheet workbook and see if a no-auth approach will work. Conor will switch the link to https://docs.google.com/spreadsheets/d/1LT1lssZFVcUH-07a9XhbzalpvV_mrSb3dNOd3ln20xQ/ and use the deauth() function in googlesheets4 as needed. https://googlesheets4.tidyverse.org/articles/articles/auth.html
process
test public view no auth on package this one s for conorotompkins we found the google account level authorization in the prep just harvest google sheets r script not the exact title isn t working in our workflow so we need to test switching to a public anyone with a link can view google sheet workbook and see if a no auth approach will work conor will switch the link to and use the deauth function in as needed
1
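The no-auth idea in the record above generalizes beyond R: a workbook shared as "anyone with the link can view" can be read without credentials via its CSV export endpoint. A hedged Python sketch of the same approach (the sheet ID is taken from the issue; the gid and the use of pandas are assumptions):

```python
import pandas as pd

SHEET_ID = "1LT1lssZFVcUH-07a9XhbzalpvV_mrSb3dNOd3ln20xQ"
GID = 0  # first worksheet; other tabs have their own gid

url = (f"https://docs.google.com/spreadsheets/d/{SHEET_ID}"
       f"/export?format=csv&gid={GID}")

# No OAuth flow is involved: the request succeeds only if the
# workbook really is public ("anyone with a link can view").
df = pd.read_csv(url)
print(df.head())
```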
10,555
13,341,116,781
IssuesEvent
2020-08-28 15:25:45
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
stageDependencies
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
Release notes from 4 May said that there is new support for variables between stages called stageDependencies. https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-168-update Tried it in a pipeline and got "An error occurred while loading the YAML build pipeline. Unrecognized value: 'stageDependencies'. Located at position 5 within expression: eq( stageDependencies.Build.Plan.outputs['plan.output'], '2' )" How are you supposed to use this feature? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3f151218-9a11-0078-e038-f96198a76143 * Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9 * Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
stageDependencies - Release notes from 4 May said that there is new support for variables between stages called stageDependencies. https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-168-update Tried it in a pipeline and got "An error occurred while loading the YAML build pipeline. Unrecognized value: 'stageDependencies'. Located at position 5 within expression: eq( stageDependencies.Build.Plan.outputs['plan.output'], '2' )" How are you supposed to use this feature? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3f151218-9a11-0078-e038-f96198a76143 * Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9 * Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
stagedependencies release notes from may said that there is new support for variables between stages called stagedependencies tried it in a pipeline and got an error occurred while loading the yaml build pipeline unrecognized value stagedependencies located at position within expression eq stagedependencies build plan outputs how are you supposed to use this feature document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
20,197
26,773,207,548
IssuesEvent
2023-01-31 15:25:52
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Release checklist 0.74
enhancement process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.74.0) - [x] GitHub checks for branch are passing - [ ] Automated Kubernetes deployment successful - [ ] Tag release - [ ] Upload release artifacts - [ ] Manual Submission for GCP Marketplace verification by google - [ ] Publish marketplace release - [ ] Publish release ## Performance - [ ] Deploy to Kubernetes - [ ] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests ## Previewnet - [ ] Deploy to Kubernetes ## Staging - [ ] Deploy to Kubernetes ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM - [ ] Deploy to ETL ### Alternatives _No response_
1.0
Release checklist 0.74 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.74.0) - [x] GitHub checks for branch are passing - [ ] Automated Kubernetes deployment successful - [ ] Tag release - [ ] Upload release artifacts - [ ] Manual Submission for GCP Marketplace verification by google - [ ] Publish marketplace release - [ ] Publish release ## Performance - [ ] Deploy to Kubernetes - [ ] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests ## Previewnet - [ ] Deploy to Kubernetes ## Staging - [ ] Deploy to Kubernetes ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM - [ ] Deploy to ETL ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts manual submission for gcp marketplace verification by google publish marketplace release publish release performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests previewnet deploy to kubernetes staging deploy to kubernetes testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
1
533,577
15,593,997,587
IssuesEvent
2021-03-18 13:26:49
miaowware/qrm2
https://api.github.com/repos/miaowware/qrm2
closed
Move resources data to another repo
enhancement priority-med refactor
This would allow for updating resources (list of band plans, maps, etc) without having to re-release the bot. Would require some refactoring.
1.0
Move resources data to another repo - This would allow for updating resources (list of band plans, maps, etc) without having to re-release the bot. Would require some refactoring.
non_process
move resources data to another repo this would allow for updating resources list of band plans maps etc without having to re release the bot would require some refactoring
0
4,243
7,187,131,058
IssuesEvent
2018-02-02 03:07:38
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Ways to optimize traces
monitors-all status-inprocess type-enhancement
I need a list of blocks with 'huge' traces (there are many, all from around Oct 2016 during the DDoS attack). There are even more from when Vitalik cleaned up the DDoS. I can skip over all the blocks that were created during that time and speed up the traversal across the entire chain by about 99.9%. If I do this, I should definitely document it, and I should make sure that I can enable it with a --deep command.
1.0
Ways to optimize traces - I need a list of blocks with 'huge' traces (there are many, all from around Oct 2016 during the DDoS attack). There are even more from when Vitalik cleaned up the DDoS. I can skip over all the blocks that were created during that time and speed up the traversal across the entire chain by about 99.9%. If I do this, I should definitely document it, and I should make sure that I can enable it with a --deep command.
process
ways to optimize traces i need a list of blocks with huge traces there are many all from around oct during the ddos attack there are even more from when vitalik cleaned up the ddos i can skip over all the blocks that were created during that time and speed up the traversal across the entire chain by about if i do this i should definitely document it and i should make sure that i can enable it with a deep command
1
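The optimization described in the record above is just a range skip during traversal. A minimal sketch, assuming illustrative block ranges (the real DDoS-era ranges would come from configuration) and a deep flag mirroring the proposed --deep command:

```python
# Illustrative ranges only; the actual trace-heavy blocks would be configured.
HEAVY_TRACE_RANGES = [(2_283_000, 2_463_000), (2_675_000, 2_718_000)]


def blocks_to_scan(first: int, last: int, deep: bool = False):
    """Yield block numbers in [first, last]; unless deep=True, leap over
    the configured trace-heavy ranges instead of visiting them."""
    block = first
    while block <= last:
        skipped = False
        if not deep:
            for lo, hi in HEAVY_TRACE_RANGES:
                if lo <= block <= hi:
                    block = hi + 1   # jump past the whole heavy range
                    skipped = True
                    break
        if not skipped:
            yield block
            block += 1


# Only the five blocks before the heavy range are visited here.
print(list(blocks_to_scan(2_282_995, 2_283_005)))
```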
1,565
4,164,119,854
IssuesEvent
2016-06-18 15:27:15
pwittchen/ReactiveWiFi
https://api.github.com/repos/pwittchen/ReactiveWiFi
closed
Release 0.1.0
release process
**Initial release notes**: - added ` Observable<SupplicantState> observeSupplicantState(context)` method, which observes the current WPA supplicant state. - added `Observable<WifiInfo> observeWifiAccessPointChanges(context)` method, which observes the WiFi network the device is connected to. **Things to do**: - [x] bump library version to 0.1.0 :arrow_right: done in https://github.com/pwittchen/ReactiveWiFi/commit/9050eb431d369db957b5948a8f013b0bf4dd51bb - [x] upload Archives to Maven Central Repository - [x] close and release artifact on Nexus - [x] update gh-pages - [x] update `CHANGELOG.md` after Maven Sync - [x] update download section in `README.md` after Maven Sync - [x] create new GitHub release
1.0
Release 0.1.0 - **Initial release notes**: - added ` Observable<SupplicantState> observeSupplicantState(context)` method, which observes the current WPA supplicant state. - added `Observable<WifiInfo> observeWifiAccessPointChanges(context)` method, which observes the WiFi network the device is connected to. **Things to do**: - [x] bump library version to 0.1.0 :arrow_right: done in https://github.com/pwittchen/ReactiveWiFi/commit/9050eb431d369db957b5948a8f013b0bf4dd51bb - [x] upload Archives to Maven Central Repository - [x] close and release artifact on Nexus - [x] update gh-pages - [x] update `CHANGELOG.md` after Maven Sync - [x] update download section in `README.md` after Maven Sync - [x] create new GitHub release
process
release initial release notes added observable observesupplicantstate context method which observes the current wpa supplicant state added observable observewifiaccesspointchanges context method which observes the wifi network the device is connected to things to do bump library version to arrow right done in upload archives to maven central repository close and release artifact on nexus update gh pages update changelog md after maven sync update download section in readme md after maven sync create new github release
1
385,554
26,643,087,963
IssuesEvent
2023-01-25 07:32:53
HSLdevcom/jore4
https://api.github.com/repos/HSLdevcom/jore4
closed
Create a document that lists automated tests per module
documentation testing
List existing test cases in an Excel file to bring visibility to test automation coverage. Use this excel as a base:
1.0
Create a document that lists automated tests per module - List existing test cases in an Excel file to bring visibility to test automation coverage. Use this excel as a base:
non_process
create a document that lists automated tests per module list existing test cases in an excel file to bring visibility to test automation coverage use this excel as a base
0
17,581
23,391,799,029
IssuesEvent
2022-08-11 18:35:56
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/filter] Add the ability to filter SeverityNumber below a certain severity
priority:p2 processor/filter
**Is your feature request related to a problem? Please describe.** Commonly, you want to reduce the amount of logs that you are sending, and a common way to do this is to filter logs under a certain severity level (e.g. only log "info" level and higher). Currently, there is no way to do that. **Describe the solution you'd like** Add a `min_sev` field to the logs portion of the `filterprocessor` (and the `filterlogs` package) in order to allow for filtering of log signals below `min_sev`. Essentially, the filter would match any logs at or above the level specified by `min_sev`. **Design Questions:** 1. Should the configured severities be the aliases of the severity, or the numeric value of the severity? Should we somehow support both? (e.g. should info be the integer 9, or should it be the literal "INFO"?) (if we were to use strings, we would follow [the table from the spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#displaying-severity)) 2. Should records with no severity (severity 0) match or not match if min_sev is specified with a value > 0? Related issue: #9235 - This issue is mainly about getting parity with the attributes processor, which allows filtering on SeverityText, but SeverityText is less standard since it represents the source severity.
1.0
[processor/filter] Add the ability to filter SeverityNumber below a certain severity - **Is your feature request related to a problem? Please describe.** Commonly, you want to reduce the amount of logs that you are sending, and a common way to do this is to filter logs under a certain severity level (e.g. only log "info" level and higher). Currently, there is no way to do that. **Describe the solution you'd like** Add a `min_sev` field to the logs portion of the `filterprocessor` (and the `filterlogs` package) in order to allow for filtering of log signals below `min_sev`. Essentially, the filter would match any logs at or above the level specified by `min_sev`. **Design Questions:** 1. Should the configured severities be the aliases of the severity, or the numeric value of the severity? Should we somehow support both? (e.g. should info be the integer 9, or should it be the literal "INFO"?) (if we were to use strings, we would follow [the table from the spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#displaying-severity)) 2. Should records with no severity (severity 0) match or not match if min_sev is specified with a value > 0? Related issue: #9235 - This issue is mainly about getting parity with the attributes processor, which allows filtering on SeverityText, but SeverityText is less standard since it represents the source severity.
process
add the ability to filter severitynumber below a certain severity is your feature request related to a problem please describe commonly you want to reduce the amount of logs that you are sending and a common way to do this is to filter logs under a certain severity level e g only log info level and higher currently there is no way to do that describe the solution you d like add a min sev field to the logs portion of the filterprocessor and the filterlogs package in order to allow for filtering of log signals below min sev essentially the filter would match any logs at or above the level specified by min sev design questions should the configured severities be the aliases of the severity or the numeric value of the severity should we somehow support both e g should info be the integer or should it be the literal info if we were to use strings we would follow should records with no severity severity match or not match if min sev is specified with a value related issue this issue is mainly about getting parity with the attributes processor which allows filtering on severitytext but severitytext is less standard since it represents the source severity
1
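A hedged sketch of the matching rule proposed in the record above, showing one possible answer to both design questions: accept either the spec alias or the raw SeverityNumber, and treat records with no severity (SeverityNumber 0) as non-matching. This is an illustration, not the collector's actual implementation:

```python
# OpenTelemetry log SeverityNumber aliases, per the spec's display table.
SEVERITY_ALIAS = {"TRACE": 1, "DEBUG": 5, "INFO": 9,
                  "WARN": 13, "ERROR": 17, "FATAL": 21}


def resolve_min_sev(value) -> int:
    """Accept either an alias like 'INFO' or a raw number like 9."""
    if isinstance(value, str):
        return SEVERITY_ALIAS[value.upper()]
    return int(value)


def matches(record_severity_number: int, min_sev) -> bool:
    """Match records at or above min_sev. Records with no severity
    (SeverityNumber == 0) never match a positive threshold here,
    which is one possible answer to design question 2."""
    threshold = resolve_min_sev(min_sev)
    if record_severity_number == 0:
        return False
    return record_severity_number >= threshold


print(matches(9, "INFO"), matches(5, "INFO"), matches(0, "INFO"))
```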
691
3,184,308,778
IssuesEvent
2015-09-27 07:54:51
pwittchen/ReactiveNetwork
https://api.github.com/repos/pwittchen/ReactiveNetwork
closed
Release v. 0.1.1
release process
Initial release notes: - bumped RxJava to v. 1.0.14 - bumped Gradle Build Tools to v. 1.3.1
1.0
Release v. 0.1.1 - Initial release notes: - bumped RxJava to v. 1.0.14 - bumped Gradle Build Tools to v. 1.3.1
process
release v initial release notes bumped rxjava to v bumped gradle build tools to v
1
19,905
26,358,689,468
IssuesEvent
2023-01-11 11:43:13
JustBru00/RenamePlugin
https://api.github.com/repos/JustBru00/RenamePlugin
opened
Add support for ItemsAdder animated rainbow effect in item renames and other text if possible
Addition Request Medium Priority Processing
Requested by `@powerdev` on spigotmc.org. (@alexanderdidio on github.com) Currently waiting on API implementation from ItemsAdder before starting on this issue. PluginBugs/Issues-ItemsAdder#2256 ![image](https://user-images.githubusercontent.com/11063799/211797636-f3d831f9-c7ae-4ca3-9b8a-61027bcf3c30.png)
1.0
Add support for ItemsAdder animated rainbow effect in item renames and other text if possible - Requested by `@powerdev` on spigotmc.org. (@alexanderdidio on github.com) Currently waiting on API implementation from ItemsAdder before starting on this issue. PluginBugs/Issues-ItemsAdder#2256 ![image](https://user-images.githubusercontent.com/11063799/211797636-f3d831f9-c7ae-4ca3-9b8a-61027bcf3c30.png)
process
add support for itemsadder animated rainbow effect in item renames and other text if possible requested by powerdev on spigotmc org alexanderdidio on github com currently waiting on api implementation from itemsadder before starting on this issue pluginbugs issues itemsadder
1
83,278
16,110,421,299
IssuesEvent
2021-04-27 20:22:09
mathjax/MathJax
https://api.github.com/repos/mathjax/MathJax
closed
Document option to disable noundefined extension for TeX in v3
Code Example Fixed v3 v3.1
**Is your feature request related to a problem? Please describe.** I don't want users to be able to save invalid TeX. **Describe the solution you'd like** The library should throw an error if invalid TeX is entered while `noundefined` is disabled. **Describe alternatives you've considered** v2.7, where this was possible: `TeX: { noUndefined: { disabled: true } }`
1.0
Document option to disable noundefined extension for TeX in v3 - **Is your feature request related to a problem? Please describe.** I don't want users to be able to save invalid TeX. **Describe the solution you'd like** The library should throw an error if invalid TeX is entered while `noundefined` is disabled. **Describe alternatives you've considered** v2.7, where this was possible: `TeX: { noUndefined: { disabled: true } }`
non_process
document option to disable noundefined extension for tex in is your feature request related to a problem please describe i don t want users to be able to save invalid tex describe the solution you d like the library should throw an error if invalid tex is entered while noundefined is disabled describe alternatives you ve considered v where this was possible tex noundefined disabled true
0
761,711
26,693,733,899
IssuesEvent
2023-01-27 08:28:23
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
github.com - see bug description
browser-chrome-mobile priority-critical
<!-- @browser: Chrome Mobile 108.0.0 --> <!-- @ua_header: Mozilla/5.0 (Linux; Android 12; SM-A326U1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Mobile Safari/537.36 --> <!-- @reported_with: unknown --> **URL**: https://github.com/apps/woocommmerce-downloads-app **Browser / Version**: Chrome Mobile 108.0.0 **Operating System**: Android 12 **Tested Another Browser**: Yes Internet Explorer **Problem type**: Something else **Description**: Won't allow my account login **Steps to Reproduce**: Still won't allow my account login <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/7792e93a-8d9f-4b89-a2ee-4c1aad949cba.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀️_
1.0
github.com - see bug description - <!-- @browser: Chrome Mobile 108.0.0 --> <!-- @ua_header: Mozilla/5.0 (Linux; Android 12; SM-A326U1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Mobile Safari/537.36 --> <!-- @reported_with: unknown --> **URL**: https://github.com/apps/woocommmerce-downloads-app **Browser / Version**: Chrome Mobile 108.0.0 **Operating System**: Android 12 **Tested Another Browser**: Yes Internet Explorer **Problem type**: Something else **Description**: Won't allow my account login **Steps to Reproduce**: Still won't allow my account login <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/7792e93a-8d9f-4b89-a2ee-4c1aad949cba.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀️_
non_process
github com see bug description url browser version chrome mobile operating system android tested another browser yes internet explorer problem type something else description won t allow my account login steps to reproduce still won t allow my account login view the screenshot img alt screenshot src browser configuration none from with ❀️
0
51,906
21,912,126,383
IssuesEvent
2022-05-21 08:01:00
Azure/azure-cli
https://api.github.com/repos/Azure/azure-cli
closed
Trouble viewing Logs
Web Apps Service Attention customer-reported needs-author-feedback no-recent-activity
### **This is autogenerated. Please review and update as needed.** ## Describe the bug **Command Name** `az webapp log download` **Errors:** ``` HTTPSConnectionPool(host='dungeonmaster.scm.pgprod.p.azurewebsites.net', port=443): Max retries exceeded with url: /dump (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])"))) Traceback (most recent call last): site-packages/urllib3/contrib/pyopenssl.py, ln 485, in wrap_socket cnx.do_handshake() python3.8/site-packages/OpenSSL/SSL.py, ln 1934, in do_handshake self._raise_ssl_error(self._ssl, result) ... raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='dungeonmaster.scm.pgprod.p.azurewebsites.net', port=443): Max retries exceeded with url: /dump (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])"))) ``` ## To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. - _Put any pre-requisite steps here..._ - `az webapp log download --log-file {} --name {} --resource-group {} --subscription {}` ## Expected Behavior ## Environment Summary ``` macOS-10.16-x86_64-i386-64bit Python 3.8.1 azure-cli 2.2.0 ``` ## Additional Context <!--Please don't remove this:--> <!--auto-generated-->
1.0
Trouble viewing Logs - ### **This is autogenerated. Please review and update as needed.** ## Describe the bug **Command Name** `az webapp log download` **Errors:** ``` HTTPSConnectionPool(host='dungeonmaster.scm.pgprod.p.azurewebsites.net', port=443): Max retries exceeded with url: /dump (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])"))) Traceback (most recent call last): site-packages/urllib3/contrib/pyopenssl.py, ln 485, in wrap_socket cnx.do_handshake() python3.8/site-packages/OpenSSL/SSL.py, ln 1934, in do_handshake self._raise_ssl_error(self._ssl, result) ... raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='dungeonmaster.scm.pgprod.p.azurewebsites.net', port=443): Max retries exceeded with url: /dump (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])"))) ``` ## To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. - _Put any pre-requisite steps here..._ - `az webapp log download --log-file {} --name {} --resource-group {} --subscription {}` ## Expected Behavior ## Environment Summary ``` macOS-10.16-x86_64-i386-64bit Python 3.8.1 azure-cli 2.2.0 ``` ## Additional Context <!--Please don't remove this:--> <!--auto-generated-->
non_process
trouble viewing logs this is autogenerated please review and update as needed describe the bug command name az webapp log download errors httpsconnectionpool host dungeonmaster scm pgprod p azurewebsites net port max retries exceeded with url dump caused by sslerror sslerror bad handshake error traceback most recent call last site packages contrib pyopenssl py ln in wrap socket cnx do handshake site packages openssl ssl py ln in do handshake self raise ssl error self ssl result raise maxretryerror pool url error or responseerror cause exceptions maxretryerror httpsconnectionpool host dungeonmaster scm pgprod p azurewebsites net port max retries exceeded with url dump caused by sslerror sslerror bad handshake error to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az webapp log download log file name resource group subscription expected behavior environment summary macos python azure cli additional context
0
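The tls_process_server_certificate failure in the record above usually means a TLS-intercepting proxy whose CA is not in the client's trust store. A minimal sketch that reproduces the handshake outside the CLI to confirm the diagnosis (the host and usage are illustrative):

```python
import socket
import ssl


def check_tls(host: str, port: int = 443) -> str:
    """Attempt a default-context TLS handshake; an SSLCertVerificationError
    here points at the same trust-store problem the CLI hit."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()


print(check_tls("example.com"))
```

If this fails the same way, exporting the proxy's CA certificate and pointing the REQUESTS_CA_BUNDLE environment variable at it is the usual remedy for the Azure CLI.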
15,598
19,722,941,650
IssuesEvent
2022-01-13 17:01:18
RobertCraigie/prisma-client-py
https://api.github.com/repos/RobertCraigie/prisma-client-py
closed
Python 3.6 EOL
kind/discussion process/candidate
Python 3.6 support is being dropped on the 23rd of December 2021; after this, we should also drop support for Python 3.6. Changes: - [ ] Dataclass support - [ ] Generic BaseModels - [x] Update documentation to use `asyncio.run()`
1.0
Python 3.6 EOL - Python 3.6 support is being dropped on the 23rd of December 2021; after this, we should also drop support for Python 3.6. Changes: - [ ] Dataclass support - [ ] Generic BaseModels - [x] Update documentation to use `asyncio.run()`
process
python eol python support is being dropped on the of december after this we should also drop support for python changes dataclass support generic basemodels update documentation to use asyncio run
1
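On the last checklist item in the record above: asyncio.run() was added in Python 3.7, which is why dropping 3.6 lets the docs use it. A minimal before/after sketch of what that documentation change amounts to (the coroutine body is a stand-in, not the library's actual API):

```python
import asyncio


async def main() -> None:
    await asyncio.sleep(0)   # stand-in for awaiting client calls


# Python 3.6 style: manage the event loop by hand.
loop = asyncio.get_event_loop()
loop.run_until_complete(main())

# Python 3.7+ style: one call creates, runs, and closes the loop.
asyncio.run(main())
```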
6,842
9,986,576,185
IssuesEvent
2019-07-10 19:28:04
AcademySoftwareFoundation/OpenCue
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
closed
Add OpenCue repo topics and update description
process
To improve discoverability of the OpenCue repo, we could add some topics. See for example the [list of topics for Agones](https://github.com/googleforgames/agones), such as `game-development` and `multiplayer`. For OpenCue, topics such as the following might be useful: - visual-effects - vfx - fx - animation - rendering - game-development To improve discoverability of the OpenCue website, we could also update the repo description with a link. See for example the following ASWF projects: - https://github.com/AcademySoftwareFoundation/OpenColorIO - https://github.com/AcademySoftwareFoundation/openexr Also the description could provide a bit more detail, something along the lines of: > OpenCue is a render management system you can deploy for visual effects and animation productions. https://www.opencue.io
1.0
Add OpenCue repo topics and update description - To improve discoverability of the OpenCue repo, we could add some topics. See for example the [list of topics for Agones](https://github.com/googleforgames/agones), such as `game-development` and `multiplayer`. For OpenCue, topics such as the following might be useful: - visual-effects - vfx - fx - animation - rendering - game-development To improve discoverability of the OpenCue website, we could also update the repo description with a link. See for example the following ASWF projects: - https://github.com/AcademySoftwareFoundation/OpenColorIO - https://github.com/AcademySoftwareFoundation/openexr Also the description could provide a bit more detail, something along the lines of: > OpenCue is a render management system you can deploy for visual effects and animation productions. https://www.opencue.io
process
add opencue repo topics and update description to improve discoverability of the opencue repo we could add some topics see for example the such as game development and multiplayer for opencue topics such as the following might be useful visual effects vfx fx animation rendering game development to improve discoverability of the opencue website we could also update the repo description with a link see for example the following aswf projects also the description could provide a bit more detail something along the lines of opencue is a render management system you can deploy for visual effects and animation productions
1
61,371
7,461,563,655
IssuesEvent
2018-03-31 04:46:44
gskleres/FruityMod-StS
https://api.github.com/repos/gskleres/FruityMod-StS
reopened
Card 30 Siphon Power
card design updated ready for testing
Card 30 Name: Siphon Power Rarity: 2-Uncommon Type: Attack Basic [2 cost]: Deal 8 damage. Apply 1 Vulnerable. Gain 1 Strength. Upgrade [2 cost]: Deal 11 damage. Apply 2 Vulnerable. Gain 1 Strength.
1.0
Card 30 Siphon Power - Card 30 Name: Siphon Power Rarity: 2-Uncommon Type: Attack Basic [2 cost]: Deal 8 damage. Apply 1 Vulnerable. Gain 1 Strength. Upgrade [2 cost]: Deal 11 damage. Apply 2 Vulnerable. Gain 1 Strength.
non_process
card siphon power card name siphon power rarity uncommon type attack basic deal damage apply vulnerable gain strength upgrade deal damage apply vulnerable gain strength
0
265,598
20,103,687,918
IssuesEvent
2022-02-07 08:20:54
jdi-testing/jdn-ai
https://api.github.com/repos/jdi-testing/jdn-ai
closed
Update readme backend update info
documentation
docker-compose stop; docker-compose rm -f; docker-compose pull; docker-compose up -d
1.0
Update readme backend update info - docker-compose stop; docker-compose rm -f; docker-compose pull; docker-compose up -d
non_process
update readme backend update info docker compose stop docker compose rm f docker compose pull docker compose up d
0
513,516
14,922,767,746
IssuesEvent
2021-01-23 16:10:28
PyTorchLightning/pytorch-lightning
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
closed
Cannot perform test on GLUE example
Priority P1 bug / fix help wanted tutorial / example won't fix
## πŸ› Bug I tried to run the [GLUE example](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb) in the README.md file to see what does the model return when running test. However, when I create a new cell to run `trainer.test()`, I got the following error ``` /usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__() TypeError: no default __reduce__ due to non-trivial __cinit__ ``` After a quick Google search, I suspect it has something to do with the `setup()` method in `GLUEDataModule`. To verify it, I compare the data module before and after I run `trainer.test()` with the code ``` for iter in dm.dataset["test"]: print(iter) ``` The result obtained before the `trainer.test()` is as expected, but the result after is changed back to original data from `datasets` library. ### To Reproduce In the [Colab GLUE notebook](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb), create new cells with the following code ``` mocked_args = """ --model_name_or_path distilbert-base-cased --task_name mrpc --max_epochs 1 --gpus 1""".split() args = parse_args(mocked_args) dm, model, trainer = main(args) ``` ``` for iter in dm.dataset["test"]: print(iter) ``` ``` trainer.fit(model, datamodule=dm) ``` ``` trainer.test(verbose=True) # Error ``` ``` for iter in dm.dataset["test"]: print(iter) ``` ### Expected behavior The output of the 2nd cell and 5th cell should be the same. The 4th cell should perform test on finetuned model. ### Environment * Google Colab * CUDA: - GPU: - Tesla T4 - available: True - version: 10.1 * Packages: - numpy: 1.18.5 - pyTorch_debug: True - pyTorch_version: 1.7.0+cu101 - pytorch-lightning: 1.0.8 - tqdm: 4.41.1 * System: - OS: Linux - architecture: - 64bit - - processor: x86_64 - python: 3.6.9 - version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
1.0
Cannot perform test on GLUE example - ## πŸ› Bug I tried to run the [GLUE example](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb) in the README.md file to see what does the model return when running test. However, when I create a new cell to run `trainer.test()`, I got the following error ``` /usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__() TypeError: no default __reduce__ due to non-trivial __cinit__ ``` After a quick Google search, I suspect it has something to do with the `setup()` method in `GLUEDataModule`. To verify it, I compare the data module before and after I run `trainer.test()` with the code ``` for iter in dm.dataset["test"]: print(iter) ``` The result obtained before the `trainer.test()` is as expected, but the result after is changed back to original data from `datasets` library. ### To Reproduce In the [Colab GLUE notebook](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb), create new cells with the following code ``` mocked_args = """ --model_name_or_path distilbert-base-cased --task_name mrpc --max_epochs 1 --gpus 1""".split() args = parse_args(mocked_args) dm, model, trainer = main(args) ``` ``` for iter in dm.dataset["test"]: print(iter) ``` ``` trainer.fit(model, datamodule=dm) ``` ``` trainer.test(verbose=True) # Error ``` ``` for iter in dm.dataset["test"]: print(iter) ``` ### Expected behavior The output of the 2nd cell and 5th cell should be the same. The 4th cell should perform test on finetuned model. ### Environment * Google Colab * CUDA: - GPU: - Tesla T4 - available: True - version: 10.1 * Packages: - numpy: 1.18.5 - pyTorch_debug: True - pyTorch_version: 1.7.0+cu101 - pytorch-lightning: 1.0.8 - tqdm: 4.41.1 * System: - OS: Linux - architecture: - 64bit - - processor: x86_64 - python: 3.6.9 - version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
non_process
cannot perform test on glue example πŸ› bug i tried to run the in the readme md file to see what does the model return when running test however when i create a new cell to run trainer test i got the following error usr local lib dist packages zmq backend cython socket cpython linux gnu so in zmq backend cython socket socket reduce cython typeerror no default reduce due to non trivial cinit after a quick google search i suspect it has something to do with the setup method in gluedatamodule to verify it i compare the data module before and after i run trainer test with the code for iter in dm dataset print iter the result obtained before the trainer test is as expected but the result after is changed back to original data from datasets library to reproduce in the create new cells with the following code mocked args model name or path distilbert base cased task name mrpc max epochs gpus split args parse args mocked args dm model trainer main args for iter in dm dataset print iter trainer fit model datamodule dm trainer test verbose true error for iter in dm dataset print iter expected behavior the output of the cell and cell should be the same the cell should perform test on finetuned model environment google colab cuda gpu tesla available true version packages numpy pytorch debug true pytorch version pytorch lightning tqdm system os linux architecture processor python version smp thu jul pdt
0
435
2,868,698,817
IssuesEvent
2015-06-05 20:29:17
gremau/NMEG_fluxproc_testing
https://api.github.com/repos/gremau/NMEG_fluxproc_testing
closed
Standardize relative humidity data to percent (0-100)
Ameriflux files QC Process
Ameriflux requests data in percent (0-100). There is some inconsistency in this among files and processing code that should be fixed. This will require changes in `UNM_Ameriflux_prepare_output_data` and `UNM_RemoveBadData` at a minimum.
1.0
Standardize relative humidity data to percent (0-100) - Ameriflux requests data in percent (0-100). There is some inconsistency in this among files and processing code that should be fixed. This will require changes in `UNM_Ameriflux_prepare_output_data` and `UNM_RemoveBadData` at a minimum.
process
standardize relative humidity data to percent ameriflux requests data in percent there is some inconsistency in this among files and processing code that should be fixed this will require changes in unm ameriflux prepare output data and unm removebaddata at a minimum
1
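A hedged sketch of the normalization the record above asks for: detect whether a relative-humidity series is on the 0-1 fraction scale and rescale it to 0-100 percent. The detection threshold is an assumption for illustration, not taken from the UNM code:

```python
import numpy as np


def rh_to_percent(rh: np.ndarray) -> np.ndarray:
    """Return relative humidity on the 0-100 scale Ameriflux expects.
    Heuristic: if every finite value is <= 1.5, assume the series is
    a 0-1 fraction and multiply by 100; otherwise pass it through."""
    finite = rh[np.isfinite(rh)]
    if finite.size and finite.max() <= 1.5:
        return rh * 100.0
    return rh


print(rh_to_percent(np.array([0.35, 0.80, np.nan])))  # rescaled to percent
print(rh_to_percent(np.array([35.0, 80.0])))          # already percent
```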
588,679
17,668,762,951
IssuesEvent
2021-08-23 00:34:53
sglavoie/uol-grades-calculator
https://api.github.com/repos/sglavoie/uol-grades-calculator
opened
All functions should return values to be consumed by an API
enhancement Priority: Medium
Needed by: https://github.com/sglavoie/uol-grades-calculator-server/issues/1 ## Acceptance * [ ] All commands and sub-commands return values to be consumed by an API. ## Tasks Return values for the following commands: * [ ] `generate-sample` * [ ] `plot modules` * [x] `check score-accuracy` * [x] `summarize all` * [x] `summarize done` * [x] `summarize progress` ## Analysis As it stands right now, `ugc` prints some output to the terminal but often does not return anything from functions or methods. This needs to be updated so that an API can work with the flow of data involved.
1.0
All functions should return values to be consumed by an API - Needed by: https://github.com/sglavoie/uol-grades-calculator-server/issues/1 ## Acceptance * [ ] All commands and sub-commands return values to be consumed by an API. ## Tasks Return values for the following commands: * [ ] `generate-sample` * [ ] `plot modules` * [x] `check score-accuracy` * [x] `summarize all` * [x] `summarize done` * [x] `summarize progress` ## Analysis As it stands right now, `ugc` prints some output to the terminal but often does not return anything from functions or methods. This needs to be updated so that an API can work with the flow of data involved.
non_process
all functions should return values to be consumed by an api needed by acceptance all commands and sub commands return values to be consumed by an api tasks return values for the following commands generate sample plot modules check score accuracy summarize all summarize done summarize progress analysis as it stands right now ugc prints some output to the terminal but often does not return anything from functions or methods this needs to be updated so that an api can work with the flow of data involved
0
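The refactor described in the record above is mechanical: each command keeps printing for the CLI but also returns the structure it printed, so an API layer can consume it. A hedged sketch of that shape, with hypothetical names rather than ugc's actual internals:

```python
def summarize_progress(grades: dict) -> dict:
    """Compute the summary AND return it; printing stays for the CLI,
    while an API caller can use the returned dict directly."""
    done = [g for g in grades.values() if g is not None]
    summary = {
        "modules_done": len(done),
        "average": sum(done) / len(done) if done else None,
    }
    print(summary)      # terminal output, as before
    return summary      # new: a value for the API layer


result = summarize_progress({"Algorithms": 72, "Databases": 81, "ML": None})
assert result["modules_done"] == 2
```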
744
4,153,979,996
IssuesEvent
2016-06-16 09:48:37
ElderByte-/Warden
https://api.github.com/repos/ElderByte-/Warden
closed
Don't expose H2 console in embedded mode
type: architecture
In embedded mode, there is no guarantee which datasource the spring data repositories are using as they get picked up dynamically.
1.0
Don't expose H2 console in embedded mode - In embedded mode, there is no guarantee which datasource the spring data repositories are using as they get picked up dynamically.
non_process
don t expose console in embedded mode in embedded mode there is no guarantee which datasource the spring data repositories are using as they get picked up dynamically
0
60,371
14,542,462,967
IssuesEvent
2020-12-15 15:44:38
GooseWSS/kittydar
https://api.github.com/repos/GooseWSS/kittydar
opened
CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz, uglify-js-1.3.4.tgz
security vulnerability
## CVE-2015-8857 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-2.2.5.tgz</b>, <b>uglify-js-1.3.4.tgz</b></p></summary> <p> <details><summary><b>uglify-js-2.2.5.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz</a></p> <p>Path to dependency file: kittydar/package.json</p> <p>Path to vulnerable library: kittydar/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - :x: **uglify-js-2.2.5.tgz** (Vulnerable Library) </details> <details><summary><b>uglify-js-1.3.4.tgz</b></p></summary> <p>JavaScript parser and compressor/beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.4.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.4.tgz</a></p> <p>Path to dependency file: kittydar/package.json</p> <p>Path to vulnerable library: kittydar/node_modules/browser-pack/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - browserify-2.10.2.tgz (Root Library) - browser-pack-0.7.1.tgz - :x: **uglify-js-1.3.4.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/GooseWSS/kittydar/commit/d47cdc79e976369ea4b8d754bfbe5c6578421393">d47cdc79e976369ea4b8d754bfbe5c6578421393</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript. <p>Publish Date: 2017-01-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857>CVE-2015-8857</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p> <p>Release Date: 2018-12-15</p> <p>Fix Resolution: v2.4.24</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"2.2.5","isTransitiveDependency":false,"dependencyTree":"uglify-js:2.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.4.24"},{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"1.3.4","isTransitiveDependency":true,"dependencyTree":"browserify:2.10.2;browser-pack:0.7.1;uglify-js:1.3.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.4.24"}],"vulnerabilityIdentifier":"CVE-2015-8857","vulnerabilityDetails":"The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz, uglify-js-1.3.4.tgz - ## CVE-2015-8857 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-2.2.5.tgz</b>, <b>uglify-js-1.3.4.tgz</b></p></summary> <p> <details><summary><b>uglify-js-2.2.5.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz</a></p> <p>Path to dependency file: kittydar/package.json</p> <p>Path to vulnerable library: kittydar/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - :x: **uglify-js-2.2.5.tgz** (Vulnerable Library) </details> <details><summary><b>uglify-js-1.3.4.tgz</b></p></summary> <p>JavaScript parser and compressor/beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.4.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.4.tgz</a></p> <p>Path to dependency file: kittydar/package.json</p> <p>Path to vulnerable library: kittydar/node_modules/browser-pack/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - browserify-2.10.2.tgz (Root Library) - browser-pack-0.7.1.tgz - :x: **uglify-js-1.3.4.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/GooseWSS/kittydar/commit/d47cdc79e976369ea4b8d754bfbe5c6578421393">d47cdc79e976369ea4b8d754bfbe5c6578421393</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript. <p>Publish Date: 2017-01-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857>CVE-2015-8857</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p>
</details>
<p></p>

<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>

<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.4.24</p>

</p>
</details>
<p></p>

<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"2.2.5","isTransitiveDependency":false,"dependencyTree":"uglify-js:2.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.4.24"},{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"1.3.4","isTransitiveDependency":true,"dependencyTree":"browserify:2.10.2;browser-pack:0.7.1;uglify-js:1.3.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.4.24"}],"vulnerabilityIdentifier":"CVE-2015-8857","vulnerabilityDetails":"The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in uglify js tgz uglify js tgz cve high severity vulnerability vulnerable libraries uglify js tgz uglify js tgz uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file kittydar package json path to vulnerable library kittydar node modules uglify js package json dependency hierarchy x uglify js tgz vulnerable library uglify js tgz javascript parser and compressor beautifier toolkit library home page a href path to dependency file kittydar package json path to vulnerable library kittydar node modules browser pack node modules uglify js package json dependency hierarchy browserify tgz root library browser pack tgz x uglify js tgz vulnerable library found in head commit a href found in base branch master vulnerability details the uglify js package before for node js does not properly account for non boolean values when rewriting boolean expressions which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten javascript publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails the uglify js package before for node js does not properly account for non boolean values when rewriting boolean expressions which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten javascript vulnerabilityurl
0
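The CVE-2015-8857 record above describes the bug only abstractly. A minimal TypeScript sketch makes the failure mode concrete; the specific rewrite shown here, collapsing `!a != !b` to `a != b`, is a commonly cited instance of the bug class and is an assumption, since the advisory text states only that boolean-expression rewriting mishandled non-boolean values.

```typescript
// Sketch of the unsafe boolean rewrite class fixed in uglify-js v2.4.24.
// The operands are numbers, i.e. the non-boolean case the advisory says
// the compressor failed to account for.
const a: number = 1;
const b: number = 2;

// Source expression: compares the *truthiness* of a and b.
const original = !a != !b; // !1 != !2  =>  false != false  =>  false

// A rewrite that assumes boolean operands drops both negations.
// For booleans this is sound; for 1 and 2 it flips the result.
const rewritten = a != b; // 1 != 2  =>  true

console.log({ original, rewritten }); // { original: false, rewritten: true }
```

If a guard like `original` gated an authentication or sanitization branch, the minified bundle would take the opposite branch from the source code, which is the bypass scenario the vulnerability details warn about.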
69,662
7,157,045,653
IssuesEvent
2018-01-26 18:27:04
vmware/vic-product
https://api.github.com/repos/vmware/vic-product
opened
Need to implement dangling cleanup within vic-product
component/test priority/medium
On the shared VC, I found an old VCH and many old VCH and VOL folders in the datastore. We need to make sure that we have some dangling cleanup prior to running our tests on the system, similar to the VIC engine cleanup.
1.0
Need to implement dangling cleanup within vic-product - On the shared VC, I found an old VCH and many old VCH and VOL folders in the datastore. We need to make sure that we have some dangling cleanup prior to running our tests on the system, similar to the VIC engine cleanup.
non_process
need to implement dangling cleanup within vic product on the shared vc i found an old vch and many old vch and vol folders in the datastore we need to make sure that we have some dangling cleanup prior to running our tests on the system similar to vic engine cleanup
0
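The vic-product record above asks for dangling cleanup before test runs but does not specify a mechanism. Below is a minimal sketch of what such a pre-test step could look like, assuming the govc CLI is available and authenticated through the usual GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables; the "VCH-" name pattern, the "-VOL" folder suffix, and the "sharedVmfs-0" datastore name are hypothetical placeholders, not details taken from the issue.

```typescript
// Hypothetical pre-test cleanup sketch for stale VCH artifacts on a shared VC.
import { execSync } from "child_process";

function sh(cmd: string): string {
  return execSync(cmd, { encoding: "utf8" }).trim();
}

// Destroy leftover VCH virtual machines (name pattern is an assumption).
const staleVMs = sh(`govc find / -type m -name 'VCH-*'`)
  .split("\n")
  .filter((line) => line.length > 0);
for (const vm of staleVMs) {
  sh(`govc vm.destroy '${vm}'`);
}

// Remove leftover VCH and volume folders from the shared datastore.
const datastore = "sharedVmfs-0"; // assumed datastore name
const entries = sh(`govc datastore.ls -ds ${datastore}`).split("\n");
for (const entry of entries) {
  if (/^VCH-/.test(entry) || /-VOL/.test(entry)) {
    sh(`govc datastore.rm -ds ${datastore} '${entry}'`);
  }
}
```

Running a step like this at the start of each suite would keep the shared VC from accumulating artifacts across failed runs, mirroring the VIC engine cleanup the issue refers to.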