Column schema (dtype kind and value/length statistics):

| Column | Kind | Stats |
|---|---|---|
| `Unnamed: 0` | int64 | min 0, max 832k |
| `id` | float64 | min 2.49B, max 32.1B |
| `type` | stringclasses | 1 value |
| `created_at` | stringlengths | 19–19 |
| `repo` | stringlengths | 7–112 |
| `repo_url` | stringlengths | 36–141 |
| `action` | stringclasses | 3 values |
| `title` | stringlengths | 1–744 |
| `labels` | stringlengths | 4–574 |
| `body` | stringlengths | 9–211k |
| `index` | stringclasses | 10 values |
| `text_combine` | stringlengths | 96–211k |
| `label` | stringclasses | 2 values |
| `text` | stringlengths | 96–188k |
| `binary_label` | int64 | min 0, max 1 |
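The schema above describes a GitHub `IssuesEvent` issue-classification dataset. As a minimal sketch of working with it in pandas, the snippet below builds an in-memory frame from two abridged rows of the preview that follows (the source file is not named in this dump, so no file is read; the long `body`/`text_combine`/`text` fields are omitted for brevity):

```python
import pandas as pd

# Two abridged rows mirroring the records below; column names follow the schema.
rows = [
    {"id": 10_678_413_434.0, "type": "IssuesEvent",
     "created_at": "2019-10-21 17:13:42",
     "repo": "googleapis/gapic-generator-go", "action": "closed",
     "title": "chore: Update samplegen readme",
     "label": "process", "binary_label": 1},
    {"id": 15_712_475_804.0, "type": "IssuesEvent",
     "created_at": "2021-03-27 12:17:28",
     "repo": "emilykaldwin1827/goof", "action": "closed",
     "title": "WS-2016-0075 (Medium) detected in moment-2.15.1.tgz",
     "label": "non_process", "binary_label": 0},
]
df = pd.DataFrame(rows)

# created_at is a fixed 19-character "YYYY-MM-DD HH:MM:SS" string, so it
# parses cleanly into a datetime column.
df["created_at"] = pd.to_datetime(df["created_at"])

print(df["label"].value_counts().to_dict())
```

With a real export, `pd.read_csv(...)` on the file would replace the in-memory construction; everything downstream is the same.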
---
Unnamed: 0: 7,556
id: 10,678,413,434
type: IssuesEvent
created_at: 2019-10-21 17:13:42
repo: googleapis/gapic-generator-go
repo_url: https://api.github.com/repos/googleapis/gapic-generator-go
action: closed
title: chore: Update samplegen readme
labels: type: process
body:
To include: - how to pass in sample config when running as a standalone program - how to include samples when running as a protoc plugin/docker image - how to generate samples only when running as a protoc plugin/docker image
index: 1.0
text_combine:
chore: Update samplegen readme - To include: - how to pass in sample config when running as a standalone program - how to include samples when running as a protoc plugin/docker image - how to generate samples only when running as a protoc plugin/docker image
label: process
text:
chore update samplegen readme to include how to pass in sample config when running as a standalone program how to include samples when running as a protoc plugin docker image how to generate samples only when running as a protoc plugin docker image
binary_label: 1
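In this record, `text_combine` is the title and body joined with " - ", and `text` looks like a lowercased copy with URLs, HTML, digits, and punctuation stripped. A hedged sketch of one normalization that reproduces this behavior on a fragment of the title (the dataset's actual cleaning pipeline is not documented in this dump):

```python
import re

def normalize(text_combine: str) -> str:
    """One plausible cleaning step: lowercase, crude HTML-tag removal,
    keep letters only, collapse whitespace. This is an assumption, not
    the dataset's documented pipeline."""
    t = text_combine.lower()
    t = re.sub(r"<[^>]+>", " ", t)   # drop HTML tags
    t = re.sub(r"[^a-z\s]", " ", t)  # drop digits and punctuation
    return re.sub(r"\s+", " ", t).strip()

sample = "chore: Update samplegen readme - To include: how to pass in sample config"
print(normalize(sample))
# prints: chore update samplegen readme to include how to pass in sample config
```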
---
Unnamed: 0: 83,683
id: 15,712,475,804
type: IssuesEvent
created_at: 2021-03-27 12:17:28
repo: emilykaldwin1827/goof
repo_url: https://api.github.com/repos/emilykaldwin1827/goof
action: closed
title: WS-2016-0075 (Medium) detected in moment-2.15.1.tgz
labels: security vulnerability
body:
## WS-2016-0075 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.15.1.tgz</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://registry.npmjs.org/moment/-/moment-2.15.1.tgz">https://registry.npmjs.org/moment/-/moment-2.15.1.tgz</a></p> <p>Path to dependency file: goof/package.json</p> <p>Path to vulnerable library: goof/node_modules/moment/package.json</p> <p> Dependency Hierarchy: - :x: **moment-2.15.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/emilykaldwin1827/goof/commit/27563f2447d85b487d3c44ea67f0f561f0c44b91">27563f2447d85b487d3c44ea67f0f561f0c44b91</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular expression denial of service vulnerability in the moment package, by using a specific 40 characters long string in the "format" method. <p>Publish Date: 2016-10-24 <p>URL: <a href=https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9>WS-2016-0075</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/pull/3525">https://github.com/moment/moment/pull/3525</a></p> <p>Release Date: 2016-10-24</p> <p>Fix Resolution: 2.15.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
WS-2016-0075 (Medium) detected in moment-2.15.1.tgz - ## WS-2016-0075 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.15.1.tgz</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://registry.npmjs.org/moment/-/moment-2.15.1.tgz">https://registry.npmjs.org/moment/-/moment-2.15.1.tgz</a></p> <p>Path to dependency file: goof/package.json</p> <p>Path to vulnerable library: goof/node_modules/moment/package.json</p> <p> Dependency Hierarchy: - :x: **moment-2.15.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/emilykaldwin1827/goof/commit/27563f2447d85b487d3c44ea67f0f561f0c44b91">27563f2447d85b487d3c44ea67f0f561f0c44b91</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular expression denial of service vulnerability in the moment package, by using a specific 40 characters long string in the "format" method. <p>Publish Date: 2016-10-24 <p>URL: <a href=https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9>WS-2016-0075</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/pull/3525">https://github.com/moment/moment/pull/3525</a></p> <p>Release Date: 2016-10-24</p> <p>Fix Resolution: 2.15.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
ws medium detected in moment tgz ws medium severity vulnerability vulnerable library moment tgz parse validate manipulate and display dates library home page a href path to dependency file goof package json path to vulnerable library goof node modules moment package json dependency hierarchy x moment tgz vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service vulnerability in the moment package by using a specific characters long string in the format method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
binary_label: 0
---
Unnamed: 0: 8,546
id: 11,723,366,200
type: IssuesEvent
created_at: 2020-03-10 08:58:56
repo: scikit-learn/scikit-learn
repo_url: https://api.github.com/repos/scikit-learn/scikit-learn
action: closed
title: OneHotEncoder drop 'if_binary' drop one column from all categorical variables
labels: Bug module:preprocessing
body:
#### Describe the bug The `drop` parameter in `OneHotEncoder` when set to `if_binary` drops one column from all categorical variables not only binary variables. I need this option in #15706, therefore I would like to propose a PR unless @rushabh-v would take care of this. #### Steps/Code to Reproduce ``` import numpy as np import scipy as sp import pandas as pd from sklearn.datasets import fetch_openml from sklearn.compose import make_column_transformer from sklearn.preprocessing import OneHotEncoder from sklearn.pipeline import make_pipeline from sklearn.linear_model import Ridge from sklearn.compose import TransformedTargetRegressor from sklearn.model_selection import train_test_split survey = fetch_openml(data_id=534, as_frame=True) X = survey.data[survey.feature_names] y = survey.target.values.ravel() X_train, X_test, y_train, y_test = train_test_split( X, y, random_state=42 ) categorical_columns = ['RACE', 'OCCUPATION', 'SECTOR', 'MARR', 'UNION', 'SEX', 'SOUTH'] preprocessor = make_column_transformer( (OneHotEncoder(drop='if_binary'), categorical_columns), remainder='passthrough' ) model = make_pipeline( preprocessor, TransformedTargetRegressor( regressor=Ridge(alpha=1e-10), func=np.log10, inverse_func=sp.special.exp10 ) ) # Fit the model only on categorical variables model.fit(X_train[categorical_columns], y_train) print("Input feature names") print(model.named_steps['columntransformer'] .named_transformers_['onehotencoder'].categories_) print("Number of modeled input features") print(len(model.named_steps['transformedtargetregressor'].regressor_.coef_)) print(model.named_steps['columntransformer'].named_transformers_['onehotencoder'].drop_idx_) feature_names = (model.named_steps['columntransformer'] .named_transformers_['onehotencoder'] .get_feature_names(input_features=categorical_columns)) print("Output feature names") print(feature_names) print("Number of output feature names") print(len(feature_names)) ``` #### Expected Results The length of input and 
output feature array is the same. #### Actual Results ``` Input feature names [array(['Hispanic', 'Other', 'White'], dtype=object), array(['Clerical', 'Management', 'Other', 'Professional', 'Sales', 'Service'], dtype=object), array(['Construction', 'Manufacturing', 'Other'], dtype=object), array(['Married', 'Unmarried'], dtype=object), array(['member', 'not_member'], dtype=object), array(['female', 'male'], dtype=object), array(['no', 'yes'], dtype=object)] Number of modeled input features 16 Output feature names ['RACE_Hispanic' 'RACE_Other' 'OCCUPATION_Clerical' 'OCCUPATION_Management' 'OCCUPATION_Other' 'OCCUPATION_Professional' 'OCCUPATION_Sales' 'SECTOR_Construction' 'SECTOR_Manufacturing' 'MARR_Unmarried' 'UNION_not_member' 'SEX_male' 'SOUTH_yes'] Number of output feature names 13 ``` #### Versions ``` >>> import sklearn; sklearn.show_versions() System: python: 3.7.5 (default, Dec 15 2019, 17:54:26) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] executable: /home/cmarmo/.skldevenv/bin/python machine: Linux-5.3.16-300.fc31.x86_64-x86_64-with-fedora-31-Thirty_One Python dependencies: pip: 20.0.2 setuptools: 40.8.0 sklearn: 0.23.dev0 numpy: 1.17.2 scipy: 1.3.1 Cython: 0.29.13 pandas: 0.25.1 matplotlib: 3.1.1 joblib: 0.13.2 Built with OpenMP: True ```
index: 1.0
text_combine:
OneHotEncoder drop 'if_binary' drop one column from all categorical variables - #### Describe the bug The `drop` parameter in `OneHotEncoder` when set to `if_binary` drops one column from all categorical variables not only binary variables. I need this option in #15706, therefore I would like to propose a PR unless @rushabh-v would take care of this. #### Steps/Code to Reproduce ``` import numpy as np import scipy as sp import pandas as pd from sklearn.datasets import fetch_openml from sklearn.compose import make_column_transformer from sklearn.preprocessing import OneHotEncoder from sklearn.pipeline import make_pipeline from sklearn.linear_model import Ridge from sklearn.compose import TransformedTargetRegressor from sklearn.model_selection import train_test_split survey = fetch_openml(data_id=534, as_frame=True) X = survey.data[survey.feature_names] y = survey.target.values.ravel() X_train, X_test, y_train, y_test = train_test_split( X, y, random_state=42 ) categorical_columns = ['RACE', 'OCCUPATION', 'SECTOR', 'MARR', 'UNION', 'SEX', 'SOUTH'] preprocessor = make_column_transformer( (OneHotEncoder(drop='if_binary'), categorical_columns), remainder='passthrough' ) model = make_pipeline( preprocessor, TransformedTargetRegressor( regressor=Ridge(alpha=1e-10), func=np.log10, inverse_func=sp.special.exp10 ) ) # Fit the model only on categorical variables model.fit(X_train[categorical_columns], y_train) print("Input feature names") print(model.named_steps['columntransformer'] .named_transformers_['onehotencoder'].categories_) print("Number of modeled input features") print(len(model.named_steps['transformedtargetregressor'].regressor_.coef_)) print(model.named_steps['columntransformer'].named_transformers_['onehotencoder'].drop_idx_) feature_names = (model.named_steps['columntransformer'] .named_transformers_['onehotencoder'] .get_feature_names(input_features=categorical_columns)) print("Output feature names") print(feature_names) print("Number of output feature 
names") print(len(feature_names)) ``` #### Expected Results The length of input and output feature array is the same. #### Actual Results ``` Input feature names [array(['Hispanic', 'Other', 'White'], dtype=object), array(['Clerical', 'Management', 'Other', 'Professional', 'Sales', 'Service'], dtype=object), array(['Construction', 'Manufacturing', 'Other'], dtype=object), array(['Married', 'Unmarried'], dtype=object), array(['member', 'not_member'], dtype=object), array(['female', 'male'], dtype=object), array(['no', 'yes'], dtype=object)] Number of modeled input features 16 Output feature names ['RACE_Hispanic' 'RACE_Other' 'OCCUPATION_Clerical' 'OCCUPATION_Management' 'OCCUPATION_Other' 'OCCUPATION_Professional' 'OCCUPATION_Sales' 'SECTOR_Construction' 'SECTOR_Manufacturing' 'MARR_Unmarried' 'UNION_not_member' 'SEX_male' 'SOUTH_yes'] Number of output feature names 13 ``` #### Versions ``` >>> import sklearn; sklearn.show_versions() System: python: 3.7.5 (default, Dec 15 2019, 17:54:26) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] executable: /home/cmarmo/.skldevenv/bin/python machine: Linux-5.3.16-300.fc31.x86_64-x86_64-with-fedora-31-Thirty_One Python dependencies: pip: 20.0.2 setuptools: 40.8.0 sklearn: 0.23.dev0 numpy: 1.17.2 scipy: 1.3.1 Cython: 0.29.13 pandas: 0.25.1 matplotlib: 3.1.1 joblib: 0.13.2 Built with OpenMP: True ```
label: process
text:
onehotencoder drop if binary drop one column from all categorical variables describe the bug the drop parameter in onehotencoder when set to if binary drops one column from all categorical variables not only binary variables i need this option in therefore i would like to propose a pr unless rushabh v would take care of this steps code to reproduce import numpy as np import scipy as sp import pandas as pd from sklearn datasets import fetch openml from sklearn compose import make column transformer from sklearn preprocessing import onehotencoder from sklearn pipeline import make pipeline from sklearn linear model import ridge from sklearn compose import transformedtargetregressor from sklearn model selection import train test split survey fetch openml data id as frame true x survey data y survey target values ravel x train x test y train y test train test split x y random state categorical columns race occupation sector marr union sex south preprocessor make column transformer onehotencoder drop if binary categorical columns remainder passthrough model make pipeline preprocessor transformedtargetregressor regressor ridge alpha func np inverse func sp special fit the model only on categorical variables model fit x train y train print input feature names print model named steps named transformers categories print number of modeled input features print len model named steps regressor coef print model named steps named transformers drop idx feature names model named steps named transformers get feature names input features categorical columns print output feature names print feature names print number of output feature names print len feature names expected results the length of input and output feature array is the same actual results input feature names dtype object array clerical management other professional sales service dtype object array dtype object array dtype object array dtype object array dtype object array dtype object number of modeled input features 
output feature names race hispanic race other occupation clerical occupation management occupation other occupation professional occupation sales sector construction sector manufacturing marr unmarried union not member sex male south yes number of output feature names versions import sklearn sklearn show versions system python default dec executable home cmarmo skldevenv bin python machine linux with fedora thirty one python dependencies pip setuptools sklearn numpy scipy cython pandas matplotlib joblib built with openmp true
binary_label: 1
---
Unnamed: 0: 11,254
id: 14,020,254,779
type: IssuesEvent
created_at: 2020-10-29 19:23:59
repo: dotnet/runtime
repo_url: https://api.github.com/repos/dotnet/runtime
action: closed
title: Start Process on remote machine
labels: api-suggestion area-System.Diagnostics.Process untriaged
body:
## Background and Motivation there are some workarounds to launch a Process on a remote machine like: psexec https://stackoverflow.com/questions/25782308/execute-exe-on-remote-machine con: requires external binaries wmi https://stackoverflow.com/questions/428276/how-to-execute-a-command-in-a-remote-computer con: just works on windows ssh https://github.com/sshnet/SSH.NET con: just works with ssh but there is no buildin way to do it. and certainly no way that is cross platform. I suppose the user has to tell dotnet how to start the process by providing the remote platform. if this can be avoided with some black magic the new api can be reduced to just providing the machine name. ## Proposed API ```diff namespace System.Diagnostics { public class ProcessStartInfo { + public string MachineName { get; set; } = Environment.MachineName; + public System.Runtime.InteropServices.OSPlatform? Platform { get; set; } } ``` ## Usage Examples ``` C# Process.Start(new ProcessStartInfo() { FileName = "ping", Arguments = "8.8.8.8", MachineName = "someothermachine" Platform = OSPlatform.Windows }); ``` ## Risks additional complexity in the implementation in case MachineName != Environment.MachineName and Platform != RuntimeInformation.CurrentOSPlatform
index: 1.0
text_combine:
Start Process on remote machine - ## Background and Motivation there are some workarounds to launch a Process on a remote machine like: psexec https://stackoverflow.com/questions/25782308/execute-exe-on-remote-machine con: requires external binaries wmi https://stackoverflow.com/questions/428276/how-to-execute-a-command-in-a-remote-computer con: just works on windows ssh https://github.com/sshnet/SSH.NET con: just works with ssh but there is no buildin way to do it. and certainly no way that is cross platform. I suppose the user has to tell dotnet how to start the process by providing the remote platform. if this can be avoided with some black magic the new api can be reduced to just providing the machine name. ## Proposed API ```diff namespace System.Diagnostics { public class ProcessStartInfo { + public string MachineName { get; set; } = Environment.MachineName; + public System.Runtime.InteropServices.OSPlatform? Platform { get; set; } } ``` ## Usage Examples ``` C# Process.Start(new ProcessStartInfo() { FileName = "ping", Arguments = "8.8.8.8", MachineName = "someothermachine" Platform = OSPlatform.Windows }); ``` ## Risks additional complexity in the implementation in case MachineName != Environment.MachineName and Platform != RuntimeInformation.CurrentOSPlatform
label: process
text:
start process on remote machine background and motivation there are some workarounds to launch a process on a remote machine like psexec con requires external binaries wmi con just works on windows ssh con just works with ssh but there is no buildin way to do it and certainly no way that is cross platform i suppose the user has to tell dotnet how to start the process by providing the remote platform if this can be avoided with some black magic the new api can be reduced to just providing the machine name proposed api diff namespace system diagnostics public class processstartinfo public string machinename get set environment machinename public system runtime interopservices osplatform platform get set usage examples c process start new processstartinfo filename ping arguments machinename someothermachine platform osplatform windows risks additional complexity in the implementation in case machinename environment machinename and platform runtimeinformation currentosplatform
binary_label: 1
---
Unnamed: 0: 372,347
id: 11,012,989,279
type: IssuesEvent
created_at: 2019-12-04 19:29:14
repo: openmsupply/mobile
repo_url: https://api.github.com/repos/openmsupply/mobile
action: closed
title: Wizard component
labels: Bug: development Docs: not needed Effort: small Feature Module: dispensary Priority: high
body:
## Is your feature request related to a problem? Please describe. We don't have a Wizard component ## Describe the solution you'd like A component which integrates the two components #1598 and #1597 into a single component for a wizard ## Describe alternatives you've considered N/A ## Additional context N/A
index: 1.0
text_combine:
Wizard component - ## Is your feature request related to a problem? Please describe. We don't have a Wizard component ## Describe the solution you'd like A component which integrates the two components #1598 and #1597 into a single component for a wizard ## Describe alternatives you've considered N/A ## Additional context N/A
label: non_process
text:
wizard component is your feature request related to a problem please describe we don t have a wizard component describe the solution you d like a component which integrates the two components and into a single component for a wizard describe alternatives you ve considered n a additional context n a
binary_label: 0
---
Unnamed: 0: 10,013
id: 13,043,882,778
type: IssuesEvent
created_at: 2020-07-29 02:56:37
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `DayOfWeek` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description Port the scalar function `DayOfWeek` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `DayOfWeek` from TiDB - ## Description Port the scalar function `DayOfWeek` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function dayofweek from tidb description port the scalar function dayofweek from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1
---
Unnamed: 0: 4,247
id: 3,004,916,478
type: IssuesEvent
created_at: 2015-07-26 12:59:26
repo: brian-team/brian2
repo_url: https://api.github.com/repos/brian-team/brian2
action: opened
title: Simplify templates?
labels: component: codegen
body:
With all the blocks and derivations they're a little bit difficult to understand at the moment. On the other hand, the use of template extensions and macros makes them more generic and derivable, so maybe it should be left as it is, just explained better?
index: 1.0
text_combine:
Simplify templates? - With all the blocks and derivations they're a little bit difficult to understand at the moment. On the other hand, the use of template extensions and macros makes them more generic and derivable, so maybe it should be left as it is, just explained better?
label: non_process
text:
simplify templates with all the blocks and derivations they re a little bit difficult to understand at the moment on the other hand the use of template extensions and macros makes them more generic and derivable so maybe it should be left as it is just explained better
binary_label: 0
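Across every record shown here, `binary_label` pairs 1 with `label` value "process" and 0 with "non_process", suggesting it is a direct binary encoding of `label`. A minimal sketch of that mapping (inferred from these examples only; the dataset does not document it):

```python
def encode(label: str) -> int:
    # Hypothesized rule, re-derived from the preview rows:
    # "process" -> 1, "non_process" -> 0.
    return int(label == "process")

# (label, binary_label) pairs as they appear in the records above.
observed = [("process", 1), ("non_process", 0), ("process", 1),
            ("process", 1), ("non_process", 0), ("process", 1),
            ("non_process", 0)]
print(all(encode(lbl) == b for lbl, b in observed))
# prints: True
```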
---
Unnamed: 0: 355,802
id: 10,585,035,416
type: IssuesEvent
created_at: 2019-10-08 16:36:37
repo: googleapis/google-cloud-dotnet
repo_url: https://api.github.com/repos/googleapis/google-cloud-dotnet
action: closed
title: Synthesis failed for Google.Cloud.WebRisk.V1Beta1
labels: autosynth failure priority: p1 type: bug
body:
Hello! Autosynth couldn't regenerate Google.Cloud.WebRisk.V1Beta1. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-Google.Cloud.WebRisk.V1Beta1' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.WebRisk.V1Beta1/synth.py. Cloning into 'gapic-generator-csharp'... Submodule 'api-common-protos' (https://github.com/googleapis/api-common-protos.git) registered for path 'api-common-protos' Submodule 'protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'protobuf' Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/api-common-protos'... Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf'... Submodule path 'api-common-protos': checked out '4c0a203e3658ae0e56d47c817c2c5904116c0ae0' Submodule path 'protobuf': checked out '815ff7e1fb2d417d5aebcbf5fc46e626b18dc834' Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'protobuf/third_party/benchmark' Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'protobuf/third_party/googletest' Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/third_party/benchmark'... Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/third_party/googletest'... Submodule path 'protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' Submodule path 'protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. 
Restoring packages for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/Google.Protobuf.csproj... Restoring packages for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/Google.Api.Generator.csproj... Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/obj/Google.Protobuf.csproj.nuget.g.props. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/obj/Google.Protobuf.csproj.nuget.g.targets. Restore completed in 386.13 ms for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/Google.Protobuf.csproj. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/obj/Google.Api.Generator.csproj.nuget.g.props. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/obj/Google.Api.Generator.csproj.nuget.g.targets. Restore completed in 473.17 ms for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/Google.Api.Generator.csproj. Google.Protobuf -> /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/bin/Release/netstandard2.0/Google.Protobuf.dll Google.Api.Generator -> /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/bin/Release/netcoreapp2.2/linux-x64/Google.Api.Generator.dll Google.Api.Generator -> /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/bin/Release/netcoreapp2.2/linux-x64/publish/ Cloning into 'gapic-generator'... Cloning into 'googleapis'... 
> Task :buildSrc:compileJava NO-SOURCE > Task :buildSrc:compileGroovy > Task :buildSrc:processResources NO-SOURCE > Task :buildSrc:classes > Task :buildSrc:jar > Task :buildSrc:assemble > Task :buildSrc:compileTestJava NO-SOURCE > Task :buildSrc:compileTestGroovy NO-SOURCE > Task :buildSrc:processTestResources NO-SOURCE > Task :buildSrc:testClasses UP-TO-DATE > Task :buildSrc:test NO-SOURCE > Task :buildSrc:check UP-TO-DATE > Task :buildSrc:build > Task :extractIncludeProto > Task :extractProto > Task :generateProto > Task :compileJava Note: Some input files use or override a deprecated API. Note: Recompile with -Xlint:deprecation for details. > Task :processResources > Task :createProperties > Task :classes > Task :shadowJar BUILD SUCCESSFUL in 13s 7 actionable tasks: 7 executed Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. Restoring packages for /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/Google.Cloud.Tools.VersionCompat.csproj... Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/obj/Google.Cloud.Tools.VersionCompat.csproj.nuget.g.props. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/obj/Google.Cloud.Tools.VersionCompat.csproj.nuget.g.targets. Restore completed in 404.13 ms for /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/Google.Cloud.Tools.VersionCompat.csproj. Google.Cloud.Tools.VersionCompat -> /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/bin/Debug/netcoreapp2.2/Google.Cloud.Tools.VersionCompat.dll Build succeeded. 0 Warning(s) 0 Error(s) Time Elapsed 00:00:01.73 Building existing version of Google.Cloud.WebRisk.V1Beta1 for compatibility checking Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. 
Build succeeded. 0 Warning(s) 0 Error(s) Time Elapsed 00:00:01.46 Generating Google.Cloud.WebRisk.V1Beta1 webrisk.yaml doesn't exist. Please check inputs. synthtool > Failed executing /bin/bash generateapis.sh --check_compatibility Google.Cloud.WebRisk.V1Beta1: None Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 87, in <module> main() File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 79, in main spec.loader.exec_module(synth_module) # type: ignore File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed File "/tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.WebRisk.V1Beta1/synth.py", line 20, in <module> hide_output = False) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run raise exc File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run encoding="utf-8", File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) 
subprocess.CalledProcessError: Command '('/bin/bash', 'generateapis.sh', '--check_compatibility', 'Google.Cloud.WebRisk.V1Beta1')' returned non-zero exit status 1. synthtool > Wrote metadata to synth.metadata. Synthesis failed ``` Google internal developers can see the full log [here](https://sponge/c9993cd1-b40a-4f1b-bbdd-b365bfb554af).
1.0
Synthesis failed for Google.Cloud.WebRisk.V1Beta1 - Hello! Autosynth couldn't regenerate Google.Cloud.WebRisk.V1Beta1. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-Google.Cloud.WebRisk.V1Beta1' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.WebRisk.V1Beta1/synth.py. Cloning into 'gapic-generator-csharp'... Submodule 'api-common-protos' (https://github.com/googleapis/api-common-protos.git) registered for path 'api-common-protos' Submodule 'protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'protobuf' Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/api-common-protos'... Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf'... Submodule path 'api-common-protos': checked out '4c0a203e3658ae0e56d47c817c2c5904116c0ae0' Submodule path 'protobuf': checked out '815ff7e1fb2d417d5aebcbf5fc46e626b18dc834' Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'protobuf/third_party/benchmark' Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'protobuf/third_party/googletest' Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/third_party/benchmark'... Cloning into '/tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/third_party/googletest'... Submodule path 'protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' Submodule path 'protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. 
Restoring packages for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/Google.Protobuf.csproj... Restoring packages for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/Google.Api.Generator.csproj... Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/obj/Google.Protobuf.csproj.nuget.g.props. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/obj/Google.Protobuf.csproj.nuget.g.targets. Restore completed in 386.13 ms for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/Google.Protobuf.csproj. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/obj/Google.Api.Generator.csproj.nuget.g.props. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/obj/Google.Api.Generator.csproj.nuget.g.targets. Restore completed in 473.17 ms for /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/Google.Api.Generator.csproj. Google.Protobuf -> /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/protobuf/csharp/src/Google.Protobuf/bin/Release/netstandard2.0/Google.Protobuf.dll Google.Api.Generator -> /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/bin/Release/netcoreapp2.2/linux-x64/Google.Api.Generator.dll Google.Api.Generator -> /tmpfs/src/git/autosynth/working_repo/gapic-generator-csharp/Google.Api.Generator/bin/Release/netcoreapp2.2/linux-x64/publish/ Cloning into 'gapic-generator'... Cloning into 'googleapis'... 
> Task :buildSrc:compileJava NO-SOURCE > Task :buildSrc:compileGroovy > Task :buildSrc:processResources NO-SOURCE > Task :buildSrc:classes > Task :buildSrc:jar > Task :buildSrc:assemble > Task :buildSrc:compileTestJava NO-SOURCE > Task :buildSrc:compileTestGroovy NO-SOURCE > Task :buildSrc:processTestResources NO-SOURCE > Task :buildSrc:testClasses UP-TO-DATE > Task :buildSrc:test NO-SOURCE > Task :buildSrc:check UP-TO-DATE > Task :buildSrc:build > Task :extractIncludeProto > Task :extractProto > Task :generateProto > Task :compileJava Note: Some input files use or override a deprecated API. Note: Recompile with -Xlint:deprecation for details. > Task :processResources > Task :createProperties > Task :classes > Task :shadowJar BUILD SUCCESSFUL in 13s 7 actionable tasks: 7 executed Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. Restoring packages for /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/Google.Cloud.Tools.VersionCompat.csproj... Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/obj/Google.Cloud.Tools.VersionCompat.csproj.nuget.g.props. Generating MSBuild file /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/obj/Google.Cloud.Tools.VersionCompat.csproj.nuget.g.targets. Restore completed in 404.13 ms for /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/Google.Cloud.Tools.VersionCompat.csproj. Google.Cloud.Tools.VersionCompat -> /tmpfs/src/git/autosynth/working_repo/tools/Google.Cloud.Tools.VersionCompat/bin/Debug/netcoreapp2.2/Google.Cloud.Tools.VersionCompat.dll Build succeeded. 0 Warning(s) 0 Error(s) Time Elapsed 00:00:01.73 Building existing version of Google.Cloud.WebRisk.V1Beta1 for compatibility checking Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. 
Build succeeded. 0 Warning(s) 0 Error(s) Time Elapsed 00:00:01.46 Generating Google.Cloud.WebRisk.V1Beta1 webrisk.yaml doesn't exist. Please check inputs. synthtool > Failed executing /bin/bash generateapis.sh --check_compatibility Google.Cloud.WebRisk.V1Beta1: None Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 87, in <module> main() File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 79, in main spec.loader.exec_module(synth_module) # type: ignore File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed File "/tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.WebRisk.V1Beta1/synth.py", line 20, in <module> hide_output = False) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run raise exc File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run encoding="utf-8", File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) 
subprocess.CalledProcessError: Command '('/bin/bash', 'generateapis.sh', '--check_compatibility', 'Google.Cloud.WebRisk.V1Beta1')' returned non-zero exit status 1. synthtool > Wrote metadata to synth.metadata. Synthesis failed ``` Google internal developers can see the full log [here](https://sponge/c9993cd1-b40a-4f1b-bbdd-b365bfb554af).
non_process
synthesis failed for google cloud webrisk hello autosynth couldn t regenerate google cloud webrisk broken heart here s the output from running synth py cloning into working repo switched to branch autosynth google cloud webrisk running synthtool synthtool executing tmpfs src git autosynth working repo apis google cloud webrisk synth py cloning into gapic generator csharp submodule api common protos registered for path api common protos submodule protobuf registered for path protobuf cloning into tmpfs src git autosynth working repo gapic generator csharp api common protos cloning into tmpfs src git autosynth working repo gapic generator csharp protobuf submodule path api common protos checked out submodule path protobuf checked out submodule third party benchmark registered for path protobuf third party benchmark submodule third party googletest registered for path protobuf third party googletest cloning into tmpfs src git autosynth working repo gapic generator csharp protobuf third party benchmark cloning into tmpfs src git autosynth working repo gapic generator csharp protobuf third party googletest submodule path protobuf third party benchmark checked out submodule path protobuf third party googletest checked out microsoft r build engine version for net core copyright c microsoft corporation all rights reserved restoring packages for tmpfs src git autosynth working repo gapic generator csharp protobuf csharp src google protobuf google protobuf csproj restoring packages for tmpfs src git autosynth working repo gapic generator csharp google api generator google api generator csproj generating msbuild file tmpfs src git autosynth working repo gapic generator csharp protobuf csharp src google protobuf obj google protobuf csproj nuget g props generating msbuild file tmpfs src git autosynth working repo gapic generator csharp protobuf csharp src google protobuf obj google protobuf csproj nuget g targets restore completed in ms for tmpfs src git autosynth working repo 
gapic generator csharp protobuf csharp src google protobuf google protobuf csproj generating msbuild file tmpfs src git autosynth working repo gapic generator csharp google api generator obj google api generator csproj nuget g props generating msbuild file tmpfs src git autosynth working repo gapic generator csharp google api generator obj google api generator csproj nuget g targets restore completed in ms for tmpfs src git autosynth working repo gapic generator csharp google api generator google api generator csproj google protobuf tmpfs src git autosynth working repo gapic generator csharp protobuf csharp src google protobuf bin release google protobuf dll google api generator tmpfs src git autosynth working repo gapic generator csharp google api generator bin release linux google api generator dll google api generator tmpfs src git autosynth working repo gapic generator csharp google api generator bin release linux publish cloning into gapic generator cloning into googleapis task buildsrc compilejava no source task buildsrc compilegroovy task buildsrc processresources no source task buildsrc classes task buildsrc jar task buildsrc assemble task buildsrc compiletestjava no source task buildsrc compiletestgroovy no source task buildsrc processtestresources no source task buildsrc testclasses up to date task buildsrc test no source task buildsrc check up to date task buildsrc build task extractincludeproto task extractproto task generateproto task compilejava note some input files use or override a deprecated api note recompile with xlint deprecation for details task processresources task createproperties task classes task shadowjar build successful in actionable tasks executed microsoft r build engine version for net core copyright c microsoft corporation all rights reserved restoring packages for tmpfs src git autosynth working repo tools google cloud tools versioncompat google cloud tools versioncompat csproj generating msbuild file tmpfs src git autosynth 
working repo tools google cloud tools versioncompat obj google cloud tools versioncompat csproj nuget g props generating msbuild file tmpfs src git autosynth working repo tools google cloud tools versioncompat obj google cloud tools versioncompat csproj nuget g targets restore completed in ms for tmpfs src git autosynth working repo tools google cloud tools versioncompat google cloud tools versioncompat csproj google cloud tools versioncompat tmpfs src git autosynth working repo tools google cloud tools versioncompat bin debug google cloud tools versioncompat dll build succeeded warning s error s time elapsed building existing version of google cloud webrisk for compatibility checking microsoft r build engine version for net core copyright c microsoft corporation all rights reserved build succeeded warning s error s time elapsed generating google cloud webrisk webrisk yaml doesn t exist please check inputs synthtool failed executing bin bash generateapis sh check compatibility google cloud webrisk none traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo apis google cloud 
webrisk synth py line in hide output false file tmpfs src git autosynth env lib site packages synthtool shell py line in run raise exc file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command bin bash generateapis sh check compatibility google cloud webrisk returned non zero exit status synthtool wrote metadata to synth metadata synthesis failed google internal developers can see the full log
0
575
3,041,277,535
IssuesEvent
2015-08-07 20:18:02
mitchellh/packer
https://api.github.com/repos/mitchellh/packer
closed
Variables do not work in compress post-processor
bug interpolation post-processor/compress
This is occurring on Mac OS X, with Packer 0.8.1: My post-processors config section looks like this: ``` "post-processors": [ { "type": "vagrant", "keep_input_artifact": true, "output": "{{.BuildName}}.box", "vagrantfile_template": "vagrantfile-windows_2008_r2.template" }, { "type": "compress", "output": "{{.BuildName}}.zip" } ] ``` The output for the vagrant box is correctly named, ie: `windows_2008_r2_vmware.box` But the zip file that is created is not using the BuildName variable correctly. The output .zip file is: `<no value>.zip`
1.0
Variables do not work in compress post-processor - This is occurring on Mac OS X, with Packer 0.8.1: My post-processors config section looks like this: ``` "post-processors": [ { "type": "vagrant", "keep_input_artifact": true, "output": "{{.BuildName}}.box", "vagrantfile_template": "vagrantfile-windows_2008_r2.template" }, { "type": "compress", "output": "{{.BuildName}}.zip" } ] ``` The output for the vagrant box is correctly named, ie: `windows_2008_r2_vmware.box` But the zip file that is created is not using the BuildName variable correctly. The output .zip file is: `<no value>.zip`
process
variables do not work in compress post processor this is occurring on mac os x with packer my post processors config section looks like this post processors type vagrant keep input artifact true output buildname box vagrantfile template vagrantfile windows template type compress output buildname zip the output for the vagrant box is correctly named ie windows vmware box but the zip file that is created is not using the buildname variable correctly the output zip file is zip
1
322,864
9,829,432,276
IssuesEvent
2019-06-15 20:44:38
marklogic/marklogic-data-hub
https://api.github.com/repos/marklogic/marklogic-data-hub
closed
arrow labels between entity can get hidden behind boxes
Component:QuickStart bug priority:low
It would seem that the Entities view in QuickStart has a problem with the placement of labels on lines connecting entities. When running the tutorial, I created the Order and Product entities, as directed. QuickStart wants to render an arrow from Order to Product. Fine. However, I ended moving the entities around because the Order box was generated under the Product box. The label on the arrow from Order to Product is mostly hidden under the Order box if the arrow points "down and to the right", but OK if it points "down and to the left". See screenshot below. There doesn't seem to be a problem if the arrow points upward, oddly enough. QuickStart/DHF v 2.0.4 via Chrome on Windows. ML running on RHEL 7. ![quickstart_arrow_label](https://user-images.githubusercontent.com/1951367/37376571-088f26d6-26e2-11e8-997d-d8ffe3002a78.jpg)
1.0
arrow labels between entity can get hidden behind boxes - It would seem that the Entities view in QuickStart has a problem with the placement of labels on lines connecting entities. When running the tutorial, I created the Order and Product entities, as directed. QuickStart wants to render an arrow from Order to Product. Fine. However, I ended moving the entities around because the Order box was generated under the Product box. The label on the arrow from Order to Product is mostly hidden under the Order box if the arrow points "down and to the right", but OK if it points "down and to the left". See screenshot below. There doesn't seem to be a problem if the arrow points upward, oddly enough. QuickStart/DHF v 2.0.4 via Chrome on Windows. ML running on RHEL 7. ![quickstart_arrow_label](https://user-images.githubusercontent.com/1951367/37376571-088f26d6-26e2-11e8-997d-d8ffe3002a78.jpg)
non_process
arrow labels between entity can get hidden behind boxes it would seem that the entities view in quickstart has a problem with the placement of labels on lines connecting entities when running the tutorial i created the order and product entities as directed quickstart wants to render an arrow from order to product fine however i ended moving the entities around because the order box was generated under the product box the label on the arrow from order to product is mostly hidden under the order box if the arrow points down and to the right but ok if it points down and to the left see screenshot below there doesn t seem to be a problem if the arrow points upward oddly enough quickstart dhf v via chrome on windows ml running on rhel
0
18,777
24,678,890,034
IssuesEvent
2022-10-18 19:25:58
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
opened
Investigate `ascii2nc_airnow_hourly` test in unit_ascii2nc.xml
type: bug alert: NEED ACCOUNT KEY requestor: METplus Team MET: PreProcessing Tools (Point) priority: high
## Describe the Problem ## During review of #2294 for issue #2276, a problem was discovered in the output of the `ascii2nc_airnow_hourly` test in unit_ascii2nc.xml. The output file created by this test (HourlyData_20220312.nc) contains values of Infinity (`Inf`). While the GHA run for that PR did increase the occurrence of Inf in the output, the problem existed prior to those code changes. This issue is to investigate the source of the `Inf` values appearing in the output, and fix the code to avoid them. ### Expected Behavior ### The output of ascii2nc should never contain a value of infinity. The code should be enhanced by adding more error checking to avoid them. Perhaps, they should be reported as bad data value (i.e. -9999) rather than `Inf`? ### Environment ### Describe your runtime environment: *1. Visible in the output of GHA and in the output of the MET nightly build on seneca.* ### To Reproduce ### Describe the steps to reproduce the behavior: *1. Log on to 'seneca'* *2. Go to NB area:* ``` cd /d1/projects/MET/MET_regression/develop/NB20221018 ``` *3. Dump to ascii: ``` Rscript MET-develop/scripts/Rscripts/pntnc2ascii.R MET-develop/test_output/ascii2nc/airnow/HourlyData_20220312.nc > HourlyData_20220312.txt ``` *4. 
See error in columns 6 and 9 of the output:* ``` grep Inf HourlyData_20220312.txt | wc -l 33 ``` *Post relevant sample data following these instructions:* *https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** - [ ] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Organization** level **Project** for support of the current coordinated release - [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. 
- [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
1.0
Investigate `ascii2nc_airnow_hourly` test in unit_ascii2nc.xml - ## Describe the Problem ## During review of #2294 for issue #2276, a problem was discovered in the output of the `ascii2nc_airnow_hourly` test in unit_ascii2nc.xml. The output file created by this test (HourlyData_20220312.nc) contains values of Infinity (`Inf`). While the GHA run for that PR did increase the occurrence of Inf in the output, the problem existed prior to those code changes. This issue is to investigate the source of the `Inf` values appearing in the output, and fix the code to avoid them. ### Expected Behavior ### The output of ascii2nc should never contain a value of infinity. The code should be enhanced by adding more error checking to avoid them. Perhaps, they should be reported as bad data value (i.e. -9999) rather than `Inf`? ### Environment ### Describe your runtime environment: *1. Visible in the output of GHA and in the output of the MET nightly build on seneca.* ### To Reproduce ### Describe the steps to reproduce the behavior: *1. Log on to 'seneca'* *2. Go to NB area:* ``` cd /d1/projects/MET/MET_regression/develop/NB20221018 ``` *3. Dump to ascii: ``` Rscript MET-develop/scripts/Rscripts/pntnc2ascii.R MET-develop/test_output/ascii2nc/airnow/HourlyData_20220312.nc > HourlyData_20220312.txt ``` *4. 
See error in columns 6 and 9 of the output:* ``` grep Inf HourlyData_20220312.txt | wc -l 33 ``` *Post relevant sample data following these instructions:* *https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** - [ ] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Organization** level **Project** for support of the current coordinated release - [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. 
- [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
process
investigate airnow hourly test in unit xml describe the problem during review of for issue a problem was discovered in the output of the airnow hourly test in unit xml the output file created by this test hourlydata nc contains values of infinity inf while the gha run for that pr did increase the occurrence of inf in the output the problem existed prior to those code changes this issue is to investigate the source of the inf values appearing in the output and fix the code to avoid them expected behavior the output of should never contain a value of infinity the code should be enhanced by adding more error checking to avoid them perhaps they should be reported as bad data value i e rather than inf environment describe your runtime environment visible in the output of gha and in the output of the met nightly build on seneca to reproduce describe the steps to reproduce the behavior log on to seneca go to nb area cd projects met met regression develop dump to ascii rscript met develop scripts rscripts r met develop test output airnow hourlydata nc hourlydata txt see error in columns and of the output grep inf hourlydata txt wc l post relevant sample data following these instructions relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork 
this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and linked issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version close this issue
1
1,082
3,546,949,667
IssuesEvent
2016-01-20 06:44:45
deb-sandeep/JoveNotesWebApp
https://api.github.com/repos/deb-sandeep/JoveNotesWebApp
opened
Introduction of @exercise type
enhancement jove_notes_db jove_notes_grammar jove_notes_processor jove_notes_server jove_notes_ui
## Support for exercise questions ### Background In its current form JoveNotes is primarily focused towards the bulk of study for K1-8 classes - retention and recollection (RnR). However as we move towards classes XI and beyond, there is a marked shift in focus towards applicability and problem solving. This enhancement is an endeavor to stretch the envelop of JoveNotes to foray into the arena of applicability. The current vision is to incorporate the following aspects of applicability class of problems: 1. Capture / Digitization - of exercise questions 2. Presentment 3. Operational tracking 4. Collecting data points 5. Providing insights by analyzing data points 6. Scoring of points - extension to current gamification ### In which way are exercise different than the current note element types? Exercise problems (for example, essays, numericals, problem solving etc) are characteristically different from existing note element types, fundamentally because it addresses a different segment of the education topology - applicability, while the existing note elements address the retention and recollection part. Salient points which differentiates exercises from other notes elements are: * **Presentment** - Exercises should not be mixed with RnR presentment, especially during the flash card sessions for the following reasons: * RnR flash cards sessions are rapid fire sessions with an average turn around time per question around 15-20 seconds. Numericals will not fit into the scheme - gear shift jerks. * RnR flash cards are designed to funnel focus on the screen - Numericals will have to be solved manually, causing a loss of focus on RnR streak. * Numericals require a cluster attempt behavior - read the questions paper, prioritize, review, rework, mark and then submit. This is contrary to the atomic way RnR questions are presented. * **Operational tracking** - * For one, numericals don't require the five step spaced sequenced repetition like those of RnR. 
* The time span between re-presentment is different as compared to RnR * Lot more data needs to be collected at session level - pre-read time, work, review, rework etc ## Modules impacted 1. JoveNotes grammar * com.sandy.xtext.jovenotes * com.sandy.xtext.jovenotes.ui * com.sandy.xtext.jovenotes.tests 2. JoveNotes processor 3. Database 4. JoveNotesWebApp * Dashboard * New numericals section 5. JoveNotesMaker ## Feature branch `feature/@exercise`
1.0
Introduction of @exercise type - ## Support for exercise questions ### Background In its current form JoveNotes is primarily focused towards the bulk of study for K1-8 classes - retention and recollection (RnR). However as we move towards classes XI and beyond, there is a marked shift in focus towards applicability and problem solving. This enhancement is an endeavor to stretch the envelop of JoveNotes to foray into the arena of applicability. The current vision is to incorporate the following aspects of applicability class of problems: 1. Capture / Digitization - of exercise questions 2. Presentment 3. Operational tracking 4. Collecting data points 5. Providing insights by analyzing data points 6. Scoring of points - extension to current gamification ### In which way are exercise different than the current note element types? Exercise problems (for example, essays, numericals, problem solving etc) are characteristically different from existing note element types, fundamentally because it addresses a different segment of the education topology - applicability, while the existing note elements address the retention and recollection part. Salient points which differentiates exercises from other notes elements are: * **Presentment** - Exercises should not be mixed with RnR presentment, especially during the flash card sessions for the following reasons: * RnR flash cards sessions are rapid fire sessions with an average turn around time per question around 15-20 seconds. Numericals will not fit into the scheme - gear shift jerks. * RnR flash cards are designed to funnel focus on the screen - Numericals will have to be solved manually, causing a loss of focus on RnR streak. * Numericals require a cluster attempt behavior - read the questions paper, prioritize, review, rework, mark and then submit. This is contrary to the atomic way RnR questions are presented. 
* **Operational tracking** - * For one, numericals don't require the five step spaced sequenced repetition like those of RnR. * The time span between re-presentment is different as compared to RnR * Lot more data needs to be collected at session level - pre-read time, work, review, rework etc ## Modules impacted 1. JoveNotes grammar * com.sandy.xtext.jovenotes * com.sandy.xtext.jovenotes.ui * com.sandy.xtext.jovenotes.tests 2. JoveNotes processor 3. Database 4. JoveNotesWebApp * Dashboard * New numericals section 5. JoveNotesMaker ## Feature branch `feature/@exercise`
process
introduction of exercise type support for exercise questions background in its current form jovenotes is primarily focused towards the bulk of study for classes retention and recollection rnr however as we move towards classes xi and beyond there is a marked shift in focus towards applicability and problem solving this enhancement is an endeavor to stretch the envelop of jovenotes to foray into the arena of applicability the current vision is to incorporate the following aspects of applicability class of problems capture digitization of exercise questions presentment operational tracking collecting data points providing insights by analyzing data points scoring of points extension to current gamification in which way are exercise different than the current note element types exercise problems for example essays numericals problem solving etc are characteristically different from existing note element types fundamentally because it addresses a different segment of the education topology applicability while the existing note elements address the retention and recollection part salient points which differentiates exercises from other notes elements are presentment exercises should not be mixed with rnr presentment especially during the flash card sessions for the following reasons rnr flash cards sessions are rapid fire sessions with an average turn around time per question around seconds numericals will not fit into the scheme gear shift jerks rnr flash cards are designed to funnel focus on the screen numericals will have to be solved manually causing a loss of focus on rnr streak numericals require a cluster attempt behavior read the questions paper prioritize review rework mark and then submit this is contrary to the atomic way rnr questions are presented operational tracking for one numericals don t require the five step spaced sequenced repetition like those of rnr the time span between re presentment is different as compared to rnr lot more data needs to be 
collected at session level pre read time work review rework etc modules impacted jovenotes grammar com sandy xtext jovenotes com sandy xtext jovenotes ui com sandy xtext jovenotes tests jovenotes processor database jovenoteswebapp dashboard new numericals section jovenotesmaker feature branch feature exercise
1
146,476
5,622,385,433
IssuesEvent
2017-04-04 12:42:38
projectcalico/felix
https://api.github.com/repos/projectcalico/felix
closed
Spurious warning during resync period
priority/P3
Depending on the order that updates arrive from the Syncer during a resync, we may log this warning for a profile that does exist but hasn't arrived yet: ``` Mar 13 16:46:11 smc-felix-scale-test calico-felix[28936]: WARNING active_rules_calculator.go 260: Profile not known or invalid, generating dummy profile that drops all traffic. profileID="prof-093 ``` We should defer such warnings until the end of the resync period.
1.0
Spurious warning during resync period - Depending on the order that updates arrive from the Syncer during a resync, we may log this warning for a profile that does exist but hasn't arrived yet: ``` Mar 13 16:46:11 smc-felix-scale-test calico-felix[28936]: WARNING active_rules_calculator.go 260: Profile not known or invalid, generating dummy profile that drops all traffic. profileID="prof-093 ``` We should defer such warnings until the end of the resync period.
non_process
spurious warning during resync period depending on the order that updates arrive from the syncer during a resync we may log this warning for a profile that does exist but hasn t arrived yet mar smc felix scale test calico felix warning active rules calculator go profile not known or invalid generating dummy profile that drops all traffic profileid prof we should defer such warnings until the end of the resync period
0
376,831
26,219,444,878
IssuesEvent
2023-01-04 13:45:55
Textualize/textual
https://api.github.com/repos/Textualize/textual
closed
Docs: Improve mkdocs Checkbox Styling
documentation
The current styling of mkdocs material theme uses a gray circle with a white check inside of it for an unchecked checkbox, this is confused with the checked checkbox that has a styling of a green circle with a white check inside of it. Example: ![Screenshot from 2022-12-18 16-47-23](https://user-images.githubusercontent.com/1612303/208304827-ef9fcb53-9264-4ea0-952e-e542d4afe466.png) According to the following discussion (https://github.com/Textualize/textual/discussions/1021#discussioncomment-4440632) it would be nice to implement a styling based on the following example (https://github.com/WMRamadan/mkdocs-styling) to better distinguish between an unchecked box and a checked box within the textual docs. Before & After Example: ![mkdocs_task_list_styling_example](https://user-images.githubusercontent.com/1612303/208304931-da9a92c3-a097-47ef-9b11-ee37a848afcd.jpg)
1.0
Docs: Improve mkdocs Checkbox Styling - The current styling of mkdocs material theme uses a gray circle with a white check inside of it for an unchecked checkbox, this is confused with the checked checkbox that has a styling of a green circle with a white check inside of it. Example: ![Screenshot from 2022-12-18 16-47-23](https://user-images.githubusercontent.com/1612303/208304827-ef9fcb53-9264-4ea0-952e-e542d4afe466.png) According to the following discussion (https://github.com/Textualize/textual/discussions/1021#discussioncomment-4440632) it would be nice to implement a styling based on the following example (https://github.com/WMRamadan/mkdocs-styling) to better distinguish between an unchecked box and a checked box within the textual docs. Before & After Example: ![mkdocs_task_list_styling_example](https://user-images.githubusercontent.com/1612303/208304931-da9a92c3-a097-47ef-9b11-ee37a848afcd.jpg)
non_process
docs improve mkdocs checkbox styling the current styling of mkdocs material theme uses a gray circle with a white check inside of it for an unchecked checkbox this is confused with the checked checkbox that has a styling of a green circle with a white check inside of it example according to the following discussion it would be nice to implement a styling based on the following example to better distinguish between an unchecked box and a checked box within the textual docs before after example
0
8,568
11,738,737,635
IssuesEvent
2020-03-11 16:32:22
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Helm Chart
P2 enhancement process
**Problem** We need to accelerate our deployment process and moving to Kubernetes would help us with that goal. We could do all of the mirror node migration at [once](https://github.com/hashgraph/hedera-mirror-node/issues/346) or we could do a single component at a time to get our feet wet. Since `hedera-mirror-grpc` is a new project for HCS that's not in production, it is the best candidate to convert. **Solution** - Push docker images on commit to master (https://github.com/hashgraph/hedera-mirror-node/issues/335) - Create a v2 Helm child chart for `hedera-mirror-grpc` at `charts/hedera-mirror-grpc` - Create a v2 (Helm 3) wrapper chart `hedera-mirror-node` in top level `charts/` with `hedera-mirror-grpc` chart in its list of requirements - Test locally with minikube/k3d/kind/etc - Work with Ops to get a single worker GKE cluster setup - Test chart in that cluster **Alternatives** https://github.com/hashgraph/hedera-mirror-node/issues/346 **Additional Context**
1.0
Helm Chart - **Problem** We need to accelerate our deployment process and moving to Kubernetes would help us with that goal. We could do all of the mirror node migration at [once](https://github.com/hashgraph/hedera-mirror-node/issues/346) or we could do a single component at a time to get our feet wet. Since `hedera-mirror-grpc` is a new project for HCS that's not in production, it is the best candidate to convert. **Solution** - Push docker images on commit to master (https://github.com/hashgraph/hedera-mirror-node/issues/335) - Create a v2 Helm child chart for `hedera-mirror-grpc` at `charts/hedera-mirror-grpc` - Create a v2 (Helm 3) wrapper chart `hedera-mirror-node` in top level `charts/` with `hedera-mirror-grpc` chart in its list of requirements - Test locally with minikube/k3d/kind/etc - Work with Ops to get a single worker GKE cluster setup - Test chart in that cluster **Alternatives** https://github.com/hashgraph/hedera-mirror-node/issues/346 **Additional Context**
process
helm chart problem we need to accelerate our deployment process and moving to kubernetes would help us with that goal we could do all of the mirror node migration at or we could do a single component at a time to get our feet wet since hedera mirror grpc is a new project for hcs that s not in production it is the best candidate to convert solution push docker images on commit to master create a helm child chart for hedera mirror grpc at charts hedera mirror grpc create a helm wrapper chart hedera mirror node in top level charts with hedera mirror grpc chart in its list of requirements test locally with minikube kind etc work with ops to get a single worker gke cluster setup test chart in that cluster alternatives additional context
1
19,364
25,493,329,990
IssuesEvent
2022-11-27 11:12:24
altillimity/SatDump
https://api.github.com/repos/altillimity/SatDump
closed
Metop demod failure with message "expected >"
bug Processing
**Description of the issue** When decoding a 4 MHz baseband of a Metop B pass, SatDump will fail with a cryptic error message "expected >". No other message is shown and there appears to be nothing useful in the log either. The only thing that gets produced is a single admin message and a cadu file (that does open in LeanHRPT and shows a huge missing chunk in the AVHRR and other instruments) . The signal of the pass was strong with a snr of about 15 dB, the only unusual thing is that at a certain point the BER goes up to about 0.22, then lowers (Reed Solomon all red, but it doesn't desync). **Hardware (SDR/PC/OS)** Thinkpad X260 (Intel i5-6200u, Intel HD Graphics 520); PlutoSDR Pop!_OS 22.04 **Version (Eg, 1.0.0, CI Build #171)** 1.0.1 built yesterday at 23.30 **Screenshots** Cryptic error message ![immagine](https://user-images.githubusercontent.com/12469744/204123630-d1fed002-0f08-461c-ae3f-c99de86c5a75.png) The moment when Reed Solomon goes all red ![immagine](https://user-images.githubusercontent.com/12469744/204123742-2c2086d6-546f-49ed-a0da-44f409d6b115.png) **CADUs and other data useful for debugging** The log [satdump.txt](https://github.com/altillimity/SatDump/files/10097723/satdump.txt) The baseband (WARNING large file ~9GB, maybe use wget) [baseband](http://proxima.a-centauri.com/stuff/baseband_1701300000Hz_21-43-04_26-11-2022.wav) The output produced [output](http://proxima.a-centauri.com/stuff/Metop_fail.zip) Thank you again and sorry for finding yet another bug :smile: :heart:
1.0
Metop demod failure with message "expected >" - **Description of the issue** When decoding a 4 MHz baseband of a Metop B pass, SatDump will fail with a cryptic error message "expected >". No other message is shown and there appears to be nothing useful in the log either. The only thing that gets produced is a single admin message and a cadu file (that does open in LeanHRPT and shows a huge missing chunk in the AVHRR and other instruments) . The signal of the pass was strong with a snr of about 15 dB, the only unusual thing is that at a certain point the BER goes up to about 0.22, then lowers (Reed Solomon all red, but it doesn't desync). **Hardware (SDR/PC/OS)** Thinkpad X260 (Intel i5-6200u, Intel HD Graphics 520); PlutoSDR Pop!_OS 22.04 **Version (Eg, 1.0.0, CI Build #171)** 1.0.1 built yesterday at 23.30 **Screenshots** Cryptic error message ![immagine](https://user-images.githubusercontent.com/12469744/204123630-d1fed002-0f08-461c-ae3f-c99de86c5a75.png) The moment when Reed Solomon goes all red ![immagine](https://user-images.githubusercontent.com/12469744/204123742-2c2086d6-546f-49ed-a0da-44f409d6b115.png) **CADUs and other data useful for debugging** The log [satdump.txt](https://github.com/altillimity/SatDump/files/10097723/satdump.txt) The baseband (WARNING large file ~9GB, maybe use wget) [baseband](http://proxima.a-centauri.com/stuff/baseband_1701300000Hz_21-43-04_26-11-2022.wav) The output produced [output](http://proxima.a-centauri.com/stuff/Metop_fail.zip) Thank you again and sorry for finding yet another bug :smile: :heart:
process
metop demod failure with message expected description of the issue when decoding a mhz baseband of a metop b pass satdump will fail with a cryptic error message expected no other message is shown and there appears to be nothing useful in the log either the only thing that gets produced is a single admin message and a cadu file that does open in leanhrpt and shows a huge missing chunk in the avhrr and other instruments the signal of the pass was strong with a snr of about db the only unusual thing is that at a certain point the ber goes up to about then lowers reed solomon all red but it doesn t desync hardware sdr pc os thinkpad intel intel hd graphics plutosdr pop os version eg ci build built yesterday at screenshots cryptic error message the moment when reed solomon goes all red cadus and other data useful for debugging the log the baseband warning large file maybe use wget the output produced thank you again and sorry for finding yet another bug smile heart
1
167,723
26,539,992,425
IssuesEvent
2023-01-19 18:26:19
NASA-AMMOS/aerie-ui
https://api.github.com/repos/NASA-AMMOS/aerie-ui
opened
Design how to display event relative planning relationships in the side panel
design
There isn't an approach for showing relative links in the side panel. We could consider doing something similar to decomposition, or take a different approach if more appropriate. We should take into consideration how someone gets to this view, and how they navigate between the activity instance details and this information. <img width="1098" alt="image" src="https://user-images.githubusercontent.com/6529667/213528986-34b268cb-f360-42a5-8a95-4b433094e9d0.png"> This is a spinoff from: https://github.com/NASA-AMMOS/aerie-ui/issues/255
1.0
Design how to display event relative planning relationships in the side panel - There isn't an approach for showing relative links in the side panel. We could consider doing something similar to decomposition, or take a different approach if more appropriate. We should take into consideration how someone gets to this view, and how they navigate between the activity instance details and this information. <img width="1098" alt="image" src="https://user-images.githubusercontent.com/6529667/213528986-34b268cb-f360-42a5-8a95-4b433094e9d0.png"> This is a spinoff from: https://github.com/NASA-AMMOS/aerie-ui/issues/255
non_process
design how to display event relative planning relationships in the side panel there isn t an approach for showing relative links in the side panel we could consider doing something similar to decomposition or take a different approach if more appropriate we should take into consideration how someone gets to this view and how they navigate between the activity instance details and this information img width alt image src this is a spinoff from
0
12,864
15,254,278,524
IssuesEvent
2021-02-20 11:19:19
arunkumar9t2/scabbard
https://api.github.com/repos/arunkumar9t2/scabbard
closed
build problem when plugin is enabled (dagger.releasablereferences @CanReleaseReferences)
module:processor
I have this problem when the plugin is enabled. > A failure occurred while executing org.jetbrains.kotlin.gradle.internal.KaptExecution > java.lang.reflect.InvocationTargetException (no error message) and when i build with --stacktrace . this is the details. > Caused by: java.lang.NoClassDefFoundError: dagger/releasablereferences/ForReleasableReferences at dagger.internal.codegen.ForReleasableReferencesValidator.annotations(ForReleasableReferencesValidator.java:69) at dagger.shaded.auto.common.BasicAnnotationProcessor.getSupportedAnnotationClasses(BasicAnnotationProcessor.java:147) at dagger.shaded.auto.common.BasicAnnotationProcessor.getSupportedAnnotationTypes(BasicAnnotationProcessor.java:159) at dagger.shaded.auto.common.BasicAnnotationProcessor.getSupportedAnnotationTypes(BasicAnnotationProcessor.java:103) at org.jetbrains.kotlin.kapt3.base.incremental.IncrementalProcessor.getSupportedAnnotationTypes(incrementalProcessors.kt) at org.jetbrains.kotlin.kapt3.base.ProcessorWrapper.getSupportedAnnotationTypes(annotationProcessing.kt:187) at com.sun.tools.javac.processing.JavacProcessingEnvironment$ProcessorState.<init>(JavacProcessingEnvironment.java:513) at com.sun.tools.javac.processing.JavacProcessingEnvironment$DiscoveredProcessors$ProcessorStateIterator.next(JavacProcessingEnvironment.java:605) at com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:698) at com.sun.tools.javac.processing.JavacProcessingEnvironment.access$1800(JavacProcessingEnvironment.java:91) at com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1043) at com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1184) at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1170) at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1068) at 
org.jetbrains.kotlin.kapt3.base.AnnotationProcessingKt.doAnnotationProcessing(annotationProcessing.kt:78) ... 27 more Caused by: java.lang.ClassNotFoundException: dagger.releasablereferences.ForReleasableReferences ... 42 more i think it is related to [this issue](https://github.com/google/dagger/issues/1117).
1.0
build problem when plugin is enabled (dagger.releasablereferences @CanReleaseReferences) - I have this problem when the plugin is enabled. > A failure occurred while executing org.jetbrains.kotlin.gradle.internal.KaptExecution > java.lang.reflect.InvocationTargetException (no error message) and when i build with --stacktrace . this is the details. > Caused by: java.lang.NoClassDefFoundError: dagger/releasablereferences/ForReleasableReferences at dagger.internal.codegen.ForReleasableReferencesValidator.annotations(ForReleasableReferencesValidator.java:69) at dagger.shaded.auto.common.BasicAnnotationProcessor.getSupportedAnnotationClasses(BasicAnnotationProcessor.java:147) at dagger.shaded.auto.common.BasicAnnotationProcessor.getSupportedAnnotationTypes(BasicAnnotationProcessor.java:159) at dagger.shaded.auto.common.BasicAnnotationProcessor.getSupportedAnnotationTypes(BasicAnnotationProcessor.java:103) at org.jetbrains.kotlin.kapt3.base.incremental.IncrementalProcessor.getSupportedAnnotationTypes(incrementalProcessors.kt) at org.jetbrains.kotlin.kapt3.base.ProcessorWrapper.getSupportedAnnotationTypes(annotationProcessing.kt:187) at com.sun.tools.javac.processing.JavacProcessingEnvironment$ProcessorState.<init>(JavacProcessingEnvironment.java:513) at com.sun.tools.javac.processing.JavacProcessingEnvironment$DiscoveredProcessors$ProcessorStateIterator.next(JavacProcessingEnvironment.java:605) at com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:698) at com.sun.tools.javac.processing.JavacProcessingEnvironment.access$1800(JavacProcessingEnvironment.java:91) at com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1043) at com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1184) at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1170) at 
com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1068) at org.jetbrains.kotlin.kapt3.base.AnnotationProcessingKt.doAnnotationProcessing(annotationProcessing.kt:78) ... 27 more Caused by: java.lang.ClassNotFoundException: dagger.releasablereferences.ForReleasableReferences ... 42 more i think it is related to [this issue](https://github.com/google/dagger/issues/1117).
process
build problem when plugin is enabled dagger releasablereferences canreleasereferences i have this problem when the plugin is enabled a failure occurred while executing org jetbrains kotlin gradle internal kaptexecution java lang reflect invocationtargetexception no error message and when i build with stacktrace this is the details caused by java lang noclassdeffounderror dagger releasablereferences forreleasablereferences at dagger internal codegen forreleasablereferencesvalidator annotations forreleasablereferencesvalidator java at dagger shaded auto common basicannotationprocessor getsupportedannotationclasses basicannotationprocessor java at dagger shaded auto common basicannotationprocessor getsupportedannotationtypes basicannotationprocessor java at dagger shaded auto common basicannotationprocessor getsupportedannotationtypes basicannotationprocessor java at org jetbrains kotlin base incremental incrementalprocessor getsupportedannotationtypes incrementalprocessors kt at org jetbrains kotlin base processorwrapper getsupportedannotationtypes annotationprocessing kt at com sun tools javac processing javacprocessingenvironment processorstate javacprocessingenvironment java at com sun tools javac processing javacprocessingenvironment discoveredprocessors processorstateiterator next javacprocessingenvironment java at com sun tools javac processing javacprocessingenvironment discoverandrunprocs javacprocessingenvironment java at com sun tools javac processing javacprocessingenvironment access javacprocessingenvironment java at com sun tools javac processing javacprocessingenvironment round run javacprocessingenvironment java at com sun tools javac processing javacprocessingenvironment doprocessing javacprocessingenvironment java at com sun tools javac main javacompiler processannotations javacompiler java at com sun tools javac main javacompiler processannotations javacompiler java at org jetbrains kotlin base annotationprocessingkt doannotationprocessing 
annotationprocessing kt more caused by java lang classnotfoundexception dagger releasablereferences forreleasablereferences more i think it is related to
1
56,311
8,066,542,918
IssuesEvent
2018-08-04 17:05:40
chartjs/Chart.js
https://api.github.com/repos/chartjs/Chart.js
closed
Flex mode in bar chart not documented
type: documentation
@nagix I was trying to review your PR https://github.com/chartjs/Chart.js/pull/4658. Sorry for the delay! I was having a lot of difficulty because I'm not too familiar with the bar code and it's been a long time since I've looked at it. I noticed some missing documentation. I'd like to add it first to make sure I understand what your code is supposed to be doing https://www.chartjs.org/docs/latest/charts/bar.html is missing documentation for `flex` as added in https://github.com/chartjs/Chart.js/pull/4994
1.0
Flex mode in bar chart not documented - @nagix I was trying to review your PR https://github.com/chartjs/Chart.js/pull/4658. Sorry for the delay! I was having a lot of difficulty because I'm not too familiar with the bar code and it's been a long time since I've looked at it. I noticed some missing documentation. I'd like to add it first to make sure I understand what your code is supposed to be doing https://www.chartjs.org/docs/latest/charts/bar.html is missing documentation for `flex` as added in https://github.com/chartjs/Chart.js/pull/4994
non_process
flex mode in bar chart not documented nagix i was trying to review your pr sorry for the delay i was having a lot of difficulty because i m not too familiar with the bar code and it s been a long time since i ve looked at it i noticed some missing documentation i d like to add it first to make sure i understand what your code is supposed to be doing is missing documentation for flex as added in
0
284,360
8,737,623,887
IssuesEvent
2018-12-11 23:16:04
linux-audit/audit-kernel
https://api.github.com/repos/linux-audit/audit-kernel
closed
RFE: group all audit task parameters together
enhancement priority/low
Move all audit-related task parameters out of struct task_struct into a dedicated structure allocated at task creation. At the moment this includes loginuid, sessionid and audit_context.
1.0
RFE: group all audit task parameters together - Move all audit-related task parameters out of struct task_struct into a dedicated structure allocated at task creation. At the moment this includes loginuid, sessionid and audit_context.
non_process
rfe group all audit task parameters together move all audit related task parameters out of struct task struct into a dedicated structure allocated at task creation at the moment this includes loginuid sessionid and audit context
0
10,405
13,204,215,960
IssuesEvent
2020-08-14 15:30:09
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Document describes a feature which is not finished
Pri2 devops-cicd-process/tech devops/prod doc-bug
Just found out the feature is not actually finished: https://developercommunity.visualstudio.com/content/problem/954120/pipeline-trigger-unexpected-value-trigger.html Thanks for waisting my time. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 86285f72-9e28-da97-59bb-c29eb60f627d * Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54 * Content: [Configure pipeline triggers - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/pipeline-triggers.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @ashkir * Microsoft Alias: **ashkir**
1.0
Document describes a feature which is not finished - Just found out the feature is not actually finished: https://developercommunity.visualstudio.com/content/problem/954120/pipeline-trigger-unexpected-value-trigger.html Thanks for waisting my time. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 86285f72-9e28-da97-59bb-c29eb60f627d * Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54 * Content: [Configure pipeline triggers - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/pipeline-triggers.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @ashkir * Microsoft Alias: **ashkir**
process
document describes a feature which is not finished just found out the feature is not actually finished thanks for waisting my time document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login ashkir microsoft alias ashkir
1
4,772
7,642,062,480
IssuesEvent
2018-05-08 07:58:44
Bw2801/environment
https://api.github.com/repos/Bw2801/environment
opened
Sleep in while loop
enhancement processor
Add a `Thread.sleep` statement in the while loop which keeps the processor running to lower the CPU consumption.
1.0
Sleep in while loop - Add a `Thread.sleep` statement in the while loop which keeps the processor running to lower the CPU consumption.
process
sleep in while loop add a thread sleep statement in the while loop which keeps the processor running to lower the cpu consumption
1
19,662
26,024,539,202
IssuesEvent
2022-12-21 15:17:53
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Retain the old automated release workflow
bug process
### Problem We need to retain the old automated release workflow which uses maven build system for a while until we are confident that no more patch releases are needed for <= 0.70 ### Solution add the old automated release github wokflow back ### Alternatives _No response_
1.0
Retain the old automated release workflow - ### Problem We need to retain the old automated release workflow which uses maven build system for a while until we are confident that no more patch releases are needed for <= 0.70 ### Solution add the old automated release github wokflow back ### Alternatives _No response_
process
retain the old automated release workflow problem we need to retain the old automated release workflow which uses maven build system for a while until we are confident that no more patch releases are needed for solution add the old automated release github wokflow back alternatives no response
1
337,228
24,531,665,361
IssuesEvent
2022-10-11 16:57:30
elastic/apm-aws-lambda
https://api.github.com/repos/elastic/apm-aws-lambda
closed
Create dedicated book and make docs Elastic Stack version independent
documentation Team:Docs aws-λ-extension
Similar to how docs work for apm agents, let's introduce a separate lambda extension docs books. This is important to * better document features introduced in specific versions * remove the stack version branches from the repository and the requirement to backport docs changes * have better support for extension version aligned release notes * have better support for extension | apm-server compatibility matrix
1.0
Create dedicated book and make docs Elastic Stack version independent - Similar to how docs work for apm agents, let's introduce a separate lambda extension docs books. This is important to * better document features introduced in specific versions * remove the stack version branches from the repository and the requirement to backport docs changes * have better support for extension version aligned release notes * have better support for extension | apm-server compatibility matrix
non_process
create dedicated book and make docs elastic stack version independent similar to how docs work for apm agents let s introduce a separate lambda extension docs books this is important to better document features introduced in specific versions remove the stack version branches from the repository and the requirement to backport docs changes have better support for extension version aligned release notes have better support for extension apm server compatibility matrix
0
8,598
11,759,008,525
IssuesEvent
2020-03-13 16:26:24
nltk/nltk
https://api.github.com/repos/nltk/nltk
closed
Weird conflict with nltk and google storage api under ProcessPool
inactive multithread / multiprocessing
Hello, I experienced some weird error when using `nltk` and `google.cloud.storage` under `ProcessPool`. After `import nltk`, I tried to extract data from google storage from the main process, it works fine. See `first_main_then_pool()` below. However, when I extract data from a process pool executor directly after `import nltk`, See `only_pool()`, I get the following error: ``` Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/Users/yipjustin/tmp/a.py", line 29, in <module> only_pool() File "/Users/yipjustin/tmp/a.py", line 25, in only_pool print(executor.submit(get, 'b.txt').result()) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/concurrent/futures/_base.py", line 405, in result return self.__get_result() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result raise self._exception concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. ``` I am curious to see if anyone has experienced similar issue. Thanks! Code: ``` import concurrent.futures from google.cloud import storage def get(path): bucket_name = 'yipjustin-a' client = storage.Client() bucket = client.lookup_bucket(bucket_name) blob = bucket.get_blob(path) return blob.download_as_string() def first_main_then_pool(): # GOOD import nltk print(get('a.txt')) with concurrent.futures.ProcessPoolExecutor() as executor: print(executor.submit(get, 'b.txt').result()) def only_pool(): # BAD import nltk with concurrent.futures.ProcessPoolExecutor() as executor: print(executor.submit(get, 'b.txt').result()) ```
1.0
Weird conflict with nltk and google storage api under ProcessPool - Hello, I experienced some weird error when using `nltk` and `google.cloud.storage` under `ProcessPool`. After `import nltk`, I tried to extract data from google storage from the main process, it works fine. See `first_main_then_pool()` below. However, when I extract data from a process pool executor directly after `import nltk`, See `only_pool()`, I get the following error: ``` Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/Users/yipjustin/tmp/a.py", line 29, in <module> only_pool() File "/Users/yipjustin/tmp/a.py", line 25, in only_pool print(executor.submit(get, 'b.txt').result()) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/concurrent/futures/_base.py", line 405, in result return self.__get_result() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result raise self._exception concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. ``` I am curious to see if anyone has experienced similar issue. Thanks! Code: ``` import concurrent.futures from google.cloud import storage def get(path): bucket_name = 'yipjustin-a' client = storage.Client() bucket = client.lookup_bucket(bucket_name) blob = bucket.get_blob(path) return blob.download_as_string() def first_main_then_pool(): # GOOD import nltk print(get('a.txt')) with concurrent.futures.ProcessPoolExecutor() as executor: print(executor.submit(get, 'b.txt').result()) def only_pool(): # BAD import nltk with concurrent.futures.ProcessPoolExecutor() as executor: print(executor.submit(get, 'b.txt').result()) ```
process
weird conflict with nltk and google storage api under processpool hello i experienced some weird error when using nltk and google cloud storage under processpool after import nltk i tried to extract data from google storage from the main process it works fine see first main then pool below however when i extract data from a process pool executor directly after import nltk see only pool i get the following error traceback most recent call last file library frameworks python framework versions lib runpy py line in run module as main main mod spec file library frameworks python framework versions lib runpy py line in run code exec code run globals file users yipjustin tmp a py line in only pool file users yipjustin tmp a py line in only pool print executor submit get b txt result file library frameworks python framework versions lib concurrent futures base py line in result return self get result file library frameworks python framework versions lib concurrent futures base py line in get result raise self exception concurrent futures process brokenprocesspool a process in the process pool was terminated abruptly while the future was running or pending i am curious to see if anyone has experienced similar issue thanks code import concurrent futures from google cloud import storage def get path bucket name yipjustin a client storage client bucket client lookup bucket bucket name blob bucket get blob path return blob download as string def first main then pool good import nltk print get a txt with concurrent futures processpoolexecutor as executor print executor submit get b txt result def only pool bad import nltk with concurrent futures processpoolexecutor as executor print executor submit get b txt result
1
48,274
2,996,863,888
IssuesEvent
2015-07-23 01:12:25
handsontable/handsontable
https://api.github.com/repos/handsontable/handsontable
closed
manualColumnMove guide not showing up in the right spot
Change Milestone candidate Plugin: resize / move Priority: normal
I guess the title sums it up... =) When using manualColumnMove, you are able to move the columns, but the guide DIV isn't showing up in the right location (or sometimes not at all). Grab the handle for column "C" and slowly drag it to the right, the guide will show up directly over the "C". As you drag further the guide appears on the far right side of the "D" column, etc. Confirmed with 0.12.4 on IE, Chrome, and FF (Latest versions) http://jsfiddle.net/pascoea/2m78hqyc/1/
1.0
manualColumnMove guide not showing up in the right spot - I guess the title sums it up... =) When using manualColumnMove, you are able to move the columns, but the guide DIV isn't showing up in the right location (or sometimes not at all). Grab the handle for column "C" and slowly drag it to the right, the guide will show up directly over the "C". As you drag further the guide appears on the far right side of the "D" column, etc. Confirmed with 0.12.4 on IE, Chrome, and FF (Latest versions) http://jsfiddle.net/pascoea/2m78hqyc/1/
non_process
manualcolumnmove guide not showing up in the right spot i guess the title sums it up when using manualcolumnmove you are able to move the columns but the guide div isn t showing up in the right location or sometimes not at all grab the handle for column c and slowly drag it to the right the guide will show up directly over the c as you drag further the guide appears on the far right side of the d column etc confirmed with on ie chrome and ff latest versions
0
11,956
14,725,715,980
IssuesEvent
2021-01-06 05:23:26
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
opened
Deactivate Customers
anc-core anc-ui anp-2 ant-feature grt-ui processes pl-wish list
In GitLab by @kdjstudios on Oct 22, 2016, 23:42 Is it possible to deactivate customers and/or choose to only list "active" customers in the dropdown? If all accounts under a customer are terminated then the customer should not be displayed.
1.0
Deactivate Customers - In GitLab by @kdjstudios on Oct 22, 2016, 23:42 Is it possible to deactivate customers and/or choose to only list "active" customers in the dropdown? If all accounts under a customer are terminated then the customer should not be displayed.
process
deactivate customers in gitlab by kdjstudios on oct is it possible to deactivate customers and or choose to only list active customers in the dropdown if all accounts under a customer are terminated then the customer should not be displayed
1
908
3,371,067,636
IssuesEvent
2015-11-23 17:31:29
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
closed
ValueError: min() arg is an empty sequence
bug priority: high sct_process_segmentation
data: sct_testing_data/data/mt error: ~~~ sct_process_segmentation -i left_dorsal_column.nii.gz -p csa -l 2:5 -t label/template/MNI-Poly-AMU_level.nii.gz OK: left_dorsal_column.nii.gz Check parameters: .. segmentation file: left_dorsal_column.nii.gz Create temporary folder... mkdir tmp.151119161125_478478/ Copying input data to tmp folder and convert to nii... sct_convert -i /Users/julien/data/temp/sct_testing_data/data/mt/left_dorsal_column.nii.gz -o tmp.151119161125_478478/segmentation.nii.gz Change orientation to RPI... sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz Open segmentation volume... Get data dimensions... 40 x 40 x 5 Smooth centerline/segmentation... .. Get center of mass of the centerline/segmentation... .. Smoothing algo = hanning .. Windows length = 50 Window size is too small. No smoothing was applied. Window size is too small. No smoothing was applied. Compute CSA... Smooth CSA across slices... .. Hanning window: 50 mm Write text file... z=0: 8.44098367816 mm^2 z=1: 7.57861092753 mm^2 Create volume of CSA values... sct_image -i csa_volume_RPI.nii.gz -setorient RPI -o csa_volume.nii.gz Generate output files... WARNING: File /Users/julien/data/temp/sct_testing_data/data/mt/csa_volume.nii.gz already exists. Deleting it... File created: /Users/julien/data/temp/sct_testing_data/data/mt/csa_volume.nii.gz Find slices corresponding to vertebral levels... 
WARNING: the bottom vertebral level you selected is lower to the lowest level available --> Selected the lowest vertebral level available: 3 Traceback (most recent call last): File "/Users/julien/code/spinalcordtoolbox/bin/sct_process_segmentation", line 689, in <module> main() File "/Users/julien/code/spinalcordtoolbox/bin/sct_process_segmentation", line 155, in main compute_csa(fname_segmentation, verbose, remove_temp_files, step, smoothing_param, figure_fit, param.file_csa_volume, slices, vert_lev, fname_vertebral_labeling, algo_fitting = param.algo_fitting, type_window= param.type_window, window_length=param.window_length) File "/Users/julien/code/spinalcordtoolbox/bin/sct_process_segmentation", line 499, in compute_csa slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels(Image(path_data+file_csa_volume).data, vert_levels, Image(fname_vertebral_labeling).data, 1) File "/Users/julien/code/spinalcordtoolbox/scripts/sct_extract_metric.py", line 479, in get_slices_matching_with_vertebral_levels distance_min_among_positive_value = min(abs(distance[distance > 0])) # minimal distance among the negative ValueError: min() arg is an empty sequence ~~~
1.0
ValueError: min() arg is an empty sequence - data: sct_testing_data/data/mt error: ~~~ sct_process_segmentation -i left_dorsal_column.nii.gz -p csa -l 2:5 -t label/template/MNI-Poly-AMU_level.nii.gz OK: left_dorsal_column.nii.gz Check parameters: .. segmentation file: left_dorsal_column.nii.gz Create temporary folder... mkdir tmp.151119161125_478478/ Copying input data to tmp folder and convert to nii... sct_convert -i /Users/julien/data/temp/sct_testing_data/data/mt/left_dorsal_column.nii.gz -o tmp.151119161125_478478/segmentation.nii.gz Change orientation to RPI... sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz Open segmentation volume... Get data dimensions... 40 x 40 x 5 Smooth centerline/segmentation... .. Get center of mass of the centerline/segmentation... .. Smoothing algo = hanning .. Windows length = 50 Window size is too small. No smoothing was applied. Window size is too small. No smoothing was applied. Compute CSA... Smooth CSA across slices... .. Hanning window: 50 mm Write text file... z=0: 8.44098367816 mm^2 z=1: 7.57861092753 mm^2 Create volume of CSA values... sct_image -i csa_volume_RPI.nii.gz -setorient RPI -o csa_volume.nii.gz Generate output files... WARNING: File /Users/julien/data/temp/sct_testing_data/data/mt/csa_volume.nii.gz already exists. Deleting it... File created: /Users/julien/data/temp/sct_testing_data/data/mt/csa_volume.nii.gz Find slices corresponding to vertebral levels... 
WARNING: the bottom vertebral level you selected is lower to the lowest level available --> Selected the lowest vertebral level available: 3 Traceback (most recent call last): File "/Users/julien/code/spinalcordtoolbox/bin/sct_process_segmentation", line 689, in <module> main() File "/Users/julien/code/spinalcordtoolbox/bin/sct_process_segmentation", line 155, in main compute_csa(fname_segmentation, verbose, remove_temp_files, step, smoothing_param, figure_fit, param.file_csa_volume, slices, vert_lev, fname_vertebral_labeling, algo_fitting = param.algo_fitting, type_window= param.type_window, window_length=param.window_length) File "/Users/julien/code/spinalcordtoolbox/bin/sct_process_segmentation", line 499, in compute_csa slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels(Image(path_data+file_csa_volume).data, vert_levels, Image(fname_vertebral_labeling).data, 1) File "/Users/julien/code/spinalcordtoolbox/scripts/sct_extract_metric.py", line 479, in get_slices_matching_with_vertebral_levels distance_min_among_positive_value = min(abs(distance[distance > 0])) # minimal distance among the negative ValueError: min() arg is an empty sequence ~~~
process
valueerror min arg is an empty sequence data sct testing data data mt error sct process segmentation i left dorsal column nii gz p csa l t label template mni poly amu level nii gz ok left dorsal column nii gz check parameters segmentation file left dorsal column nii gz create temporary folder mkdir tmp copying input data to tmp folder and convert to nii sct convert i users julien data temp sct testing data data mt left dorsal column nii gz o tmp segmentation nii gz change orientation to rpi sct image i segmentation nii gz setorient rpi o segmentation rpi nii gz open segmentation volume get data dimensions x x smooth centerline segmentation get center of mass of the centerline segmentation smoothing algo hanning windows length window size is too small no smoothing was applied window size is too small no smoothing was applied compute csa smooth csa across slices hanning window mm write text file z mm z mm create volume of csa values sct image i csa volume rpi nii gz setorient rpi o csa volume nii gz generate output files warning file users julien data temp sct testing data data mt csa volume nii gz already exists deleting it file created users julien data temp sct testing data data mt csa volume nii gz find slices corresponding to vertebral levels warning the bottom vertebral level you selected is lower to the lowest level available selected the lowest vertebral level available traceback most recent call last file users julien code spinalcordtoolbox bin sct process segmentation line in main file users julien code spinalcordtoolbox bin sct process segmentation line in main compute csa fname segmentation verbose remove temp files step smoothing param figure fit param file csa volume slices vert lev fname vertebral labeling algo fitting param algo fitting type window param type window window length param window length file users julien code spinalcordtoolbox bin sct process segmentation line in compute csa slices vert levels list warning get slices matching with 
vertebral levels image path data file csa volume data vert levels image fname vertebral labeling data file users julien code spinalcordtoolbox scripts sct extract metric py line in get slices matching with vertebral levels distance min among positive value min abs distance minimal distance among the negative valueerror min arg is an empty sequence
1
684,222
23,411,760,056
IssuesEvent
2022-08-12 18:22:01
pgmpy/pgmpy
https://api.github.com/repos/pgmpy/pgmpy
closed
`factor_product` returns same object as input for single argument
Bug High Priority
### Subject of the issue `pgmpy.factors.base.factor_product` returns the same object as input. No copy is made ### Your environment * pgmpy version - 0.1.18 * Python version - 3.8.10 * Operating System - WSL 2 on Windows 10 Enterprise ### Steps to reproduce In the case where `factor_product` gets a single argument it returns the same object as the input ``` >>> from pgmpy.factors.discrete import DiscreteFactor >>> import numpy as np >>> phi = DiscreteFactor(['x1', 'x2'], [2,2], np.ones(4)) >>> from pgmpy.factors.base import factor_product >>> phi <DiscreteFactor representing phi(x1:2, x2:2) at 0x7f7e46d44370> >>> factor_product(*[phi]) <DiscreteFactor representing phi(x1:2, x2:2) at 0x7f7e46d44370> ``` This created a problem in `DBNInference._get_factor` where the returned factor is modified (https://github.com/pgmpy/pgmpy/blob/dev/pgmpy/inference/dbn_inference.py#L203) leading to the factors in `belief_prop.junction_tree` being modified when they probably shouldn't be ### Expected behaviour `factor_product` should probably return a copied input in the single argument case. ### Actual behaviour `factor_product` returns the same object as the input
1.0
`factor_product` returns same object as input for single argument - ### Subject of the issue `pgmpy.factors.base.factor_product` returns the same object as input. No copy is made ### Your environment * pgmpy version - 0.1.18 * Python version - 3.8.10 * Operating System - WSL 2 on Windows 10 Enterprise ### Steps to reproduce In the case where `factor_product` gets a single argument it returns the same object as the input ``` >>> from pgmpy.factors.discrete import DiscreteFactor >>> import numpy as np >>> phi = DiscreteFactor(['x1', 'x2'], [2,2], np.ones(4)) >>> from pgmpy.factors.base import factor_product >>> phi <DiscreteFactor representing phi(x1:2, x2:2) at 0x7f7e46d44370> >>> factor_product(*[phi]) <DiscreteFactor representing phi(x1:2, x2:2) at 0x7f7e46d44370> ``` This created a problem in `DBNInference._get_factor` where the returned factor is modified (https://github.com/pgmpy/pgmpy/blob/dev/pgmpy/inference/dbn_inference.py#L203) leading to the factors in `belief_prop.junction_tree` being modified when they probably shouldn't be ### Expected behaviour `factor_product` should probably return a copied input in the single argument case. ### Actual behaviour `factor_product` returns the same object as the input
non_process
factor product returns same object as input for single argument subject of the issue pgmpy factors base factor product returns the same object as input no copy is made your environment pgmpy version python version operating system wsl on windows enterprise steps to reproduce in the case where factor product gets a single argument it returns the same object as the input from pgmpy factors discrete import discretefactor import numpy as np phi discretefactor np ones from pgmpy factors base import factor product phi factor product this created a problem in dbninference get factor where the returned factor is modified leading to the factors in belief prop junction tree being modified when they probably shouldn t be expected behaviour factor product should probably return a copied input in the single argument case actual behaviour factor product returns the same object as the input
0
10,653
13,449,495,067
IssuesEvent
2020-09-08 16:59:30
amor71/LiuAlgoTrader
https://api.github.com/repos/amor71/LiuAlgoTrader
closed
automatic optimization for number of consumer process
in-process
automatically calculate the optimal number of process based on HW capabilities
1.0
automatic optimization for number of consumer process - automatically calculate the optimal number of process based on HW capabilities
process
automatic optimization for number of consumer process automatically calculate the optimal number of process based on hw capabilities
1
13,314
15,783,717,141
IssuesEvent
2021-04-01 14:19:14
googleapis/python-bigquery-sqlalchemy
https://api.github.com/repos/googleapis/python-bigquery-sqlalchemy
closed
Include sqlalchemy/VERSION in the user-agent string when constructing BQ client(s)
api: bigquery type: process
**Is your feature request related to a problem? Please describe.** I'd like to be able to attribute requests from this library. This is useful for measuring the impact of this project and helping me prioritize engineering resources across our open source connectors. **Describe the solution you'd like** Construct a [ClientInfo](https://googleapis.dev/python/google-api-core/latest/client_info.html) object and pass to the `bigquery.Client` constructor whereever we use it. **Describe alternatives you've considered** N/A **Additional context** * Ibis implementation: https://github.com/ibis-project/ibis/blob/165f78de8f4f0121ba2c601b5c9f89bc0f65a593/ibis/backends/bigquery/client.py#L55-L62 and https://github.com/ibis-project/ibis/blob/165f78de8f4f0121ba2c601b5c9f89bc0f65a593/ibis/backends/bigquery/client.py#L407-L411 * Pandas implementation: https://github.com/pydata/pandas-gbq/blob/22a6064ee616fbdd14ce2c8bf8bfe1ed7d3b6291/pandas_gbq/gbq.py#L401-L428
1.0
Include sqlalchemy/VERSION in the user-agent string when constructing BQ client(s) - **Is your feature request related to a problem? Please describe.** I'd like to be able to attribute requests from this library. This is useful for measuring the impact of this project and helping me prioritize engineering resources across our open source connectors. **Describe the solution you'd like** Construct a [ClientInfo](https://googleapis.dev/python/google-api-core/latest/client_info.html) object and pass to the `bigquery.Client` constructor whereever we use it. **Describe alternatives you've considered** N/A **Additional context** * Ibis implementation: https://github.com/ibis-project/ibis/blob/165f78de8f4f0121ba2c601b5c9f89bc0f65a593/ibis/backends/bigquery/client.py#L55-L62 and https://github.com/ibis-project/ibis/blob/165f78de8f4f0121ba2c601b5c9f89bc0f65a593/ibis/backends/bigquery/client.py#L407-L411 * Pandas implementation: https://github.com/pydata/pandas-gbq/blob/22a6064ee616fbdd14ce2c8bf8bfe1ed7d3b6291/pandas_gbq/gbq.py#L401-L428
process
include sqlalchemy version in the user agent string when constructing bq client s is your feature request related to a problem please describe i d like to be able to attribute requests from this library this is useful for measuring the impact of this project and helping me prioritize engineering resources across our open source connectors describe the solution you d like construct a object and pass to the bigquery client constructor whereever we use it describe alternatives you ve considered n a additional context ibis implementation and pandas implementation
1
8,190
11,387,027,634
IssuesEvent
2020-01-29 14:21:09
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
opened
ANALISTA DE PROCESSOS na [AVANSYS]
HELP WANTED MODELAGEM DE PROCESSOS PROCESSOS SALVADOR
<!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Local - Salvador ## Requisitos **Obrigatórios:** - Ensino superior completo - experiência em analise, diagnostico de processos de negócios com foco no cliente, modelagem de processos para automação, criação de indicadores de desempenho de processos, monitoramento de processos com base em indicadores de desempenho e identificação de oportunidade de evoluções e correções. ## Contratação - a combinar ## Nossa empresa - A Avansys Tecnologia presta efetivamente serviços especializados de desenvolvimento e manutenção de softwares utilizando sua Fabrica de Software CMMI3, Consultoria para todos os tipos de organização, serviço de Service Desk e Outsourcing para suprir todas as necessidades de sua organização. ## Como se candidatar - Por favor envie um email para curriculo@avansys.com.br com seu CV anexado - enviar no assunto: Analista de processos - 2020
2.0
ANALISTA DE PROCESSOS na [AVANSYS] - <!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Local - Salvador ## Requisitos **Obrigatórios:** - Ensino superior completo - experiência em analise, diagnostico de processos de negócios com foco no cliente, modelagem de processos para automação, criação de indicadores de desempenho de processos, monitoramento de processos com base em indicadores de desempenho e identificação de oportunidade de evoluções e correções. ## Contratação - a combinar ## Nossa empresa - A Avansys Tecnologia presta efetivamente serviços especializados de desenvolvimento e manutenção de softwares utilizando sua Fabrica de Software CMMI3, Consultoria para todos os tipos de organização, serviço de Service Desk e Outsourcing para suprir todas as necessidades de sua organização. ## Como se candidatar - Por favor envie um email para curriculo@avansys.com.br com seu CV anexado - enviar no assunto: Analista de processos - 2020
process
analista de processos na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na local salvador requisitos obrigatórios ensino superior completo experiência em analise diagnostico de processos de negócios com foco no cliente modelagem de processos para automação criação de indicadores de desempenho de processos monitoramento de processos com base em indicadores de desempenho e identificação de oportunidade de evoluções e correções contratação a combinar nossa empresa a avansys tecnologia presta efetivamente serviços especializados de desenvolvimento e manutenção de softwares utilizando sua fabrica de software consultoria para todos os tipos de organização serviço de service desk e outsourcing para suprir todas as necessidades de sua organização como se candidatar por favor envie um email para curriculo avansys com br com seu cv anexado enviar no assunto analista de processos
1
13,042
15,384,979,022
IssuesEvent
2021-03-03 05:42:43
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] Sign up flow > OS suggestions for password disrupting user flow
Bug P1 Process: under observation UX iOS
During sign-up, there are OS-triggered password/Keychain suggestions is interfering with the user flow as it is not easy to use or dismiss them and get back the default keypad to type in a password. The user gets stuck occasionally when this happens and needs to go a screen back and revisit the screen again to load it afresh and try again. Issue is not reproducible every time but more often when a new version of the app is installed on the device and used for the first time. Marking it as 'under observation' for QA to replicate exact steps for it.
1.0
[iOS] Sign up flow > OS suggestions for password disrupting user flow - During sign-up, there are OS-triggered password/Keychain suggestions is interfering with the user flow as it is not easy to use or dismiss them and get back the default keypad to type in a password. The user gets stuck occasionally when this happens and needs to go a screen back and revisit the screen again to load it afresh and try again. Issue is not reproducible every time but more often when a new version of the app is installed on the device and used for the first time. Marking it as 'under observation' for QA to replicate exact steps for it.
process
sign up flow os suggestions for password disrupting user flow during sign up there are os triggered password keychain suggestions is interfering with the user flow as it is not easy to use or dismiss them and get back the default keypad to type in a password the user gets stuck occasionally when this happens and needs to go a screen back and revisit the screen again to load it afresh and try again issue is not reproducible every time but more often when a new version of the app is installed on the device and used for the first time marking it as under observation for qa to replicate exact steps for it
1
15,876
20,053,003,831
IssuesEvent
2022-02-03 09:04:48
plazi/treatmentBank
https://api.github.com/repos/plazi/treatmentBank
opened
figures not linked since Feb 1.
help wanted invalid processing BLR
@gsautter is there a reason that the figures in the recently batch processed articles (since Feb 1) are not linked to the images? e.g. https://tb.plazi.org/GgServer/summary/FFC07C53B27DBE37FFD19038FFEAFFE2 ![image](https://user-images.githubusercontent.com/4609956/152311825-223ebd7b-56ec-40cf-9f0b-bb34e301f977.png) https://tb.plazi.org/GgServer/summary/FFFF7518FFBD1432FF8A2D7CA4693666a ![image](https://user-images.githubusercontent.com/4609956/152311948-42a30f3c-7989-4c0d-8f6b-d57ce7b95ae7.png) https://tb.plazi.org/GgServer/summary/FF9F895C2F5BFFD79154FFEFFFFEFFC4 ![image](https://user-images.githubusercontent.com/4609956/152312162-16543ac4-25cb-4d86-9bf0-24ce46df121b.png)
1.0
figures not linked since Feb 1. - @gsautter is there a reason that the figures in the recently batch processed articles (since Feb 1) are not linked to the images? e.g. https://tb.plazi.org/GgServer/summary/FFC07C53B27DBE37FFD19038FFEAFFE2 ![image](https://user-images.githubusercontent.com/4609956/152311825-223ebd7b-56ec-40cf-9f0b-bb34e301f977.png) https://tb.plazi.org/GgServer/summary/FFFF7518FFBD1432FF8A2D7CA4693666a ![image](https://user-images.githubusercontent.com/4609956/152311948-42a30f3c-7989-4c0d-8f6b-d57ce7b95ae7.png) https://tb.plazi.org/GgServer/summary/FF9F895C2F5BFFD79154FFEFFFFEFFC4 ![image](https://user-images.githubusercontent.com/4609956/152312162-16543ac4-25cb-4d86-9bf0-24ce46df121b.png)
process
figures not linked since feb gsautter is there a reason that the figures in the recently batch processed articles since feb are not linked to the images e g
1
102,718
16,581,278,597
IssuesEvent
2021-05-31 12:12:23
nextcloud/server
https://api.github.com/repos/nextcloud/server
opened
Improve ratelimiting on IPv6
enhancement security
For IPv6 ratelimiting we should consider increasing the netblock size based on invalid attempts from any given block. Ref https://github.com/nextcloud-gmbh/h1/issues/55
True
Improve ratelimiting on IPv6 - For IPv6 ratelimiting we should consider increasing the netblock size based on invalid attempts from any given block. Ref https://github.com/nextcloud-gmbh/h1/issues/55
non_process
improve ratelimiting on for ratelimiting we should consider increasing the netblock size based on invalid attempts from any given block ref
0
3,388
13,159,731,073
IssuesEvent
2020-08-10 16:19:35
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
[DataTable] with batch actions, batch actions covers TableToolbarContent, unable to use Search after selection
component: data-table status: needs triage 🕵️‍♀️ status: waiting for maintainer response 💬 type: bug 🐛
## Description Issue is related with DataTable, TableToolbarContent, TableBatchActions and TableToolbarSearch. When at least one row is selected, TableBatchActions are covering TableToolbarContent, which means there isn't possible to use search (TableToolbarSearch) anymore. Try: http://react.carbondesignsystem.com/?path=/story/datatable--with-batch-actions ## Background IBM application would like to use paginated DataTable with Search input, for tables with 1000+ rows, e.g. 1. Search for **gender**, select 2. Search for **city**, select etc. ## Steps to reproduce the issue **Scenario I:** 1. Search for '1' 2. Select Load Balancer 1 Result: Batch actions are covering toolbar content, you are not able to clear search input and search for different value e.g. '3' **Scenario II:** 1. Select Load Balancer 1 Result: Batch actions are covering toolbar content, you are not able to use search e.g. to find '3' ![image](https://user-images.githubusercontent.com/17591704/65584144-bbbc5280-df80-11e9-82b6-1014085bf48f.png) ## What package(s) are you using? "carbon-components": "10.5.0", "carbon-components-react": "7.5.0", "carbon-icons": "7.0.7",
True
[DataTable] with batch actions, batch actions covers TableToolbarContent, unable to use Search after selection - ## Description Issue is related with DataTable, TableToolbarContent, TableBatchActions and TableToolbarSearch. When at least one row is selected, TableBatchActions are covering TableToolbarContent, which means there isn't possible to use search (TableToolbarSearch) anymore. Try: http://react.carbondesignsystem.com/?path=/story/datatable--with-batch-actions ## Background IBM application would like to use paginated DataTable with Search input, for tables with 1000+ rows, e.g. 1. Search for **gender**, select 2. Search for **city**, select etc. ## Steps to reproduce the issue **Scenario I:** 1. Search for '1' 2. Select Load Balancer 1 Result: Batch actions are covering toolbar content, you are not able to clear search input and search for different value e.g. '3' **Scenario II:** 1. Select Load Balancer 1 Result: Batch actions are covering toolbar content, you are not able to use search e.g. to find '3' ![image](https://user-images.githubusercontent.com/17591704/65584144-bbbc5280-df80-11e9-82b6-1014085bf48f.png) ## What package(s) are you using? "carbon-components": "10.5.0", "carbon-components-react": "7.5.0", "carbon-icons": "7.0.7",
non_process
with batch actions batch actions covers tabletoolbarcontent unable to use search after selection description issue is related with datatable tabletoolbarcontent tablebatchactions and tabletoolbarsearch when at least one row is selected tablebatchactions are covering tabletoolbarcontent which means there isn t possible to use search tabletoolbarsearch anymore try background ibm application would like to use paginated datatable with search input for tables with rows e g search for gender select search for city select etc steps to reproduce the issue scenario i search for select load balancer result batch actions are covering toolbar content you are not able to clear search input and search for different value e g scenario ii select load balancer result batch actions are covering toolbar content you are not able to use search e g to find what package s are you using carbon components carbon components react carbon icons
0
63,587
12,341,727,121
IssuesEvent
2020-05-14 22:38:50
NCAR/MET
https://api.github.com/repos/NCAR/MET
closed
Fix several small bugs in met-9.0.
component: library code priority: high requestor: NCAR/RAL type: bug
## Describe the Problem ## Several small issues have arisen in the last week for which bugfixes should be provided. For convenience I'm grouping them into one issue and pull request. The details are listed below: (1) The Gerrity score (GER column of MCTS line type) has nan's (not-a-number) in the output, as reported by Julie Prestopnik for the RAL Skymet project. nan's are never acceptable in the output from the MET tools. They indicate that an illegal mathematical option was performed. MET should check for these and write a bad data value to the output instead. (2) Specifying MAXGAUSS as an smoothing method in the interp dictionary of Grid-Stat results in this error message, as reported by Tina Kalb: ``` ERROR : interp_gaussian_dp() -> the gaussian weights were not computed (max_r: 0). ``` MET should be fixed to support this smoothing method. (3) The compilation of ascii2nc fails when not using the "--enable-python" configuration option, as reported by Hank Fisher when compiling for HWT. In ascii2nc.cc, the include "global_python.h" line should be moved into the ifdef ENABLE_PYTHON block. (4) Update the user’s guide: The descriptions of the MED_FO and MED_OF statistics in Appendix C should be clarified, as reported by Sarah Griffin via met-help and confirmed by Eric Gilleland. See comments below. Add note about desc[] to the TC-Gen section, as described in the comment below from Dan. ### Expected Behavior ### (1) nan's should never be written by MET. (2) Grid-Stat should support the MAXGAUSS smoothing method. (3) ascii2nc should compile with and without the --enable-python option. (4) The documentation should be clarified. ### Environment ### Describe your runtime environment: Various. ### To Reproduce ### Describe the steps to reproduce the behavior: For issue (1): 1. Save the attached mctc.txt file containing a single MCTC output line from Point-Stat: [mctc.txt](https://github.com/NCAR/MET/files/4618783/mctc.txt) 2. Run the following Stat-Analysis job: ``` stat_analysis -lookin mctc.txt -job aggregate_stat -line_type MCTC -out_line_type MCTS ``` 3. Note the nan value in the output: ``` JOB_LIST: -job aggregate_stat -line_type MCTC -out_line_type MCTS -out_alpha 0.05000 COL_NAME: TOTAL N_CAT ACC ACC_NCL ACC_NCU ACC_BCL ACC_BCU HK HK_BCL HK_BCU HSS HSS_BCL HSS_BCU GER GER_BCL GER_BCU MCTS: 326 9 0.93865 0.90715 0.95994 NA NA 0.21065 NA NA 0.11063 NA NA nan NA NA ``` For issue (2): Run Grid-Stat using the MAXGAUSS method: ``` { method = MAXGAUSS; width = 11; } ``` For issue (3): Not necessary since Hank already confirmed that the proposed change fixes the compilation problem. ### Relevant Deadlines ### None. ### Funding Source ### None. ## Define the Metadata ## ### Assignee ### - [X] Select **engineer(s)** or **no engineer** required (John Halley Gotway) - [X] Select **scientist(s)** or **no scientist** required (Eric Gilleland) ### Labels ### - [X] Select **component(s)** - [X] Select **priority** - [X] Select **requestor(s)** ### Projects and Milestone ### - [X] Review **projects** and select relevant **Repository** and **Organization** ones - [X] Select **milestone** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [X] [METplus](https://github.com/NCAR/METplus/issues/new/choose), [MET](https://github.com/NCAR/MET/issues/new/choose), [METdb](https://github.com/NCAR/METdb/issues/new/choose), [METviewer](https://github.com/NCAR/METviewer/issues/new/choose), [METexpress](https://github.com/NCAR/METexpress/issues/new/choose), [METcalcpy](https://github.com/NCAR/METcalcpy/issues/new/choose), [METplotpy](https://github.com/NCAR/METplotpy/issues/new/choose) No impacts. ## Bugfix Checklist ## See the [METplus Workflow](https://ncar.github.io/METplus/Contributors_Guide/github_workflow.html) for details. - [x] Complete the issue definition above. - [x] Fork this repository or create a branch of **master_\<Version>**. Branch name: `bugfix_<Issue Number>_master_<Version>_<Description>` - [x] Fix the bug and test your changes. - [x] Add/update unit tests. - [x] Add/update documentation. - [x] Push local changes to GitHub. - [x] Submit a pull request to merge into **master_\<Version>**. Pull request: `bugfix <Issue Number> master_<Version> <Description>` - [x] Iterate until the reviewer(s) accept and merge your changes. - [x] Delete your fork or branch. - [x] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` - [x] Close this issue.
1.0
Fix several small bugs in met-9.0. - ## Describe the Problem ## Several small issues have arisen in the last week for which bugfixes should be provided. For convenience I'm grouping them into one issue and pull request. The details are listed below: (1) The Gerrity score (GER column of MCTS line type) has nan's (not-a-number) in the output, as reported by Julie Prestopnik for the RAL Skymet project. nan's are never acceptable in the output from the MET tools. They indicate that an illegal mathematical option was performed. MET should check for these and write a bad data value to the output instead. (2) Specifying MAXGAUSS as an smoothing method in the interp dictionary of Grid-Stat results in this error message, as reported by Tina Kalb: ``` ERROR : interp_gaussian_dp() -> the gaussian weights were not computed (max_r: 0). ``` MET should be fixed to support this smoothing method. (3) The compilation of ascii2nc fails when not using the "--enable-python" configuration option, as reported by Hank Fisher when compiling for HWT. In ascii2nc.cc, the include "global_python.h" line should be moved into the ifdef ENABLE_PYTHON block. (4) Update the user’s guide: The descriptions of the MED_FO and MED_OF statistics in Appendix C should be clarified, as reported by Sarah Griffin via met-help and confirmed by Eric Gilleland. See comments below. Add note about desc[] to the TC-Gen section, as described in the comment below from Dan. ### Expected Behavior ### (1) nan's should never be written by MET. (2) Grid-Stat should support the MAXGAUSS smoothing method. (3) ascii2nc should compile with and without the --enable-python option. (4) The documentation should be clarified. ### Environment ### Describe your runtime environment: Various. ### To Reproduce ### Describe the steps to reproduce the behavior: For issue (1): 1. Save the attached mctc.txt file containing a single MCTC output line from Point-Stat: [mctc.txt](https://github.com/NCAR/MET/files/4618783/mctc.txt) 2. Run the following Stat-Analysis job: ``` stat_analysis -lookin mctc.txt -job aggregate_stat -line_type MCTC -out_line_type MCTS ``` 3. Note the nan value in the output: ``` JOB_LIST: -job aggregate_stat -line_type MCTC -out_line_type MCTS -out_alpha 0.05000 COL_NAME: TOTAL N_CAT ACC ACC_NCL ACC_NCU ACC_BCL ACC_BCU HK HK_BCL HK_BCU HSS HSS_BCL HSS_BCU GER GER_BCL GER_BCU MCTS: 326 9 0.93865 0.90715 0.95994 NA NA 0.21065 NA NA 0.11063 NA NA nan NA NA ``` For issue (2): Run Grid-Stat using the MAXGAUSS method: ``` { method = MAXGAUSS; width = 11; } ``` For issue (3): Not necessary since Hank already confirmed that the proposed change fixes the compilation problem. ### Relevant Deadlines ### None. ### Funding Source ### None. ## Define the Metadata ## ### Assignee ### - [X] Select **engineer(s)** or **no engineer** required (John Halley Gotway) - [X] Select **scientist(s)** or **no scientist** required (Eric Gilleland) ### Labels ### - [X] Select **component(s)** - [X] Select **priority** - [X] Select **requestor(s)** ### Projects and Milestone ### - [X] Review **projects** and select relevant **Repository** and **Organization** ones - [X] Select **milestone** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [X] [METplus](https://github.com/NCAR/METplus/issues/new/choose), [MET](https://github.com/NCAR/MET/issues/new/choose), [METdb](https://github.com/NCAR/METdb/issues/new/choose), [METviewer](https://github.com/NCAR/METviewer/issues/new/choose), [METexpress](https://github.com/NCAR/METexpress/issues/new/choose), [METcalcpy](https://github.com/NCAR/METcalcpy/issues/new/choose), [METplotpy](https://github.com/NCAR/METplotpy/issues/new/choose) No impacts. ## Bugfix Checklist ## See the [METplus Workflow](https://ncar.github.io/METplus/Contributors_Guide/github_workflow.html) for details. - [x] Complete the issue definition above. - [x] Fork this repository or create a branch of **master_\<Version>**. Branch name: `bugfix_<Issue Number>_master_<Version>_<Description>` - [x] Fix the bug and test your changes. - [x] Add/update unit tests. - [x] Add/update documentation. - [x] Push local changes to GitHub. - [x] Submit a pull request to merge into **master_\<Version>**. Pull request: `bugfix <Issue Number> master_<Version> <Description>` - [x] Iterate until the reviewer(s) accept and merge your changes. - [x] Delete your fork or branch. - [x] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` - [x] Close this issue.
non_process
fix several small bugs in met describe the problem several small issues have arisen in the last week for which bugfixes should be provided for convenience i m grouping them into one issue and pull request the details are listed below the gerrity score ger column of mcts line type has nan s not a number in the output as reported by julie prestopnik for the ral skymet project nan s are never acceptable in the output from the met tools they indicate that an illegal mathematical option was performed met should check for these and write a bad data value to the output instead specifying maxgauss as an smoothing method in the interp dictionary of grid stat results in this error message as reported by tina kalb error interp gaussian dp the gaussian weights were not computed max r met should be fixed to support this smoothing method the compilation of fails when not using the enable python configuration option as reported by hank fisher when compiling for hwt in cc the include global python h line should be moved into the ifdef enable python block update the user’s guide the descriptions of the med fo and med of statistics in appendix c should be clarified as reported by sarah griffin via met help and confirmed by eric gilleland see comments below add note about desc to the tc gen section as described in the comment below from dan expected behavior nan s should never be written by met grid stat should support the maxgauss smoothing method should compile with and without the enable python option the documentation should be clarified environment describe your runtime environment various to reproduce describe the steps to reproduce the behavior for issue save the attached mctc txt file containing a single mctc output line from point stat run the following stat analysis job stat analysis lookin mctc txt job aggregate stat line type mctc out line type mcts note the nan value in the output job list job aggregate stat line type mctc out line type mcts out alpha col name total n cat acc acc ncl acc ncu acc bcl acc bcu hk hk bcl hk bcu hss hss bcl hss bcu ger ger bcl ger bcu mcts na na na na na na nan na na for issue run grid stat using the maxgauss method method maxgauss width for issue not necessary since hank already confirmed that the proposed change fixes the compilation problem relevant deadlines none funding source none define the metadata assignee select engineer s or no engineer required john halley gotway select scientist s or no scientist required eric gilleland labels select component s select priority select requestor s projects and milestone review projects and select relevant repository and organization ones select milestone define related issue s consider the impact to the other metplus components no impacts bugfix checklist see the for details complete the issue definition above fork this repository or create a branch of master branch name bugfix master fix the bug and test your changes add update unit tests add update documentation push local changes to github submit a pull request to merge into master pull request bugfix master iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop close this issue
0
20,300
26,938,617,486
IssuesEvent
2023-02-07 23:12:05
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
incompatible_python_disable_py2
type: process incompatible-change team-Rules-Python migration-ready breaking-change-7.0
This issue tracks enabling `--incompatible_python_disable_py2` (implemented in https://github.com/bazelbuild/bazel/commit/d1bbf4b7698e019c11b7d1c483eb0b4959954060) as part of #15684 When enabled, the Python rules will reject Python 2 only attribute values. This includes: * `python_version=PY2` * `srcs_version=PY2` and `srcs_version=PY2ONLY` * Setting `py2_runtime` To fix, you must update your rules (or dependencies) to not set these Python 2 values: * For `srcs_version`: remove the attribute. * For `py2_runtime`: remove the attribute. * For `python_version`: remove the attribute from `py_binary` and `py_test`. For `py_runtime`, the entire target can be deleted, as there is nothing that can use it. Note that the `PyInfo` and `PyRuntimeInfo` providers do not raise an error. This is simply because providers can't access flags to know whether to enforce behavior; code should still be updated as appropriate: * For PyInfo, stop reading/writing `has_py2_only_sources` and `has_py3_only_sources`. * For PyRuntimeInfo, set `python_version=PY3` Most of this can be automated using [buildozer](https://github.com/bazelbuild/buildtools/tree/master/buildozer) ``` remove python_version //...%py_binary remove python_version //...%py_test remove srcs_version //...%py_binary remove srcs_version //...%py_test remove srcs_version //...%py_library remove py2_runtime //...%py_runtime_pair ``` It may not be your code with a problem, but a dependency. In this case, you probably just need to update your dependency to a newer version -- most projects no longer require Python 2. * For rules_pkg, upgrade to 0.3.0 or greater
1.0
incompatible_python_disable_py2 - This issue tracks enabling `--incompatible_python_disable_py2` (implemented in https://github.com/bazelbuild/bazel/commit/d1bbf4b7698e019c11b7d1c483eb0b4959954060) as part of #15684 When enabled, the Python rules will reject Python 2 only attribute values. This includes: * `python_version=PY2` * `srcs_version=PY2` and `srcs_version=PY2ONLY` * Setting `py2_runtime` To fix, you must update your rules (or dependencies) to not set these Python 2 values: * For `srcs_version`: remove the attribute. * For `py2_runtime`: remove the attribute. * For `python_version`: remove the attribute from `py_binary` and `py_test`. For `py_runtime`, the entire target can be deleted, as there is nothing that can use it. Note that the `PyInfo` and `PyRuntimeInfo` providers do not raise an error. This is simply because providers can't access flags to know whether to enforce behavior; code should still be updated as appropriate: * For PyInfo, stop reading/writing `has_py2_only_sources` and `has_py3_only_sources`. * For PyRuntimeInfo, set `python_version=PY3` Most of this can be automated using [buildozer](https://github.com/bazelbuild/buildtools/tree/master/buildozer) ``` remove python_version //...%py_binary remove python_version //...%py_test remove srcs_version //...%py_binary remove srcs_version //...%py_test remove srcs_version //...%py_library remove py2_runtime //...%py_runtime_pair ``` It may not be your code with a problem, but a dependency. In this case, you probably just need to update your dependency to a newer version -- most projects no longer require Python 2. * For rules_pkg, upgrade to 0.3.0 or greater
process
incompatible python disable this issue tracks enabling incompatible python disable implemented in as part of when enabled the python rules will reject python only attribute values this includes python version srcs version and srcs version setting runtime to fix you must update your rules or dependencies to not set these python values for srcs version remove the attribute for runtime remove the attribute for python version remove the attribute from py binary and py test for py runtime the entire target can be deleted as there is nothing that can use it note that the pyinfo and pyruntimeinfo providers do not raise an error this is simply because providers can t access flags to know whether to enforce behavior code should still be updated as appropriate for pyinfo stop reading writing has only sources and has only sources for pyruntimeinfo set python version most of this can be automated using remove python version py binary remove python version py test remove srcs version py binary remove srcs version py test remove srcs version py library remove runtime py runtime pair it may not be your code with a problem but a dependency in this case you probably just need to update your dependency to a newer version most projects no longer require python for rules pkg upgrade to or greater
1
7,436
10,550,312,545
IssuesEvent
2019-10-03 10:45:17
ericadamski/alphabet-keys
https://api.github.com/repos/ericadamski/alphabet-keys
closed
Add a link back to this repo from the site
enhancement good first issue hacktoberfest help wanted 👩‍💻in process
Add a github icon that links back to this repo so people can explore the code!
1.0
Add a link back to this repo from the site - Add a github icon that links back to this repo so people can explore the code!
process
add a link back to this repo from the site add a github icon that links back to this repo so people can explore the code
1
39,667
6,760,334,151
IssuesEvent
2017-10-24 20:15:33
iRail/iRail-docs
https://api.github.com/repos/iRail/iRail-docs
opened
Mention base url
missing-documentation
People likely want to know where our API is located. This is not mentioned in the current docs. See https://github.com/iRail/iRail/issues/324
1.0
Mention base url - People likely want to know where our API is located. This is not mentioned in the current docs. See https://github.com/iRail/iRail/issues/324
non_process
mention base url people likely want to know where our api is located this is not mentioned in the current docs see
0
152,678
13,464,358,774
IssuesEvent
2020-09-09 19:05:54
AssemblyScript/working-group
https://api.github.com/repos/AssemblyScript/working-group
closed
AssemblyScript Public Meeting #20 - September 9th, 2020
documentation enhancement good first issue
# Date and Time This public meeting will take place: September 9th, 2020, 18:00 UTC (11:00 AM US PDT, UTC -8) # General Agenda * Agenda Items from comments left on this Github issue * Additional in-meeting comments / discussion * [If time allows] Recap of [WebAssembly CG meeting](https://github.com/WebAssembly/meetings) if anyone attended Feel free to comment on this issue if you have any agenda items you would like to bring up. Meeting Notes will be placed on this issue for those who cannot make the meeting. # Meeting Information We will use Google Meet for our meetings. Our meeting room is: https://meet.google.com/ofw-zkpi-aek . You can also [add the event to your Google Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=cjU0ajZtY2MwajY2NTMxOHVwNzlvb2VnOGlfMjAyMDA1MDZUMTgwMDAwWiBhYXJvbkBhYXJvbnRoZWRldi5jb20&tmsrc=aaron%40aaronthedev.com) Anyone who is contributing to the AssemblyScript project, building something with AssemblyScript, interested in the Assembly project or WebAssembly in general, is welcome to join! 😄 Notes will be taken by the host, and posted after the meeting. Notes are free to be edited through comments on the meeting notes at a later time.
1.0
AssemblyScript Public Meeting #20 - September 9th, 2020 - # Date and Time This public meeting will take place: September 9th, 2020, 18:00 UTC (11:00 AM US PDT, UTC -8) # General Agenda * Agenda Items from comments left on this Github issue * Additional in-meeting comments / discussion * [If time allows] Recap of [WebAssembly CG meeting](https://github.com/WebAssembly/meetings) if anyone attended Feel free to comment on this issue if you have any agenda items you would like to bring up. Meeting Notes will be placed on this issue for those who cannot make the meeting. # Meeting Information We will use Google Meet for our meetings. Our meeting room is: https://meet.google.com/ofw-zkpi-aek . You can also [add the event to your Google Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=cjU0ajZtY2MwajY2NTMxOHVwNzlvb2VnOGlfMjAyMDA1MDZUMTgwMDAwWiBhYXJvbkBhYXJvbnRoZWRldi5jb20&tmsrc=aaron%40aaronthedev.com) Anyone who is contributing to the AssemblyScript project, building something with AssemblyScript, interested in the Assembly project or WebAssembly in general, is welcome to join! 😄 Notes will be taken by the host, and posted after the meeting. Notes are free to be edited through comments on the meeting notes at a later time.
non_process
assemblyscript public meeting september date and time this public meeting will take place september utc am us pdt utc general agenda agenda items from comments left on this github issue additional in meeting comments discussion recap of if anyone attended feel free to comment on this issue if you have any agenda items you would like to bring up meeting notes will be placed on this issue for those who cannot make the meeting meeting information we will use google meet for our meetings our meeting room is you can also anyone who is contributing to the assemblyscript project building something with assemblyscript interested in the assembly project or webassembly in general is welcome to join 😄 notes will be taken by the host and posted after the meeting notes are free to be edited through comments on the meeting notes at a later time
0
6,375
9,422,917,317
IssuesEvent
2019-04-11 10:29:30
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
native.existing_rules in BUILD files
P1 team-Starlark type: process
In general, when there's a global symbol in BUILD files (e.g. `cc_library`), we can use it from the bzl file with the native module (e.g. `native.cc_library`). However: * The native module is (accidentally) visible in BUILD files. So `native.cc_library` also works in BUILD files. I think we shouldn't expose the native module anymore in BUILD files. * `native.existing_rules` and `native.existing_rule` are the only two functions that are not exposed as global symbols in BUILD files. I see two possibilities: 1. Just remove the native module from BUILD files. `existing_rules` will not be usable from BUILD files directly. This can be fine, as we don't want to encourage the use of this function. It was added as a workaround in macros. 2. Remove the native module from BUILD files, expose `existing_rule`(s) as a global function in BUILD files. This is a bit more consistent. Note that the `existing_rule`(s) functions are poorly defined, have design issues, can lead to performance and maintenance issues. So we don't really want to encourage their use. (cc @c-parsons, @brandjon, @alandonovan)
1.0
native.existing_rules in BUILD files - In general, when there's a global symbol in BUILD files (e.g. `cc_library`), we can use it from the bzl file with the native module (e.g. `native.cc_library`). However: * The native module is (accidentally) visible in BUILD files. So `native.cc_library` also works in BUILD files. I think we shouldn't expose the native module anymore in BUILD files. * `native.existing_rules` and `native.existing_rule` are the only two functions that are not exposed as global symbols in BUILD files. I see two possibilities: 1. Just remove the native module from BUILD files. `existing_rules` will not be usable from BUILD files directly. This can be fine, as we don't want to encourage the use of this function. It was added as a workaround in macros. 2. Remove the native module from BUILD files, expose `existing_rule`(s) as a global function in BUILD files. This is a bit more consistent. Note that the `existing_rule`(s) functions are poorly defined, have design issues, can lead to performance and maintenance issues. So we don't really want to encourage their use. (cc @c-parsons, @brandjon, @alandonovan)
process
native existing rules in build files in general when there s a global symbol in build files e g cc library we can use it from the bzl file with the native module e g native cc library however the native module is accidentally visible in build files so native cc library also works in build files i think we shouldn t expose the native module anymore in build files native existing rules and native existing rule are the only two functions that are not exposed as global symbols in build files i see two possibilities just remove the native module from build files existing rules will not be usable from build files directly this can be fine as we don t want to encourage the use of this function it was added as a workaround in macros remove the native module from build files expose existing rule s as a global function in build files this is a bit more consistent note that the existing rule s functions are poorly defined have design issues can lead to performance and maintenance issues so we don t really want to encourage their use cc c parsons brandjon alandonovan
1
38,466
8,486,204,263
IssuesEvent
2018-10-26 10:07:04
mozilla/addons-frontend
https://api.github.com/repos/mozilla/addons-frontend
closed
The holy grail: develop locally with the production Docker config
component: code quality priority: p4 triaged
We can do it! AMO is deployed to production with [Docker](https://github.com/mozilla-services/puppet-config/tree/master/amo/modules) and we develop locally with a completely separate and unrelated [Docker configuration](http://addons-server.readthedocs.io/en/latest/topics/install/docker.html). Let's adjust the production Docker config for local development. If you're not going to dream big, why dream at all?!
1.0
The holy grail: develop locally with the production Docker config - We can do it! AMO is deployed to production with [Docker](https://github.com/mozilla-services/puppet-config/tree/master/amo/modules) and we develop locally with a completely separate and unrelated [Docker configuration](http://addons-server.readthedocs.io/en/latest/topics/install/docker.html). Let's adjust the production Docker config for local development. If you're not going to dream big, why dream at all?!
non_process
the holy grail develop locally with the production docker config we can do it amo is deployed to production with and we develop locally with a completely separate and unrelated let s adjust the production docker config for local development if you re not going to dream big why dream at all
0
1,551
4,155,934,524
IssuesEvent
2016-06-16 16:19:46
altoxml/schema
https://api.github.com/repos/altoxml/schema
closed
Vocabulary for ProcessingStepDescriptions
1 submitted processing history
One more from the wish list. The nature of common *ProcessingStep elements (layout analysis, any kind of postcorrection) is only incompletely captured by MIX's change history and seem often to be out of scope of the MIX schema. It would therefore be beneficial to define a (optional?) vocabulary of possible processingStepDescription attribute values to increase interoperability between data sources. Any comments?
1.0
Vocabulary for ProcessingStepDescriptions - One more from the wish list. The nature of common *ProcessingStep elements (layout analysis, any kind of postcorrection) is only incompletely captured by MIX's change history and seem often to be out of scope of the MIX schema. It would therefore be beneficial to define a (optional?) vocabulary of possible processingStepDescription attribute values to increase interoperability between data sources. Any comments?
process
vocabulary for processingstepdescriptions one more from the wish list the nature of common processingstep elements layout analysis any kind of postcorrection is only incompletely captured by mix s change history and seem often to be out of scope of the mix schema it would therefore be beneficial to define a optional vocabulary of possible processingstepdescription attribute values to increase interoperability between data sources any comments
1
16,246
20,796,855,375
IssuesEvent
2022-03-17 10:08:08
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Responsive issue > Search bar issue
Bug P2 Participant manager Process: Fixed Process: Tested dev
Responsive issue > Search bar issue 1. Customer logo is not getting displayed when admin clicks on the search bar 2. Search bar > after getting the searched data, the search bar with entered data is not getting displayed ![image](https://user-images.githubusercontent.com/71445210/134492090-4337f629-bbdb-4351-abf8-4939c13922da.png)
2.0
[PM] Responsive issue > Search bar issue - Responsive issue > Search bar issue 1. Customer logo is not getting displayed when admin clicks on the search bar 2. Search bar > after getting the searched data, the search bar with entered data is not getting displayed ![image](https://user-images.githubusercontent.com/71445210/134492090-4337f629-bbdb-4351-abf8-4939c13922da.png)
process
responsive issue search bar issue responsive issue search bar issue customer logo is not getting displayed when admin clicks on the search bar search bar after getting the searched data the search bar with entered data is not getting displayed
1
16,636
21,707,260,791
IssuesEvent
2022-05-10 10:45:56
sjmog/smartflix
https://api.github.com/repos/sjmog/smartflix
opened
Synchronous API connection
04-background-processing Ruby/HTTP Ruby/JSON
In the previous ticket, you implemented a tested `show` route to display more details about any show the user might click on. Now it's time to start enriching these requests with some data from the Open Movie Database API. In this challenge, we’ll use the `show` controller action to connect to the API and dump show data into the view as JSON immediately, using the [omdbapi.com](https://omdbapi.com). This action will connect to OMDb, fetch data for a given show, and parse the response data on-the-fly so every time a user visits a show they'll get data fetched from the 3rd party service. To fetch the data we’ll use an HTTP client library responsible for HTTP requests. > We're going to test all this in a moment. For simplicity's sake, let's proceed through this first bit without testing. ## To complete this challenge, you will have to: - [ ] Generate an OMDb API key in order to call the API. - [ ] Install an HTTP client library of your choice. - [ ] In the controller action for viewing a single show, set up a connection with the API and grab data about the viewed show from it. - [ ] Render the parsed JSON response to the browser, somewhere in the existing show page. - [ ] Make sure you handle the case where there's no show! ## Tips - If you want to read more about how to do HTTP requests in Ruby, check out [this article from Twilio](https://www.twilio.com/blog/5-ways-make-http-requests-ruby). - For inspiration in choosing an HTTP client, check out [Awesome Ruby](https://github.com/markets/awesome-ruby#http-clients-and-tools). You can additionally check whether the library has a built-in parse method or you need to use [JSON](https://ruby-doc.org/stdlib-2.7.5/libdoc/json/rdoc/JSON.html).
1.0
Synchronous API connection - In the previous ticket, you implemented a tested `show` route to display more details about any show the user might click on. Now it's time to start enriching these requests with some data from the Open Movie Database API. In this challenge, we’ll use the `show` controller action to connect to the API and dump show data into the view as JSON immediately, using the [omdbapi.com](https://omdbapi.com). This action will connect to OMDb, fetch data for a given show, and parse the response data on-the-fly so every time a user visits a show they'll get data fetched from the 3rd party service. To fetch the data we’ll use an HTTP client library responsible for HTTP requests. > We're going to test all this in a moment. For simplicity's sake, let's proceed through this first bit without testing. ## To complete this challenge, you will have to: - [ ] Generate an OMDb API key in order to call the API. - [ ] Install an HTTP client library of your choice. - [ ] In the controller action for viewing a single show, set up a connection with the API and grab data about the viewed show from it. - [ ] Render the parsed JSON response to the browser, somewhere in the existing show page. - [ ] Make sure you handle the case where there's no show! ## Tips - If you want to read more about how to do HTTP requests in Ruby, check out [this article from Twilio](https://www.twilio.com/blog/5-ways-make-http-requests-ruby). - For inspiration in choosing an HTTP client, check out [Awesome Ruby](https://github.com/markets/awesome-ruby#http-clients-and-tools). You can additionally check whether the library has a built-in parse method or you need to use [JSON](https://ruby-doc.org/stdlib-2.7.5/libdoc/json/rdoc/JSON.html).
process
synchronous api connection in the previous ticket you implemented a tested show route to display more details about any show the user might click on now it s time to start enriching these requests with some data from the open movie database api in this challenge we’ll use the show controller action to connect to the api and dump show data into the view as json immediately using the this action will connect to omdb fetch data for a given show and parse the response data on the fly so every time a user visits a show they ll get data fetched from the party service to fetch the data we’ll use an http client library responsible for http requests we re going to test all this in a moment for simplicity s sake let s proceed through this first bit without testing to complete this challenge you will have to generate an omdb api key in order to call the api install an http client library of your choice in the controller action for viewing a single show set up a connection with the api and grab data about the viewed show from it render the parsed json response to the browser somewhere in the existing show page make sure you handle the case where there s no show tips if you want to read more about how to do http requests in ruby check out for inspiration in choosing an http client check out you can additionally check whether the library has a built in parse method or you need to use
1
172,284
6,501,258,325
IssuesEvent
2017-08-23 08:55:53
aaronshappell/picabot
https://api.github.com/repos/aaronshappell/picabot
opened
prism transcoder error - Error: read ECONNRESET
Priority: low Type: bug
Will occasionally get `prism transcoder error - Error: read ECONNRESET`. It doesn't appear to hinder anything or have any affects at the moment.
1.0
prism transcoder error - Error: read ECONNRESET - Will occasionally get `prism transcoder error - Error: read ECONNRESET`. It doesn't appear to hinder anything or have any affects at the moment.
non_process
prism transcoder error error read econnreset will occasionally get prism transcoder error error read econnreset it doesn t appear to hinder anything or have any affects at the moment
0
16,179
20,625,876,147
IssuesEvent
2022-03-07 22:28:38
zotero/zotero
https://api.github.com/repos/zotero/zotero
opened
Allow annotations to be added to word processor documents directly
Word Processor Integration
https://forums.zotero.org/discussion/94889/zotero-beta-annotations-feature-request Would probably be trivial to do this as part of the Add Note dialog, but it would be impossible to find anything. Seems like you'd want to be able to browse/search by the parent item and then select a child annotation. If we add a button to the plugins to make it possible to insert the selected item from Zotero, that could also work, once annotations show in the items list.
1.0
Allow annotations to be added to word processor documents directly - https://forums.zotero.org/discussion/94889/zotero-beta-annotations-feature-request Would probably be trivial to do this as part of the Add Note dialog, but it would be impossible to find anything. Seems like you'd want to be able to browse/search by the parent item and then select a child annotation. If we add a button to the plugins to make it possible to insert the selected item from Zotero, that could also work, once annotations show in the items list.
process
allow annotations to be added to word processor documents directly would probably be trivial to do this as part of the add note dialog but it would be impossible to find anything seems like you d want to be able to browse search by the parent item and then select a child annotation if we add a button to the plugins to make it possible to insert the selected item from zotero that could also work once annotations show in the items list
1
742
3,214,327,501
IssuesEvent
2015-10-07 00:52:03
broadinstitute/hellbender-dataflow
https://api.github.com/repos/broadinstitute/hellbender-dataflow
opened
Support writing large BAM files in dataflow
Dataflow DataflowPreprocessingPipeline enhancement
_From @droazen on July 8, 2015 17:0_ Currently tools like `MarkDuplicatesDataflow` use the `SmallBamWriter` to write their output, which makes them unable to handle large BAM outputs. We should switch to using a sharded/large BAM writer, when one materializes. _Copied from original issue: broadinstitute/hellbender#621_
1.0
Support writing large BAM files in dataflow - _From @droazen on July 8, 2015 17:0_ Currently tools like `MarkDuplicatesDataflow` use the `SmallBamWriter` to write their output, which makes them unable to handle large BAM outputs. We should switch to using a sharded/large BAM writer, when one materializes. _Copied from original issue: broadinstitute/hellbender#621_
process
support writing large bam files in dataflow from droazen on july currently tools like markduplicatesdataflow use the smallbamwriter to write their output which makes them unable to handle large bam outputs we should switch to using a sharded large bam writer when one materializes copied from original issue broadinstitute hellbender
1
5,825
8,664,126,363
IssuesEvent
2018-11-28 19:16:23
nodejs/node
https://api.github.com/repos/nodejs/node
closed
I think child_process.fork() should officially support { detached: true }
child_process feature request good first issue
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v9.2.1 * **Platform**: Linux ip-172-31-29-251 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux * **Subsystem**: child_process <!-- Enter your issue details below this comment. --> `child_process.spawn()` supports an option [`detached`](https://nodejs.org/api/child_process.html#child_process_options_detached) which "makes it possible for the child process to continue running after the parent exits." `child_process.fork()` does not officially support `detached` as one of its options (by "officially support", I mean that it is not documented as a valid option); but I think it should be officially supported. My reasons: 1. It works today. (details below) 2. It's useful. (details below) 3. I can't think of any reason that it shouldn't be supported. If you agree, then no code changes would be required, but the documentation for `child_process.fork()` would need to list `detached` as a valid option. (Also, TypeScript's `@types/node` would need to be updated, but that would probably be a separate GitHub issue somewhere else.) **1. It works today:** First of all, if you look at the [current source](https://github.com/nodejs/node/blob/b1e6c0d44c075d8d3fee6c60fc92b90876700a30/lib/child_process.js#L54) for `child_process.fork()`, it's clear (and not surprising) that `fork()` is just a simple wrapper around `spawn()`. It passes most options through unchanged. To prove that `detached` works with `fork()`: save this as demo.js: ```js // launch with "node demo.js" or "node demo.js detached" const child_process = require('child_process') if (process.argv.indexOf('--daemon') === -1) { let options = {}; if (process.argv.indexOf('detached') >= 0) { options.detached = true; } const child = child_process.fork(__filename, ['--daemon'], options); console.log('hello from parent; press ^C to terminate parent') process.stdin.read() } else { console.log(`hello from child, my pid is ${process.pid}`) setInterval(() => {}, 5000) } ``` To see the NON-detached behavior, launch it with `node demo.js`. It will call `fork()`, so there are now two instances running. Then press ^C; if you do `ps aux | grep demo.js` you will see that both instances terminated. To see the detached behavior, repeat the above but with `node demo.js detached`. In this case, after ^C, the child process is still running. **2. It's useful:** `child_process.fork()` can be useful for starting daemon processes, and `detached` is certainly useful for daemons.
1.0
I think child_process.fork() should officially support { detached: true } - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v9.2.1 * **Platform**: Linux ip-172-31-29-251 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux * **Subsystem**: child_process <!-- Enter your issue details below this comment. --> `child_process.spawn()` supports an option [`detached`](https://nodejs.org/api/child_process.html#child_process_options_detached) which "makes it possible for the child process to continue running after the parent exits." `child_process.fork()` does not officially support `detached` as one of its options (by "officially support", I mean that it is not documented as a valid option); but I think it should be officially supported. My reasons: 1. It works today. (details below) 2. It's useful. (details below) 3. I can't think of any reason that it shouldn't be supported. If you agree, then no code changes would be required, but the documentation for `child_process.fork()` would need to list `detached` as a valid option. (Also, TypeScript's `@types/node` would need to be updated, but that would probably be a separate GitHub issue somewhere else.) **1. It works today:** First of all, if you look at the [current source](https://github.com/nodejs/node/blob/b1e6c0d44c075d8d3fee6c60fc92b90876700a30/lib/child_process.js#L54) for `child_process.fork()`, it's clear (and not surprising) that `fork()` is just a simple wrapper around `spawn()`. It passes most options through unchanged. To prove that `detached` works with `fork()`: save this as demo.js: ```js // launch with "node demo.js" or "node demo.js detached" const child_process = require('child_process') if (process.argv.indexOf('--daemon') === -1) { let options = {}; if (process.argv.indexOf('detached') >= 0) { options.detached = true; } const child = child_process.fork(__filename, ['--daemon'], options); console.log('hello from parent; press ^C to terminate parent') process.stdin.read() } else { console.log(`hello from child, my pid is ${process.pid}`) setInterval(() => {}, 5000) } ``` To see the NON-detached behavior, launch it with `node demo.js`. It will call `fork()`, so there are now two instances running. Then press ^C; if you do `ps aux | grep demo.js` you will see that both instances terminated. To see the detached behavior, repeat the above but with `node demo.js detached`. In this case, after ^C, the child process is still running. **2. It's useful:** `child_process.fork()` can be useful for starting daemon processes, and `detached` is certainly useful for daemons.
process
i think child process fork should officially support detached true thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform linux ip generic ubuntu smp wed oct utc gnu linux subsystem child process child process spawn supports an option which makes it possible for the child process to continue running after the parent exits child process fork does not officially support detached as one of its options by officially support i mean that it is not documented as a valid option but i think it should be officially supported my reasons it works today details below it s useful details below i can t think of any reason that it shouldn t be supported if you agree then no code changes would be required but the documentation for child process fork would need to list detached as a valid option also typescript s types node would need to be updated but that would probably be a separate github issue somewhere else it works today first of all if you look at the for child process fork it s clear and not surprising that fork is just a simple wrapper around spawn it passes most options through unchanged to prove that detached works with fork save this as demo js js launch with node demo js or node demo js detached const child process require child process if process argv indexof daemon let options if process argv indexof detached options detached true const child child process fork filename options console log hello from parent press c to terminate parent process stdin read else console log hello from child my pid is process pid setinterval to see the non detached behavior launch it with node demo js it will call fork so there are now two instances running then press c if you do ps aux grep demo js you will see that both instances terminated to see the detached behavior repeat the above but with node demo js detached in this case after c the child process is still running it s useful child process fork can be useful for starting daemon processes and detached is certainly useful for daemons
1
65,825
14,761,947,285
IssuesEvent
2021-01-09 01:06:31
rsoreq/zaproxy
https://api.github.com/repos/rsoreq/zaproxy
opened
CVE-2020-36187 (Medium) detected in jackson-databind-2.9.2.jar, jackson-databind-2.9.10.jar
security vulnerability
## CVE-2020-36187 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.2.jar</b>, <b>jackson-databind-2.9.10.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: zaproxy/buildSrc/build.gradle.kts</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar</p> <p> Dependency Hierarchy: - kotlin-reflect-1.3.72.jar (Root Library) - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.10.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: zaproxy</p> <p>Path to vulnerable library: /tmp/ws-ua_20200729112444_WCAEYA/downloadResource_JMENZF/20200729112922/jackson-databind-2.9.10.jar</p> <p> Dependency Hierarchy: - wiremock-jre8-2.25.1.jar (Root Library) - zjsonpatch-0.4.4.jar - :x: **jackson-databind-2.9.10.jar** (Vulnerable Library) </details> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187>CVE-2020-36187</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2997">https://github.com/FasterXML/jackson-databind/issues/2997</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","isTransitiveDependency":true,"dependencyTree":"org.jetbrains.kotlin:kotlin-reflect:1.3.72;com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10","isTransitiveDependency":true,"dependencyTree":"com.github.tomakehurst:wiremock-jre8:2.25.1;com.flipkart.zjsonpatch:zjsonpatch:0.4.4;com.fasterxml.jackson.core:jackson-databind:2.9.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"vulnerabilityIdentifier":"CVE-2020-36187","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187","cvss3Severity":"medium","cvss3Score":"4.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-36187 (Medium) detected in jackson-databind-2.9.2.jar, jackson-databind-2.9.10.jar - ## CVE-2020-36187 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.2.jar</b>, <b>jackson-databind-2.9.10.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: zaproxy/buildSrc/build.gradle.kts</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar</p> <p> Dependency Hierarchy: - kotlin-reflect-1.3.72.jar (Root Library) - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.10.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: zaproxy</p> <p>Path to vulnerable library: /tmp/ws-ua_20200729112444_WCAEYA/downloadResource_JMENZF/20200729112922/jackson-databind-2.9.10.jar</p> <p> Dependency Hierarchy: - wiremock-jre8-2.25.1.jar (Root Library) - zjsonpatch-0.4.4.jar - :x: **jackson-databind-2.9.10.jar** (Vulnerable Library) </details> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187>CVE-2020-36187</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2997">https://github.com/FasterXML/jackson-databind/issues/2997</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","isTransitiveDependency":true,"dependencyTree":"org.jetbrains.kotlin:kotlin-reflect:1.3.72;com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10","isTransitiveDependency":true,"dependencyTree":"com.github.tomakehurst:wiremock-jre8:2.25.1;com.flipkart.zjsonpatch:zjsonpatch:0.4.4;com.fasterxml.jackson.core:jackson-databind:2.9.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"vulnerabilityIdentifier":"CVE-2020-36187","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187","cvss3Severity":"medium","cvss3Score":"4.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in jackson databind jar jackson databind jar cve medium severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file zaproxy buildsrc build gradle kts path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy kotlin reflect jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file zaproxy path to vulnerable library tmp ws ua wcaeya downloadresource jmenzf jackson databind jar dependency hierarchy wiremock jar root library zjsonpatch jar x jackson databind jar vulnerable library found in base branch develop vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp datasources sharedpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp datasources sharedpooldatasource vulnerabilityurl
0
157,978
6,019,808,124
IssuesEvent
2017-06-07 15:12:26
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
opened
App Canvas | Enable a "per application" out of service mode
bug-medium-priority tech-improvement
Sometimes only one part of *Caseflow* will have an issue. But others are just fine, so we need a way to disable one applications, while others function: 1. Break Dispatch 1. Certification is fine 1. Get scared because we need to take Dispatch out of service 1. Not want to stop Certifications from happening 1. Get sad. - Tagged all the tech leads for visibility. ## Related Issue https://github.com/department-of-veterans-affairs/caseflow/issues/2211
1.0
App Canvas | Enable a "per application" out of service mode - Sometimes only one part of *Caseflow* will have an issue. But others are just fine, so we need a way to disable one application, while others function: 1. Break Dispatch 1. Certification is fine 1. Get scared because we need to take Dispatch out of service 1. Not want to stop Certifications from happening 1. Get sad. - Tagged all the tech leads for visibility. ## Related Issue https://github.com/department-of-veterans-affairs/caseflow/issues/2211
non_process
app canvas enable a per application out of service mode sometimes only one part of caseflow will have an issue but others are just fine so we need a way to disable one application while others function break dispatch certification is fine get scared because we need to take dispatch out of service not want to stop certifications from happening get sad tagged all the tech leads for visibility related issue
0
348,603
24,917,448,737
IssuesEvent
2022-10-30 15:17:34
Azure/ResourceModules
https://api.github.com/repos/Azure/ResourceModules
opened
[Wiki]: Default contributions to use token for `namePrefix`
documentation enhancement
### Description To simplify the contribution story, update documentation for [Getting started](https://github.com/Azure/ResourceModules/wiki/Getting%20started%20-%20Scenario%202%20Onboard%20module%20library%20and%20CI%20environment#31-update-default-nameprefix) to not suggest adding the variable to the `settings.yml` file. Instead, suggest staying with the variable.
1.0
[Wiki]: Default contributions to use token for `namePrefix` - ### Description To simplify the contribution story, update documentation for [Getting started](https://github.com/Azure/ResourceModules/wiki/Getting%20started%20-%20Scenario%202%20Onboard%20module%20library%20and%20CI%20environment#31-update-default-nameprefix) to not suggest adding the variable to the `settings.yml` file. Instead, suggest staying with the variable.
non_process
default contributions to use token for nameprefix description to simplify the contribution story update documentation for to not suggest adding the variable to the settings yml file instead suggest staying with the variable
0
65,126
7,857,511,059
IssuesEvent
2018-06-21 11:02:08
Albert221/crowdie-android
https://api.github.com/repos/Albert221/crowdie-android
opened
Members markers
design enhancement
Markers of users should look like an ordinary marker (maybe a little smaller) with a `face` icon inside instead of a darker dot. Marker's color should be randomly generated so that each member would have its own, unique color. This marker would be visible: - on the map *obviously* - on the members list, on the *start* of an item
1.0
Members markers - Markers of users should look like an ordinary marker (maybe a little smaller) with a `face` icon inside instead of a darker dot. Marker's color should be randomly generated so that each member would have its own, unique color. This marker would be visible: - on the map *obviously* - on the members list, on the *start* of an item
non_process
members markers markers of users should look like an ordinary marker maybe a little smaller with a face icon inside instead of a darker dot marker s color should be randomly generated so that each member would have its own unique color this marker would be visible on the map obviously on the members list on the start of an item
0
1,158
3,640,781,108
IssuesEvent
2016-02-13 04:25:49
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
Github issue management
Process Improvement
I started a quick wiki doc on what we track in github issues. Please take a look: http://github.com/triplea-game/triplea/wiki/TripleA-Github-Issue-Management Eventually this will be a good page to link to from our Readme to help people understand better how to submit feature requests, and how we organize/manage those tickets.
1.0
Github issue management - I started a quick wiki doc on what we track in github issues. Please take a look: http://github.com/triplea-game/triplea/wiki/TripleA-Github-Issue-Management Eventually this will be a good page to link to from our Readme to help people understand better how to submit feature requests, and how we organize/manage those tickets.
process
github issue management i started a quick wiki doc on what we track in github issues please take a look eventually this will be a good page to link to from our readme to help people understand better how to submit feature requests and how we organize manage those tickets
1
7,118
10,266,254,162
IssuesEvent
2019-08-22 20:56:34
automotive-edge-computing-consortium/AECC
https://api.github.com/repos/automotive-edge-computing-consortium/AECC
opened
Interlink between different technical solutions.
priority:High status:OnHold type:Process
Need to understand the interlink relationship among technical solutions defined in WG2 to check if the architecture fulfills the requirements written in the URD. Once FAD and TR 1.0 are created, this topic will be addressed in the next step.
1.0
Interlink between different technical solutions. - Need to understand the interlink relationship among technical solutions defined in WG2 to check if the architecture fulfills the requirements written in the URD. Once FAD and TR 1.0 are created, this topic will be addressed in the next step.
process
interlink between different technical solutions need to understand the interlink relationship among technical solutions defined in to check if the architecture fulfills the requirements written in the urd once fad and tr are created this topic will be addressed in the next step
1
125,237
4,954,663,107
IssuesEvent
2016-12-01 18:14:00
ag-csw/LDStreamHMMLearn
https://api.github.com/repos/ag-csw/LDStreamHMMLearn
opened
Handling data and parameters
less priority
In the long-term, the simulated data should be encapsulated in a data object, which also maintains a record of the parameters used in the simulations. In the current implementation, one must be careful to know what parameters where used for the simulation, and the array of data can easily get separated from these parameters.
1.0
Handling data and parameters - In the long-term, the simulated data should be encapsulated in a data object, which also maintains a record of the parameters used in the simulations. In the current implementation, one must be careful to know what parameters where used for the simulation, and the array of data can easily get separated from these parameters.
non_process
handling data and parameters in the long term the simulated data should be encapsulated in a data object which also maintains a record of the parameters used in the simulations in the current implementation one must be careful to know what parameters where used for the simulation and the array of data can easily get separated from these parameters
0
17,945
23,938,087,225
IssuesEvent
2022-09-11 14:18:01
OpenDataScotland/the_od_bods
https://api.github.com/repos/OpenDataScotland/the_od_bods
closed
Fix dataset owners in multi-org portals - CKAN
bug data processing back end
Some data portals are aggregated portals themselves meaning there are actually multiple owners but we have been operating on the assumption of a single portal owner. This issue is for the CKAN sources only.
1.0
Fix dataset owners in multi-org portals - CKAN - Some data portals are aggregated portals themselves meaning there are actually multiple owners but we have been operating on the assumption of a single portal owner. This issue is for the CKAN sources only.
process
fix dataset owners in multi org portals ckan some data portals are aggregated portals themselves meaning there are actually multiple owners but we have been operating on the assumption of a single portal owner this issue is for the ckan sources only
1
17,670
23,494,212,463
IssuesEvent
2022-08-17 22:17:58
benthosdev/benthos
https://api.github.com/repos/benthosdev/benthos
opened
Add support for parquet logical types to parquet_encode processor
enhancement processors
In some cases, users will need to specify the logical type in the `schema` field. Details here: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md For example, when using `type: BYTE_ARRAY` to encode a string value, they might want to set the logical type to `STRING` so decoders will be able to interpret it correctly. For example, given this config: ```yaml input: generate: mapping: root.test = "deadbeef" count: 1 interval: 0s pipeline: processors: - parquet_encode: schema: - name: test type: BYTE_ARRAY output: file: path: output.parquet codec: all-bytes ``` will produce a parquet binary which, when decoded with parquet-tools will contain a base64-encoded value: ```shell > docker run --rm -v$(pwd):/tmp/parquet nathanhowell/parquet-tools cat /tmp/parquet/output.parquet test = ZGVhZGJlZWY= ``` however, if we change [this](https://github.com/benthosdev/benthos/blob/ba4b1d13570756ac273774d1a2c4772fef18680a/internal/impl/parquet/processor_encode.go#L121) line of code to `n = parquet.String()`, then parquet-tools will output `test = deadbeef`.
1.0
Add support for parquet logical types to parquet_encode processor - In some cases, users will need to specify the logical type in the `schema` field. Details here: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md For example, when using `type: BYTE_ARRAY` to encode a string value, they might want to set the logical type to `STRING` so decoders will be able to interpret it correctly. For example, given this config: ```yaml input: generate: mapping: root.test = "deadbeef" count: 1 interval: 0s pipeline: processors: - parquet_encode: schema: - name: test type: BYTE_ARRAY output: file: path: output.parquet codec: all-bytes ``` will produce a parquet binary which, when decoded with parquet-tools will contain a base64-encoded value: ```shell > docker run --rm -v$(pwd):/tmp/parquet nathanhowell/parquet-tools cat /tmp/parquet/output.parquet test = ZGVhZGJlZWY= ``` however, if we change [this](https://github.com/benthosdev/benthos/blob/ba4b1d13570756ac273774d1a2c4772fef18680a/internal/impl/parquet/processor_encode.go#L121) line of code to `n = parquet.String()`, then parquet-tools will output `test = deadbeef`.
process
add support for parquet logical types to parquet encode processor in some cases users will need to specify the logical type in the schema field details here for example when using type byte array to encode a string value they might want to set the logical type to string so decoders will be able to interpret it correctly for example given this config yaml input generate mapping root test deadbeef count interval pipeline processors parquet encode schema name test type byte array output file path output parquet codec all bytes will produce a parquet binary which when decoded with parquet tools will contain a encoded value shell docker run rm v pwd tmp parquet nathanhowell parquet tools cat tmp parquet output parquet test zgvhzgjlzwy however if we change line of code to n parquet string then parquet tools will output test deadbeef
1
7,305
10,443,166,438
IssuesEvent
2019-09-18 14:24:05
threefoldtech/0-robot
https://api.github.com/repos/threefoldtech/0-robot
closed
Services dependency API
process_wontfix type_feature
Facts: A service can rely on other services types of relation : - parent child : a service create another one. (usually a high level service that wraps other services - utilization : a service use an action from another services In the case of a parent child relation. It is required for the parent services to keep track of how to reach the robots of all the services it has created. Currently this needs to be done manually by the template creator using the name of the parent service. The idea would be to provide an interface in the service template API that provide a way to keep track of the relations. Benefits: - unified way for developer to model services relations - allow to have some background routines making sure that all children are properly deleted in case some robots were offline during deletion of the parent service - automatic state update and self-healing : parent watch child states and trigger self-healing actions automatically to keep the application running. API: Info required to reach a service : - guid - robot URL, better if alias of node id - secret
1.0
Services dependency API - Facts: A service can rely on other services types of relation : - parent child : a service create another one. (usually a high level service that wraps other services - utilization : a service use an action from another services In the case of a parent child relation. It is required for the parent services to keep track of how to reach the robots of all the services it has created. Currently this needs to be done manually by the template creator using the name of the parent service. The idea would be to provide an interface in the service template API that provide a way to keep track of the relations. Benefits: - unified way for developer to model services relations - allow to have some background routines making sure that all children are properly deleted in case some robots were offline during deletion of the parent service - automatic state update and self-healing : parent watch child states and trigger self-healing actions automatically to keep the application running. API: Info required to reach a service : - guid - robot URL, better if alias of node id - secret
process
services dependency api facts a service can rely on other services types of relation parent child a service create another one usually a high level service that wraps other services utilization a service use an action from another services in the case of a parent child relation it is required for the parent services to keep track of how to reach the robots of all the services it has created currently this needs to be done manually by the template creator using the name of the parent service the idea would be to provide an interface in the service template api that provide a way to keep track of the relations benefits unified way for developer to model services relations allow to have some background routines making sure that all children are properly deleted in case some robots were offline during deletion of the parent service automatic state update and self healing parent watch child states and trigger self healing actions automatically to keep the application running api info required to reach a service guid robot url better if alias of node id secret
1
266,592
28,379,755,069
IssuesEvent
2023-04-13 01:23:12
Colafusion/uni-react-web-app-dev
https://api.github.com/repos/Colafusion/uni-react-web-app-dev
closed
CVE-2021-23440 (High) detected in set-value-2.0.1.tgz - autoclosed
Mend: dependency security vulnerability
## CVE-2021-23440 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>set-value-2.0.1.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - react-scripts-4.0.3.tgz (Root Library) - webpack-4.44.2.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - :x: **set-value-2.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Colafusion/uni-react-web-app-dev/commit/002f0374605af4b58e9d4b4207ae3751f1c153b9">002f0374605af4b58e9d4b4207ae3751f1c153b9</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays. Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440. 
<p>Publish Date: 2021-09-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2021-09-12</p> <p>Fix Resolution (set-value): 4.0.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23440 (High) detected in set-value-2.0.1.tgz - autoclosed - ## CVE-2021-23440 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>set-value-2.0.1.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - react-scripts-4.0.3.tgz (Root Library) - webpack-4.44.2.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - :x: **set-value-2.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Colafusion/uni-react-web-app-dev/commit/002f0374605af4b58e9d4b4207ae3751f1c153b9">002f0374605af4b58e9d4b4207ae3751f1c153b9</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays. Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440. 
<p>Publish Date: 2021-09-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2021-09-12</p> <p>Fix Resolution (set-value): 4.0.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in set value tgz autoclosed cve high severity vulnerability vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file package json path to vulnerable library node modules set value package json dependency hierarchy react scripts tgz root library webpack tgz micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package set value before a type confusion vulnerability can lead to a bypass of cve when the user provided keys used in the path parameter are arrays mend note after conducting further research mend has determined that all versions of set value up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution set value direct dependency fix resolution react scripts step up your open source security game with mend
0
18,622
24,579,561,932
IssuesEvent
2022-10-13 14:41:28
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Consent API] [Android] [IOS] Data Sharing screen shot
Feature request Process: Fixed Process: Tested QA Process: Tested dev
please add a `dataSharingScreenShot ` as a field and provide a data sharing screen shot details in `ConsentStatusBean `of "/updateEligibilityConsentStatus" API.
3.0
[Consent API] [Android] [IOS] Data Sharing screen shot - please add a `dataSharingScreenShot ` as a field and provide a data sharing screen shot details in `ConsentStatusBean `of "/updateEligibilityConsentStatus" API.
process
data sharing screen shot please add a datasharingscreenshot as a field and provide a data sharing screen shot details in consentstatusbean of updateeligibilityconsentstatus api
1
245,030
26,503,761,510
IssuesEvent
2023-01-18 12:20:23
rsoreq/kendo-ui-core
https://api.github.com/repos/rsoreq/kendo-ui-core
opened
CVE-2022-25901 (Medium) detected in cookiejar-1.3.0.tgz, cookiejar-2.1.2.tgz
security vulnerability
## CVE-2022-25901 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>cookiejar-1.3.0.tgz</b>, <b>cookiejar-2.1.2.tgz</b></p></summary> <p> <details><summary><b>cookiejar-1.3.0.tgz</b></p></summary> <p>simple persistent cookiejar system</p> <p>Library home page: <a href="https://registry.npmjs.org/cookiejar/-/cookiejar-1.3.0.tgz">https://registry.npmjs.org/cookiejar/-/cookiejar-1.3.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/cookiejar/package.json</p> <p> Dependency Hierarchy: - :x: **cookiejar-1.3.0.tgz** (Vulnerable Library) </details> <details><summary><b>cookiejar-2.1.2.tgz</b></p></summary> <p>simple persistent cookiejar system</p> <p>Library home page: <a href="https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.2.tgz">https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/faye/node_modules/cookiejar/package.json</p> <p> Dependency Hierarchy: - faye-0.8.3.tgz (Root Library) - :x: **cookiejar-2.1.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/rsoreq/kendo-ui-core/commit/62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad">62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of the package cookiejar before 2.1.4 are vulnerable to Regular Expression Denial of Service (ReDoS) via the Cookie.parse function, which uses an insecure regular expression. 
<p>Publish Date: 2023-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25901>CVE-2022-25901</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2023-01-18</p> <p>Fix Resolution: cookiejar - 2.1.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2022-25901 (Medium) detected in cookiejar-1.3.0.tgz, cookiejar-2.1.2.tgz - ## CVE-2022-25901 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>cookiejar-1.3.0.tgz</b>, <b>cookiejar-2.1.2.tgz</b></p></summary> <p> <details><summary><b>cookiejar-1.3.0.tgz</b></p></summary> <p>simple persistent cookiejar system</p> <p>Library home page: <a href="https://registry.npmjs.org/cookiejar/-/cookiejar-1.3.0.tgz">https://registry.npmjs.org/cookiejar/-/cookiejar-1.3.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/cookiejar/package.json</p> <p> Dependency Hierarchy: - :x: **cookiejar-1.3.0.tgz** (Vulnerable Library) </details> <details><summary><b>cookiejar-2.1.2.tgz</b></p></summary> <p>simple persistent cookiejar system</p> <p>Library home page: <a href="https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.2.tgz">https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/faye/node_modules/cookiejar/package.json</p> <p> Dependency Hierarchy: - faye-0.8.3.tgz (Root Library) - :x: **cookiejar-2.1.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/rsoreq/kendo-ui-core/commit/62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad">62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of the package cookiejar before 2.1.4 are vulnerable to Regular Expression Denial of Service (ReDoS) via the Cookie.parse function, which uses an insecure regular expression. 
<p>Publish Date: 2023-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25901>CVE-2022-25901</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2023-01-18</p> <p>Fix Resolution: cookiejar - 2.1.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_process
cve medium detected in cookiejar tgz cookiejar tgz cve medium severity vulnerability vulnerable libraries cookiejar tgz cookiejar tgz cookiejar tgz simple persistent cookiejar system library home page a href path to dependency file package json path to vulnerable library node modules cookiejar package json dependency hierarchy x cookiejar tgz vulnerable library cookiejar tgz simple persistent cookiejar system library home page a href path to dependency file package json path to vulnerable library node modules faye node modules cookiejar package json dependency hierarchy faye tgz root library x cookiejar tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of the package cookiejar before are vulnerable to regular expression denial of service redos via the cookie parse function which uses an insecure regular expression publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution cookiejar check this box to open an automated fix pr
0
18,473
24,550,672,026
IssuesEvent
2022-10-12 12:22:23
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Dashboard, Locations, Admins and My account tabs are not getting displayed in the participant manager
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
Dashboard, Locations, Admins and My account tabs are not getting displayed in the participant manager ![BlockerPM](https://user-images.githubusercontent.com/86007179/184888884-5fe75d50-1edd-45f3-a676-c744884fc4e6.png)
3.0
[PM] Dashboard, Locations, Admins and My account tabs are not getting displayed in the participant manager - Dashboard, Locations, Admins and My account tabs are not getting displayed in the participant manager ![BlockerPM](https://user-images.githubusercontent.com/86007179/184888884-5fe75d50-1edd-45f3-a676-c744884fc4e6.png)
process
dashboard locations admins and my account tabs are not getting displayed in the participant manager dashboard locations admins and my account tabs are not getting displayed in the participant manager
1
154,161
19,710,802,148
IssuesEvent
2022-01-13 04:57:22
ChoeMinji/react-17.0.2
https://api.github.com/repos/ChoeMinji/react-17.0.2
opened
WS-2019-0541 (High) detected in macaddress-0.2.8.tgz
security vulnerability
## WS-2019-0541 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>macaddress-0.2.8.tgz</b></p></summary> <p>Get the MAC addresses (hardware addresses) of the hosts network interfaces.</p> <p>Library home page: <a href="https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz">https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz</a></p> <p>Path to dependency file: /fixtures/fiber-debugger/package.json</p> <p>Path to vulnerable library: /fixtures/fiber-debugger/node_modules/macaddress/package.json</p> <p> Dependency Hierarchy: - react-scripts-1.1.4.tgz (Root Library) - css-loader-0.28.7.tgz - cssnano-3.10.0.tgz - postcss-filter-plugins-2.0.2.tgz - uniqid-4.1.1.tgz - :x: **macaddress-0.2.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Arbitrary File Read vulnerability was found in macaddress before 0.4.3. 
<p>Publish Date: 2019-08-20 <p>URL: <a href=https://github.com/scravy/node-macaddress/commit/ca9e24df906c9066d49fba658e35ce44584552c7>WS-2019-0541</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/scravy/node-macaddress/releases/tag/0.4.3">https://github.com/scravy/node-macaddress/releases/tag/0.4.3</a></p> <p>Release Date: 2019-08-20</p> <p>Fix Resolution: macaddress - 0.4.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0541 (High) detected in macaddress-0.2.8.tgz - ## WS-2019-0541 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>macaddress-0.2.8.tgz</b></p></summary> <p>Get the MAC addresses (hardware addresses) of the hosts network interfaces.</p> <p>Library home page: <a href="https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz">https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz</a></p> <p>Path to dependency file: /fixtures/fiber-debugger/package.json</p> <p>Path to vulnerable library: /fixtures/fiber-debugger/node_modules/macaddress/package.json</p> <p> Dependency Hierarchy: - react-scripts-1.1.4.tgz (Root Library) - css-loader-0.28.7.tgz - cssnano-3.10.0.tgz - postcss-filter-plugins-2.0.2.tgz - uniqid-4.1.1.tgz - :x: **macaddress-0.2.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Arbitrary File Read vulnerability was found in macaddress before 0.4.3. 
<p>Publish Date: 2019-08-20 <p>URL: <a href=https://github.com/scravy/node-macaddress/commit/ca9e24df906c9066d49fba658e35ce44584552c7>WS-2019-0541</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/scravy/node-macaddress/releases/tag/0.4.3">https://github.com/scravy/node-macaddress/releases/tag/0.4.3</a></p> <p>Release Date: 2019-08-20</p> <p>Fix Resolution: macaddress - 0.4.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws high detected in macaddress tgz ws high severity vulnerability vulnerable library macaddress tgz get the mac addresses hardware addresses of the hosts network interfaces library home page a href path to dependency file fixtures fiber debugger package json path to vulnerable library fixtures fiber debugger node modules macaddress package json dependency hierarchy react scripts tgz root library css loader tgz cssnano tgz postcss filter plugins tgz uniqid tgz x macaddress tgz vulnerable library found in head commit a href found in base branch master vulnerability details arbitrary file read vulnerability was found in macaddress before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution macaddress step up your open source security game with whitesource
0
245,614
7,888,577,619
IssuesEvent
2018-06-27 22:44:38
Marri/glowfic
https://api.github.com/repos/Marri/glowfic
opened
Increase the height of the character selector dropdown by half an item
3. medium priority 8. medium type: bug
On some platforms, the scrollbar for the character selector doesn't show. If we can modify the height of the select2 control, we should increase the height of the box by half an item. This might be problematic since the text in items wraps and could mess with the relevant heights. Example of a failure to scroll (definitely has more items below this set): ![](https://cdn.discordapp.com/attachments/380853614102577152/461661595546681364/image.png)
1.0
Increase the height of the character selector dropdown by half an item - On some platforms, the scrollbar for the character selector doesn't show. If we can modify the height of the select2 control, we should increase the height of the box by half an item. This might be problematic since the text in items wraps and could mess with the relevant heights. Example of a failure to scroll (definitely has more items below this set): ![](https://cdn.discordapp.com/attachments/380853614102577152/461661595546681364/image.png)
non_process
increase the height of the character selector dropdown by half an item on some platforms the scrollbar for the character selector doesn t show if we can modify the height of the control we should increase the height of the box by half an item this might be problematic since the text in items wraps and could mess with the relevant heights example of a failure to scroll definitely has more items below this set
0
259,140
22,393,242,483
IssuesEvent
2022-06-17 09:47:02
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
opened
[CI] DocsClientYamlTestSuiteIT test {yaml=reference/transform/apis/get-transform-stats/line_275} failing
>test-failure :ml/Transform
**Build scan:** https://gradle-enterprise.elastic.co/s/7t6tvecxnp4ka/tests/:docs:yamlRestTest/org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT/test%20%7Byaml=reference%2Ftransform%2Fapis%2Fget-transform-stats%2Fline_275%7D **Reproduction line:** `./gradlew ':docs:yamlRestTest' --tests "org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT.test {yaml=reference/transform/apis/get-transform-stats/line_275}" -Dtests.seed=12EFD88F87A1877A -Dbuild.snapshot=false -Dtests.jvm.argline="-Dbuild.snapshot=false" -Dtests.locale=en -Dtests.timezone=America/Blanc-Sablon -Druntime.java=17` **Applicable branches:** master **Reproduces locally?:** No **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT&tests.test=test%20%7Byaml%3Dreference/transform/apis/get-transform-stats/line_275%7D **Failure excerpt:** ``` org.junit.AssumptionViolatedException: [reference/transform/apis/get-transform-stats/line_275] skipped, reason: [todo] unsupported features [default_shards, stash_in_key, stash_in_path, stash_path_replace, warnings, always_skip] at com.carrotsearch.randomizedtesting.RandomizedTest.assumeTrue(RandomizedTest.java:744) at com.carrotsearch.randomizedtesting.RandomizedTest.assumeFalse(RandomizedTest.java:752) at org.apache.lucene.tests.util.LuceneTestCase.assumeFalse(LuceneTestCase.java:898) at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:433) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.lang.Thread.run(Thread.java:833) ```
1.0
[CI] DocsClientYamlTestSuiteIT test {yaml=reference/transform/apis/get-transform-stats/line_275} failing - **Build scan:** https://gradle-enterprise.elastic.co/s/7t6tvecxnp4ka/tests/:docs:yamlRestTest/org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT/test%20%7Byaml=reference%2Ftransform%2Fapis%2Fget-transform-stats%2Fline_275%7D **Reproduction line:** `./gradlew ':docs:yamlRestTest' --tests "org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT.test {yaml=reference/transform/apis/get-transform-stats/line_275}" -Dtests.seed=12EFD88F87A1877A -Dbuild.snapshot=false -Dtests.jvm.argline="-Dbuild.snapshot=false" -Dtests.locale=en -Dtests.timezone=America/Blanc-Sablon -Druntime.java=17` **Applicable branches:** master **Reproduces locally?:** No **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT&tests.test=test%20%7Byaml%3Dreference/transform/apis/get-transform-stats/line_275%7D **Failure excerpt:** ``` org.junit.AssumptionViolatedException: [reference/transform/apis/get-transform-stats/line_275] skipped, reason: [todo] unsupported features [default_shards, stash_in_key, stash_in_path, stash_path_replace, warnings, always_skip] at com.carrotsearch.randomizedtesting.RandomizedTest.assumeTrue(RandomizedTest.java:744) at com.carrotsearch.randomizedtesting.RandomizedTest.assumeFalse(RandomizedTest.java:752) at org.apache.lucene.tests.util.LuceneTestCase.assumeFalse(LuceneTestCase.java:898) at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:433) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:568) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.lang.Thread.run(Thread.java:833) ```
non_process
docsclientyamltestsuiteit test yaml reference transform apis get transform stats line failing build scan reproduction line gradlew docs yamlresttest tests org elasticsearch smoketest docsclientyamltestsuiteit test yaml reference transform apis get transform stats line dtests seed dbuild snapshot false dtests jvm argline dbuild snapshot false dtests locale en dtests timezone america blanc sablon druntime java applicable branches master reproduces locally no failure history failure excerpt org junit assumptionviolatedexception skipped reason unsupported features at com carrotsearch randomizedtesting randomizedtest assumetrue randomizedtest java at com carrotsearch randomizedtesting randomizedtest assumefalse randomizedtest java at org apache lucene tests util lucenetestcase assumefalse lucenetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util 
testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java 
at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
0
1,182
3,682,105,819
IssuesEvent
2016-02-24 08:05:42
BlesseNtumble/GalaxySpace
https://api.github.com/repos/BlesseNtumble/GalaxySpace
closed
Player movement check is wrong
bug in the process of correcting
So, it seems that in the 1.0.7 update, you (BlesseNTumble) changed something about the player movement code or similar. At first, i thought some other mod caused this problem, but now (because of another problem i had, that was my fault) that i backdated to 1.0.6.1, it works fine. Basically, when i move to fast with a Modular Powersuit Armor or any other type of armor or entity or such, i am being stopped mid air, and the console then tells me that i "moved to quickly". I had to backdate GalaxySpace due to this, so it would be nice to have this fixed as soon as possible.
1.0
Player movement check is wrong - So, it seems that in the 1.0.7 update, you (BlesseNTumble) changed something about the player movement code or similar. At first, i thought some other mod caused this problem, but now (because of another problem i had, that was my fault) that i backdated to 1.0.6.1, it works fine. Basically, when i move to fast with a Modular Powersuit Armor or any other type of armor or entity or such, i am being stopped mid air, and the console then tells me that i "moved to quickly". I had to backdate GalaxySpace due to this, so it would be nice to have this fixed as soon as possible.
process
player movement check is wrong so it seems that in the update you blessentumble changed something about the player movement code or similar at first i thought some other mod caused this problem but now because of another problem i had that was my fault that i backdated to it works fine basically when i move to fast with a modular powersuit armor or any other type of armor or entity or such i am being stopped mid air and the console then tells me that i moved to quickly i had to backdate galaxyspace due to this so it would be nice to have this fixed as soon as possible
1
287,535
31,843,713,064
IssuesEvent
2023-09-14 18:13:19
sajwanm/NodeGoat
https://api.github.com/repos/sajwanm/NodeGoat
opened
swig-1.4.2.tgz: 1 vulnerabilities (highest severity is: 7.5)
Mend: dependency security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>swig-1.4.2.tgz</b></p></summary> <p></p> <p> <p>Found in HEAD commit: <a href="https://github.com/sajwanm/NodeGoat/commit/11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5">11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (swig version) | Remediation Possible** | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2015-8858](https://www.mend.io/vulnerability-database/CVE-2015-8858) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | uglify-js-2.4.24.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p><p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' 
width=19 height=20> CVE-2015-8858</summary> ### Vulnerable Library - <b>uglify-js-2.4.24.tgz</b></p> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz</a></p> <p> Dependency Hierarchy: - swig-1.4.2.tgz (Root Library) - :x: **uglify-js-2.4.24.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sajwanm/NodeGoat/commit/11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5">11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)." <p>Publish Date: 2017-01-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8858>CVE-2015-8858</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p> <p>Release Date: 2017-01-23</p> <p>Fix Resolution: v2.6.0</p> </p> <p></p> </details>
True
swig-1.4.2.tgz: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>swig-1.4.2.tgz</b></p></summary> <p></p> <p> <p>Found in HEAD commit: <a href="https://github.com/sajwanm/NodeGoat/commit/11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5">11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (swig version) | Remediation Possible** | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2015-8858](https://www.mend.io/vulnerability-database/CVE-2015-8858) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | uglify-js-2.4.24.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p><p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' 
width=19 height=20> CVE-2015-8858</summary> ### Vulnerable Library - <b>uglify-js-2.4.24.tgz</b></p> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz</a></p> <p> Dependency Hierarchy: - swig-1.4.2.tgz (Root Library) - :x: **uglify-js-2.4.24.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sajwanm/NodeGoat/commit/11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5">11eb79c30e01d4d81b52acf79cd66afa7f5b7bf5</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)." <p>Publish Date: 2017-01-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8858>CVE-2015-8858</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p> <p>Release Date: 2017-01-23</p> <p>Fix Resolution: v2.6.0</p> </p> <p></p> </details>
non_process
swig tgz vulnerabilities highest severity is vulnerable library swig tgz found in head commit a href vulnerabilities cve severity cvss dependency type fixed in swig version remediation possible high uglify js tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed in some cases remediation pr cannot be created automatically for a vulnerability despite the availability of remediation details cve vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href dependency hierarchy swig tgz root library x uglify js tgz vulnerable library found in head commit a href found in base branch main vulnerability details the uglify js package before for node js allows attackers to cause a denial of service cpu consumption via crafted input in a parse call aka a regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
4,581
7,416,239,910
IssuesEvent
2018-03-22 00:05:09
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Cannot find the value of the field AzureADSTSEndpoint
cxp doc-bug in-process media-services triaged
Hi all, I attempted to build a streaming service that users can upload/access media files (and I want to do this via `REST`) Then I followed the tutorial https://docs.microsoft.com/en-us/azure/media-services/media-services-rest-upload-files And I tried to upload the file via postman. The section "Add connection values to your environment" states that user should fill the environment values before uploading a file. The problem I'm facing is that how can I get the value `AzureADSTSEndpoint`? Besides, in the same section, it does not specify the path of file `BigBuckBunny.mp4`. How can you upload the file with postman? In addition, is there any other docs I have to study if I want to achieve the goal as I mentioned above? Please give me some suggestion. Thanks in advance.
1.0
Cannot find the value of the field AzureADSTSEndpoint - Hi all, I attempted to build a streaming service that users can upload/access media files (and I want to do this via `REST`) Then I followed the tutorial https://docs.microsoft.com/en-us/azure/media-services/media-services-rest-upload-files And I tried to upload the file via postman. The section "Add connection values to your environment" states that user should fill the environment values before uploading a file. The problem I'm facing is that how can I get the value `AzureADSTSEndpoint`? Besides, in the same section, it does not specify the path of file `BigBuckBunny.mp4`. How can you upload the file with postman? In addition, is there any other docs I have to study if I want to achieve the goal as I mentioned above? Please give me some suggestion. Thanks in advance.
process
cannot find the value of the field azureadstsendpoint hi all i attempted to build a streaming service that users can upload access media files and i want to do this via rest then i followed the tutorial and i tried to upload the file via postman the section add connection values to your environment states that user should fill the environment values before uploading a file the problem i m facing is that how can i get the value azureadstsendpoint besides in the same section it does not specify the path of file bigbuckbunny how can you upload the file with postman in addition is there any other docs i have to study if i want to achieve the goal as i mentioned above please give me some suggestion thanks in advance
1
216,991
7,313,627,654
IssuesEvent
2018-03-01 02:10:01
wevote/WeVoteServer
https://api.github.com/repos/wevote/WeVoteServer
closed
Import Ballots: Import Data Into Ballot Item table
Priority 1
Update the "batch_action_list_create_or_update_process" for IMPORT_BALLOT_ITEM WeVoteServer/import_export_batches/views_admin.py Search for this function: batch_action_list_create_or_update_process_view You will need to add "IMPORT_BALLOT_ITEM" specific code to "import_data_from_batch_row_actions" in WeVoteServer/import_export_batches/controllers.py
1.0
Import Ballots: Import Data Into Ballot Item table - Update the "batch_action_list_create_or_update_process" for IMPORT_BALLOT_ITEM WeVoteServer/import_export_batches/views_admin.py Search for this function: batch_action_list_create_or_update_process_view You will need to add "IMPORT_BALLOT_ITEM" specific code to "import_data_from_batch_row_actions" in WeVoteServer/import_export_batches/controllers.py
non_process
import ballots import data into ballot item table update the batch action list create or update process for import ballot item wevoteserver import export batches views admin py search for this function batch action list create or update process view you will need to add import ballot item specific code to import data from batch row actions in wevoteserver import export batches controllers py
0
14,425
17,475,233,843
IssuesEvent
2021-08-08 01:46:20
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
dyld: Symbol not found: __cg_jpeg_resync_to_restart
Feedback stale Processing Bug MacOS
On Mac OS High Sierra I get the following error message using GDAL with QGIS commands, e.g. 'Clip raster by extent' ``` QGIS version: 3.16.3-Hannover QGIS code revision: 3a90ea3afd Qt version: 5.14.2 GDAL version: 3.1.2 GEOS version: 3.8.1-CAPI-1.13.3 PROJ version: Rel. 6.3.2, May 1st, 2020 Processing algorithm… Algorithm 'Clip raster by extent' starting… Input parameters: { 'DATA_TYPE' : 0, 'EXTRA' : '', 'INPUT' : '/Users/chrischi/Dropbox/kurse/Kartographie_und_GIS/2021/sandbox/aufgabe02/raster/tiff/TopoKarteSW.tif', 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'PROJWIN' : '1636709.000000000,1641686.750000000,5087358.000000000,5092501.000000000 [EPSG:3003]' } GDAL command: gdal_translate -projwin 1636709.0 5092501.0 1641686.75 5087358.0 -of GTiff /Users/chrischi/Dropbox/kurse/Kartographie_und_GIS/2021/sandbox/aufgabe02/raster/tiff/TopoKarteSW.tif /private/var/folders/mz/j9974jw56px6n25k2__07h3c0000gn/T/processing_qodvhO/80b85f59d7cb42429f753991c128489c/OUTPUT.tif GDAL command output: dyld: Symbol not found: __cg_jpeg_resync_to_restart Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO Expected in: /Applications/QGIS.app/Contents/MacOS/lib/libjpeg.9.dylib in /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO /Applications/QGIS.app/Contents/MacOS/bin/run_gdal_binary.bash: line 13: 72313 Abort trap: 6 "$THISDIR/_$ALGNAME" "$@" Execution completed in 0.24 seconds Results: {'OUTPUT': '/private/var/folders/mz/j9974jw56px6n25k2__07h3c0000gn/T/processing_qodvhO/80b85f59d7cb42429f753991c128489c/OUTPUT.tif'} Loading resulting layers The following layers were not correctly generated. • /private/var/folders/mz/j9974jw56px6n25k2__07h3c0000gn/T/processing_qodvhO/80b85f59d7cb42429f753991c128489c/OUTPUT.tif You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm. 
``` Googling around I found no such error descriptions concerning QGIS, but a lot of similar problems. Most of them are pointing out, that it could be always a problem to set the DYLD_LIBRARY_PATH. This variable is set under Settings -> Options -> Systems -> Environment with the value "/Applications/QGIS.app/Contents/MacOS/lib" and it is not possible for me to change this value. Unsetting the variable "DYLD_LIBRARY_PATH" seems to have no effect.
1.0
dyld: Symbol not found: __cg_jpeg_resync_to_restart - On Mac OS High Sierra I get the following error message using GDAL with QGIS commands, e.g. 'Clip raster by extent' ``` QGIS version: 3.16.3-Hannover QGIS code revision: 3a90ea3afd Qt version: 5.14.2 GDAL version: 3.1.2 GEOS version: 3.8.1-CAPI-1.13.3 PROJ version: Rel. 6.3.2, May 1st, 2020 Processing algorithm… Algorithm 'Clip raster by extent' starting… Input parameters: { 'DATA_TYPE' : 0, 'EXTRA' : '', 'INPUT' : '/Users/chrischi/Dropbox/kurse/Kartographie_und_GIS/2021/sandbox/aufgabe02/raster/tiff/TopoKarteSW.tif', 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'PROJWIN' : '1636709.000000000,1641686.750000000,5087358.000000000,5092501.000000000 [EPSG:3003]' } GDAL command: gdal_translate -projwin 1636709.0 5092501.0 1641686.75 5087358.0 -of GTiff /Users/chrischi/Dropbox/kurse/Kartographie_und_GIS/2021/sandbox/aufgabe02/raster/tiff/TopoKarteSW.tif /private/var/folders/mz/j9974jw56px6n25k2__07h3c0000gn/T/processing_qodvhO/80b85f59d7cb42429f753991c128489c/OUTPUT.tif GDAL command output: dyld: Symbol not found: __cg_jpeg_resync_to_restart Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO Expected in: /Applications/QGIS.app/Contents/MacOS/lib/libjpeg.9.dylib in /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO /Applications/QGIS.app/Contents/MacOS/bin/run_gdal_binary.bash: line 13: 72313 Abort trap: 6 "$THISDIR/_$ALGNAME" "$@" Execution completed in 0.24 seconds Results: {'OUTPUT': '/private/var/folders/mz/j9974jw56px6n25k2__07h3c0000gn/T/processing_qodvhO/80b85f59d7cb42429f753991c128489c/OUTPUT.tif'} Loading resulting layers The following layers were not correctly generated. • /private/var/folders/mz/j9974jw56px6n25k2__07h3c0000gn/T/processing_qodvhO/80b85f59d7cb42429f753991c128489c/OUTPUT.tif You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm. 
``` Googling around I found no such error descriptions concerning QGIS, but a lot of similar problems. Most of them are pointing out, that it could be always a problem to set the DYLD_LIBRARY_PATH. This variable is set under Settings -> Options -> Systems -> Environment with the value "/Applications/QGIS.app/Contents/MacOS/lib" and it is not possible for me to change this value. Unsetting the variable "DYLD_LIBRARY_PATH" seems to have no effect.
process
dyld symbol not found cg jpeg resync to restart on mac os high sierra i get the following error message using gdal with qgis commands e g clip raster by extent qgis version hannover qgis code revision qt version gdal version geos version capi proj version rel may processing algorithm… algorithm clip raster by extent starting… input parameters data type extra input users chrischi dropbox kurse kartographie und gis sandbox raster tiff topokartesw tif nodata none options output temporary output projwin gdal command gdal translate projwin of gtiff users chrischi dropbox kurse kartographie und gis sandbox raster tiff topokartesw tif private var folders mz t processing qodvho output tif gdal command output dyld symbol not found cg jpeg resync to restart referenced from system library frameworks imageio framework versions a imageio expected in applications qgis app contents macos lib libjpeg dylib in system library frameworks imageio framework versions a imageio applications qgis app contents macos bin run gdal binary bash line abort trap thisdir algname execution completed in seconds results output private var folders mz t processing qodvho output tif loading resulting layers the following layers were not correctly generated • private var folders mz t processing qodvho output tif you can check the log messages panel in qgis main window to find more information about the execution of the algorithm googling around i found no such error descriptions concerning qgis but a lot of similar problems most of them are pointing out that it could be always a problem to set the dyld library path this variable is set under settings options systems environment with the value applications qgis app contents macos lib and it is not possible for me to change this value unsetting the variable dyld library path seems to have no effect
1
127,948
18,024,746,313
IssuesEvent
2021-09-17 01:58:27
uniquelyparticular/serverless-oauth
https://api.github.com/repos/uniquelyparticular/serverless-oauth
opened
CVE-2021-3795 (Medium) detected in semver-regex-2.0.0.tgz
security vulnerability
## CVE-2021-3795 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-2.0.0.tgz</b></p></summary> <p>Regular expression for matching semver versions</p> <p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz</a></p> <p>Path to dependency file: /generic-oauth/package.json</p> <p>Path to vulnerable library: /tmp/git/generic-oauth/node_modules/semver-regex/package.json</p> <p> Dependency Hierarchy: - semantic-release-15.13.14.tgz (Root Library) - find-versions-3.1.0.tgz - :x: **semver-regex-2.0.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> semver-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p> <p>Release Date: 2021-09-15</p> <p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3795 (Medium) detected in semver-regex-2.0.0.tgz - ## CVE-2021-3795 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-2.0.0.tgz</b></p></summary> <p>Regular expression for matching semver versions</p> <p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz</a></p> <p>Path to dependency file: /generic-oauth/package.json</p> <p>Path to vulnerable library: /tmp/git/generic-oauth/node_modules/semver-regex/package.json</p> <p> Dependency Hierarchy: - semantic-release-15.13.14.tgz (Root Library) - find-versions-3.1.0.tgz - :x: **semver-regex-2.0.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> semver-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p> <p>Release Date: 2021-09-15</p> <p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in semver regex tgz cve medium severity vulnerability vulnerable library semver regex tgz regular expression for matching semver versions library home page a href path to dependency file generic oauth package json path to vulnerable library tmp git generic oauth node modules semver regex package json dependency hierarchy semantic release tgz root library find versions tgz x semver regex tgz vulnerable library vulnerability details semver regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution semver regex step up your open source security game with whitesource
0
19,387
25,523,644,169
IssuesEvent
2022-11-28 23:06:10
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
closed
Exception raised in `WorkChain.out` will cause it to be stuck in `Running`
type/bug type/duplicate topic/workflows priority/important topic/processes
This appears when trying to output an unstored data node in a workchain step for example. The exception will appear in the daemon log, but the process will never properly transition to the excepted state. Triggerable with: ``` from aiida.engine import workfunction from aiida.orm import Dict @workfunction def test(): return Dict(dict={'a': 1}) ``` causing ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-9-fbd55f77ab7c> in <module>() ----> 1 test() /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/functions.pyc in decorated_function(*args, **kwargs) 195 def decorated_function(*args, **kwargs): 196 """This wrapper function is the actual function that is called.""" --> 197 result, _ = run_get_node(*args, **kwargs) 198 return result 199 /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/functions.pyc in run_get_node(*args, **kwargs) 167 168 try: --> 169 result = process.execute() 170 finally: 171 # If the `original_handler` is set, that means the `kill_process` was bound, which needs to be reset /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/functions.pyc in execute(self) 377 def execute(self): 378 """Execute the process.""" --> 379 result = super(FunctionProcess, self).execute() 380 381 # FunctionProcesses can return a single value as output, and not a dictionary, so we should also return that /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in func_wrapper(self, *args, **kwargs) 86 if self._closed: 87 raise exceptions.ClosedError("Process is closed") ---> 88 return func(self, *args, **kwargs) 89 90 return func_wrapper /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in execute(self) 1061 """ 1062 if not self.has_terminated(): -> 1063 self.loop().run_sync(self.step_until_terminated) 
1064 1065 return self.future().result() /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/ioloop.pyc in run_sync(self, func, timeout) 456 if not future_cell[0].done(): 457 raise TimeoutError('Operation timed out after %s seconds' % timeout) --> 458 return future_cell[0].result() 459 460 def time(self): /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout) 236 if self._exc_info is not None: 237 try: --> 238 raise_exc_info(self._exc_info) 239 finally: 240 self = None /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self) 1061 if exc_info is not None: 1062 try: -> 1063 yielded = self.gen.throw(*exc_info) 1064 finally: 1065 # Break up a reference to itself /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step_until_terminated(self) 1109 def step_until_terminated(self): 1110 while not self.has_terminated(): -> 1111 yield self.step() 1112 1113 # endregion /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self) 1053 1054 try: -> 1055 value = future.result() 1056 except Exception: 1057 self.had_exception = True /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout) 236 if self._exc_info is not None: 237 try: --> 238 raise_exc_info(self._exc_info) 239 finally: 240 self = None /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self) 1067 exc_info = None 1068 else: -> 1069 yielded = self.gen.send(value) 1070 1071 if stack_context._state.contexts is not orig_stack_contexts: /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step(self) 1100 else: 1101 # Everything nominal so transition to the next state -> 1102 self.transition_to(next_state) 1103 1104 finally: 
/home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs) 324 raise 325 self._transition_failing = True --> 326 self.transition_failed(initial_state_label, label, *sys.exc_info()[1:]) 327 finally: 328 self._transition_failing = False /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_failed(self, initial_state, final_state, exception, trace) 337 :type exception: :class:`Exception` 338 """ --> 339 six.reraise(type(exception), exception, trace) 340 341 def get_debug(self): /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs) 308 309 try: --> 310 self._enter_next_state(new_state) 311 except StateEntryFailed as exception: 312 new_state = exception.state /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _enter_next_state(self, next_state) 372 next_state.do_enter() 373 self._state = next_state --> 374 self._fire_state_event(StateEventHook.ENTERED_STATE, last_state) 375 376 def _create_state_instance(self, state, *args, **kwargs): /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _fire_state_event(self, hook, state) 286 def _fire_state_event(self, hook, state): 287 for callback in self._event_callbacks.get(hook, []): --> 288 callback(self, hook, state) 289 290 @super_check /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in <lambda>(_s, _h, from_state) 306 lambda _s, _h, state: self.on_entering(state)) 307 self.add_state_event_callback(state_machine.StateEventHook.ENTERED_STATE, --> 308 lambda _s, _h, from_state: self.on_entered(from_state)) 309 self.add_state_event_callback(state_machine.StateEventHook.EXITING_STATE, 310 lambda _s, _h, _state: self.on_exiting()) 
/home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/process.pyc in on_entered(self, from_state) 342 # pylint: disable=cyclic-import 343 from aiida.engine.utils import set_process_state_change_timestamp --> 344 self.update_node_state(self._state) 345 self._save_checkpoint() 346 # Update the latest process state change timestamp /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/process.pyc in update_node_state(self, state) 579 580 def update_node_state(self, state): --> 581 self.update_outputs() 582 self.node.set_process_state(state.LABEL) 583 /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/process.pyc in update_outputs(self) 602 output.add_incoming(self.node, LinkType.CREATE, link_label) 603 elif isinstance(self.node, orm.WorkflowNode): --> 604 output.add_incoming(self.node, LinkType.RETURN, link_label) 605 606 output.store() /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/orm/nodes/node.pyc in add_incoming(self, source, link_type, link_label) 781 """ 782 self.validate_incoming(source, link_type, link_label) --> 783 source.validate_outgoing(self, link_type, link_label) 784 785 if self.is_stored and source.is_stored: /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/orm/nodes/process/workflow/workfunction.pyc in validate_outgoing(self, target, link_type, link_label) 40 :raise ValueError: if the proposed link is invalid 41 """ ---> 42 super(WorkFunctionNode, self).validate_outgoing(target, link_type, link_label) 43 if link_type is LinkType.RETURN and not target.is_stored: 44 raise ValueError( /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/orm/nodes/process/workflow/workflow.pyc in validate_outgoing(self, target, link_type, link_label) 70 'Workflow<{}> tried returning an unstored `Data` node. This likely means new `Data` is being created ' 71 'inside the workflow. 
In order to preserve data provenance, use a `calcfunction` to create this node ' ---> 72 'and return its output from the workflow'.format(self.process_label) 73 ) ValueError: Workflow<test> tried returning an unstored `Data` node. This likely means new `Data` is being created inside the workflow. In order to preserve data provenance, use a `calcfunction` to create this node and return its output from the workflow ```
1.0
Exception raised in `WorkChain.out` will cause it to be stuck in `Running` - This appears when trying to output an unstored data node in a workchain step for example. The exception will appear in the daemon log, but the process will never properly transition to the excepted state. Triggerable with: ``` from aiida.engine import workfunction from aiida.orm import Dict @workfunction def test(): return Dict(dict={'a': 1}) ``` causing ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-9-fbd55f77ab7c> in <module>() ----> 1 test() /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/functions.pyc in decorated_function(*args, **kwargs) 195 def decorated_function(*args, **kwargs): 196 """This wrapper function is the actual function that is called.""" --> 197 result, _ = run_get_node(*args, **kwargs) 198 return result 199 /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/functions.pyc in run_get_node(*args, **kwargs) 167 168 try: --> 169 result = process.execute() 170 finally: 171 # If the `original_handler` is set, that means the `kill_process` was bound, which needs to be reset /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/functions.pyc in execute(self) 377 def execute(self): 378 """Execute the process.""" --> 379 result = super(FunctionProcess, self).execute() 380 381 # FunctionProcesses can return a single value as output, and not a dictionary, so we should also return that /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in func_wrapper(self, *args, **kwargs) 86 if self._closed: 87 raise exceptions.ClosedError("Process is closed") ---> 88 return func(self, *args, **kwargs) 89 90 return func_wrapper /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in execute(self) 1061 """ 1062 if not 
self.has_terminated(): -> 1063 self.loop().run_sync(self.step_until_terminated) 1064 1065 return self.future().result() /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/ioloop.pyc in run_sync(self, func, timeout) 456 if not future_cell[0].done(): 457 raise TimeoutError('Operation timed out after %s seconds' % timeout) --> 458 return future_cell[0].result() 459 460 def time(self): /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout) 236 if self._exc_info is not None: 237 try: --> 238 raise_exc_info(self._exc_info) 239 finally: 240 self = None /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self) 1061 if exc_info is not None: 1062 try: -> 1063 yielded = self.gen.throw(*exc_info) 1064 finally: 1065 # Break up a reference to itself /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step_until_terminated(self) 1109 def step_until_terminated(self): 1110 while not self.has_terminated(): -> 1111 yield self.step() 1112 1113 # endregion /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self) 1053 1054 try: -> 1055 value = future.result() 1056 except Exception: 1057 self.had_exception = True /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout) 236 if self._exc_info is not None: 237 try: --> 238 raise_exc_info(self._exc_info) 239 finally: 240 self = None /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self) 1067 exc_info = None 1068 else: -> 1069 yielded = self.gen.send(value) 1070 1071 if stack_context._state.contexts is not orig_stack_contexts: /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step(self) 1100 else: 1101 # Everything nominal so transition to the next state -> 1102 self.transition_to(next_state) 1103 1104 
finally: /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs) 324 raise 325 self._transition_failing = True --> 326 self.transition_failed(initial_state_label, label, *sys.exc_info()[1:]) 327 finally: 328 self._transition_failing = False /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_failed(self, initial_state, final_state, exception, trace) 337 :type exception: :class:`Exception` 338 """ --> 339 six.reraise(type(exception), exception, trace) 340 341 def get_debug(self): /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs) 308 309 try: --> 310 self._enter_next_state(new_state) 311 except StateEntryFailed as exception: 312 new_state = exception.state /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _enter_next_state(self, next_state) 372 next_state.do_enter() 373 self._state = next_state --> 374 self._fire_state_event(StateEventHook.ENTERED_STATE, last_state) 375 376 def _create_state_instance(self, state, *args, **kwargs): /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _fire_state_event(self, hook, state) 286 def _fire_state_event(self, hook, state): 287 for callback in self._event_callbacks.get(hook, []): --> 288 callback(self, hook, state) 289 290 @super_check /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in <lambda>(_s, _h, from_state) 306 lambda _s, _h, state: self.on_entering(state)) 307 self.add_state_event_callback(state_machine.StateEventHook.ENTERED_STATE, --> 308 lambda _s, _h, from_state: self.on_entered(from_state)) 309 self.add_state_event_callback(state_machine.StateEventHook.EXITING_STATE, 310 lambda _s, _h, _state: self.on_exiting()) 
/home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/process.pyc in on_entered(self, from_state) 342 # pylint: disable=cyclic-import 343 from aiida.engine.utils import set_process_state_change_timestamp --> 344 self.update_node_state(self._state) 345 self._save_checkpoint() 346 # Update the latest process state change timestamp /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/process.pyc in update_node_state(self, state) 579 580 def update_node_state(self, state): --> 581 self.update_outputs() 582 self.node.set_process_state(state.LABEL) 583 /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/engine/processes/process.pyc in update_outputs(self) 602 output.add_incoming(self.node, LinkType.CREATE, link_label) 603 elif isinstance(self.node, orm.WorkflowNode): --> 604 output.add_incoming(self.node, LinkType.RETURN, link_label) 605 606 output.store() /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/orm/nodes/node.pyc in add_incoming(self, source, link_type, link_label) 781 """ 782 self.validate_incoming(source, link_type, link_label) --> 783 source.validate_outgoing(self, link_type, link_label) 784 785 if self.is_stored and source.is_stored: /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/orm/nodes/process/workflow/workfunction.pyc in validate_outgoing(self, target, link_type, link_label) 40 :raise ValueError: if the proposed link is invalid 41 """ ---> 42 super(WorkFunctionNode, self).validate_outgoing(target, link_type, link_label) 43 if link_type is LinkType.RETURN and not target.is_stored: 44 raise ValueError( /home/sph/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/aiida/orm/nodes/process/workflow/workflow.pyc in validate_outgoing(self, target, link_type, link_label) 70 'Workflow<{}> tried returning an unstored `Data` node. This likely means new `Data` is being created ' 71 'inside the workflow. 
In order to preserve data provenance, use a `calcfunction` to create this node ' ---> 72 'and return its output from the workflow'.format(self.process_label) 73 ) ValueError: Workflow<test> tried returning an unstored `Data` node. This likely means new `Data` is being created inside the workflow. In order to preserve data provenance, use a `calcfunction` to create this node and return its output from the workflow ```
process
exception raised in workchain out will cause it to be stuck in running this appears when trying to output an unstored data node in a workchain step for example the exception will appear in the daemon log but the process will never properly transition to the excepted state triggerable with from aiida engine import workfunction from aiida orm import dict workfunction def test return dict dict a causing valueerror traceback most recent call last in test home sph virtualenvs aiida dev local lib site packages aiida engine processes functions pyc in decorated function args kwargs def decorated function args kwargs this wrapper function is the actual function that is called result run get node args kwargs return result home sph virtualenvs aiida dev local lib site packages aiida engine processes functions pyc in run get node args kwargs try result process execute finally if the original handler is set that means the kill process was bound which needs to be reset home sph virtualenvs aiida dev local lib site packages aiida engine processes functions pyc in execute self def execute self execute the process result super functionprocess self execute functionprocesses can return a single value as output and not a dictionary so we should also return that home sph virtualenvs aiida dev local lib site packages plumpy processes pyc in func wrapper self args kwargs if self closed raise exceptions closederror process is closed return func self args kwargs return func wrapper home sph virtualenvs aiida dev local lib site packages plumpy processes pyc in execute self if not self has terminated self loop run sync self step until terminated return self future result home sph virtualenvs aiida dev local lib site packages tornado ioloop pyc in run sync self func timeout if not future cell done raise timeouterror operation timed out after s seconds timeout return future cell result def time self home sph virtualenvs aiida dev local lib site packages tornado concurrent pyc in result self 
timeout if self exc info is not none try raise exc info self exc info finally self none home sph virtualenvs aiida dev local lib site packages tornado gen pyc in run self if exc info is not none try yielded self gen throw exc info finally break up a reference to itself home sph virtualenvs aiida dev local lib site packages plumpy processes pyc in step until terminated self def step until terminated self while not self has terminated yield self step endregion home sph virtualenvs aiida dev local lib site packages tornado gen pyc in run self try value future result except exception self had exception true home sph virtualenvs aiida dev local lib site packages tornado concurrent pyc in result self timeout if self exc info is not none try raise exc info self exc info finally self none home sph virtualenvs aiida dev local lib site packages tornado gen pyc in run self exc info none else yielded self gen send value if stack context state contexts is not orig stack contexts home sph virtualenvs aiida dev local lib site packages plumpy processes pyc in step self else everything nominal so transition to the next state self transition to next state finally home sph virtualenvs aiida dev local lib site packages plumpy base state machine pyc in transition to self new state args kwargs raise self transition failing true self transition failed initial state label label sys exc info finally self transition failing false home sph virtualenvs aiida dev local lib site packages plumpy base state machine pyc in transition failed self initial state final state exception trace type exception class exception six reraise type exception exception trace def get debug self home sph virtualenvs aiida dev local lib site packages plumpy base state machine pyc in transition to self new state args kwargs try self enter next state new state except stateentryfailed as exception new state exception state home sph virtualenvs aiida dev local lib site packages plumpy base state machine pyc in enter 
next state self next state next state do enter self state next state self fire state event stateeventhook entered state last state def create state instance self state args kwargs home sph virtualenvs aiida dev local lib site packages plumpy base state machine pyc in fire state event self hook state def fire state event self hook state for callback in self event callbacks get hook callback self hook state super check home sph virtualenvs aiida dev local lib site packages plumpy processes pyc in s h from state lambda s h state self on entering state self add state event callback state machine stateeventhook entered state lambda s h from state self on entered from state self add state event callback state machine stateeventhook exiting state lambda s h state self on exiting home sph virtualenvs aiida dev local lib site packages aiida engine processes process pyc in on entered self from state pylint disable cyclic import from aiida engine utils import set process state change timestamp self update node state self state self save checkpoint update the latest process state change timestamp home sph virtualenvs aiida dev local lib site packages aiida engine processes process pyc in update node state self state def update node state self state self update outputs self node set process state state label home sph virtualenvs aiida dev local lib site packages aiida engine processes process pyc in update outputs self output add incoming self node linktype create link label elif isinstance self node orm workflownode output add incoming self node linktype return link label output store home sph virtualenvs aiida dev local lib site packages aiida orm nodes node pyc in add incoming self source link type link label self validate incoming source link type link label source validate outgoing self link type link label if self is stored and source is stored home sph virtualenvs aiida dev local lib site packages aiida orm nodes process workflow workfunction pyc in validate outgoing 
self target link type link label raise valueerror if the proposed link is invalid super workfunctionnode self validate outgoing target link type link label if link type is linktype return and not target is stored raise valueerror home sph virtualenvs aiida dev local lib site packages aiida orm nodes process workflow workflow pyc in validate outgoing self target link type link label workflow tried returning an unstored data node this likely means new data is being created inside the workflow in order to preserve data provenance use a calcfunction to create this node and return its output from the workflow format self process label valueerror workflow tried returning an unstored data node this likely means new data is being created inside the workflow in order to preserve data provenance use a calcfunction to create this node and return its output from the workflow
1
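The failure captured in the traceback of this record — a RETURN link pointing at a `Data` node that was never stored — can be sketched generically in Python. The classes below are hypothetical stand-ins, not AiiDA's actual API:

```python
class Data:
    """A minimal stand-in for a provenance node that must be persisted."""

    def __init__(self):
        self._stored = False

    def store(self):
        self._stored = True
        return self

    @property
    def is_stored(self):
        return self._stored


def validate_workflow_output(node, process_label):
    # Mirrors the check in WorkflowNode.validate_outgoing shown above: a
    # workflow may only RETURN data that already exists in the database,
    # otherwise the provenance of that data would be lost.
    if not node.is_stored:
        raise ValueError(
            "Workflow<{}> tried returning an unstored `Data` node".format(process_label)
        )
```

A `calcfunction` sidesteps the error because it stores the node it creates before the workflow returns it, which is what `store()` stands in for here.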
296,692
25,570,060,882
IssuesEvent
2022-11-30 16:58:43
EmbarkStudios/rust-gpu
https://api.github.com/repos/EmbarkStudios/rust-gpu
opened
We should replace `rustc_codegen_spirv::linker::test` unit tests with compiletest ones.
t: enhancement a: test
The main unique aspect of these tests is they take SPIR-V assembly *as an input*, *not* Rust code, e.g.: https://github.com/EmbarkStudios/rust-gpu/blob/acb05d379982f35e6d4fbd85ff28af3e9876cf4c/crates/rustc_codegen_spirv/src/linker/test.rs#L185-L202 However, we might be able to use `module_asm!` to feed SPIR-V assembly into the compilation, and `compiletest` does have the ability to introduce dependencies to link against. The main weirdness we might need to deal with is all the definitions from e.g. `core` that we don't use, but DCE might be able to clean that up. (Or we could even use e.g. `extern "C"` FFI in Rust code to describe such situations without `module_asm!` at all!) If we can do this transition, we wouldn't have to deal with weird artificial compiler sessions and e.g.: * #956
1.0
We should replace `rustc_codegen_spirv::linker::test` unit tests with compiletest ones. - The main unique aspect of these tests is they take SPIR-V assembly *as an input*, *not* Rust code, e.g.: https://github.com/EmbarkStudios/rust-gpu/blob/acb05d379982f35e6d4fbd85ff28af3e9876cf4c/crates/rustc_codegen_spirv/src/linker/test.rs#L185-L202 However, we might be able to use `module_asm!` to feed SPIR-V assembly into the compilation, and `compiletest` does have the ability to introduce dependencies to link against. The main weirdness we might need to deal with is all the definitions from e.g. `core` that we don't use, but DCE might be able to clean that up. (Or we could even use e.g. `extern "C"` FFI in Rust code to describe such situations without `module_asm!` at all!) If we can do this transition, we wouldn't have to deal with weird artificial compiler sessions and e.g.: * #956
non_process
we should replace rustc codegen spirv linker test unit tests with compiletest ones the main unique aspect of these tests is they take spir v assembly as an input not rust code e g however we might be able to use module asm to feed spir v assembly into the compilation and compiletest does have the ability to introduce dependencies to link against the main weirdness we might need to deal with is all the definitions from e g core that we don t use but dce might be able to clean that up or we could even use e g extern c ffi in rust code to describe such situations without module asm at all if we can do this transition we wouldn t have to deal with weird artificial compiler sessions and e g
0
149,568
19,581,494,290
IssuesEvent
2022-01-04 22:02:05
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
closed
Bug: Using Sourcegraph.com GraphQL API from other websites is broken
bug security sourcegraph.com
Problem: If you try to use `https://sourcegraph.com/.api/graphql` from another website, it is blocked due to CORS ~because we're not setting any `Content-Security-Policy` for responses from that URL, it defaults and is thus blocked:~ ~Content Security Policy: The page’s settings blocked the loading of a resource at https://sourcegraph.com/.api/graphql?SearchContexts (“default-src”).~ **Are you ready for some history?** * Based on my memory, we have always intended for the Sourcegraph.com GraphQL API to be used as broadly and accessibly as possible - including by other websites, by unauthenticated users on https://sourcegraph.com/api/console, via the `src` CLI, and literally everywhere else. * We explicitly designed the CORS handling of the GraphQL API to _only ever allow cookie-based session auth_ [_if on the same domain_ (in so-called "non-simple" CORS requests)](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/cmd/frontend/internal/session/session.go#L299-337) * This is closely related to this old issue I have open about [removing our redundant CSRF cookies #7658](https://github.com/sourcegraph/sourcegraph/issues/7658) and if you're wondering "how is that secure?" see [my detailed write-up here back in 2018](https://github.com/sourcegraph/sourcegraph/issues/227#issuecomment-426482380) which is still true today - or this more brief explanation of [OWASP: using custom request headers to prevent CSRF](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers). In short: 1. ~https://sourcegraph.com/.api/graphql should have a `Content-Security-Policy` which allows requests from any origin.~ 2. ~This should be completely safe and secure to enable, and was the original intended behavior - but **obviously** needs verification.~
True
Bug: Using Sourcegraph.com GraphQL API from other websites is broken - Problem: If you try to use `https://sourcegraph.com/.api/graphql` from another website, it is blocked due to CORS ~because we're not setting any `Content-Security-Policy` for responses from that URL, it defaults and is thus blocked:~ ~Content Security Policy: The page’s settings blocked the loading of a resource at https://sourcegraph.com/.api/graphql?SearchContexts (“default-src”).~ **Are you ready for some history?** * Based on my memory, we have always intended for the Sourcegraph.com GraphQL API to be used as broadly and accessibly as possible - including by other websites, by unauthenticated users on https://sourcegraph.com/api/console, via the `src` CLI, and literally everywhere else. * We explicitly designed the CORS handling of the GraphQL API to _only ever allow cookie-based session auth_ [_if on the same domain_ (in so-called "non-simple" CORS requests)](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/cmd/frontend/internal/session/session.go#L299-337) * This is closely related to this old issue I have open about [removing our redundant CSRF cookies #7658](https://github.com/sourcegraph/sourcegraph/issues/7658) and if you're wondering "how is that secure?" see [my detailed write-up here back in 2018](https://github.com/sourcegraph/sourcegraph/issues/227#issuecomment-426482380) which is still true today - or this more brief explanation of [OWASP: using custom request headers to prevent CSRF](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers). In short: 1. ~https://sourcegraph.com/.api/graphql should have a `Content-Security-Policy` which allows requests from any origin.~ 2. ~This should be completely safe and secure to enable, and was the original intended behavior - but **obviously** needs verification.~
non_process
bug using sourcegraph com graphql api from other websites is broken problem if you try to use from another website it is blocked due to cors because we re not setting any content security policy for responses from that url it defaults and is thus blocked content security policy the page’s settings blocked the loading of a resource at “default src” are you ready for some history based on my memory we have always intended for the sourcegraph com graphql api to be used as broadly and accessibly as possible including by other websites by unauthenticated users on via the src cli and literally everywhere else we explicitly designed the cors handling of the graphql api to only ever allow cookie based session auth this is closely related to this old issue i have open about and if you re wondering how is that secure see which is still true today or this more brief explanation of in short should have a content security policy which allows requests from any origin this should be completely safe and secure to enable and was the original intended behavior but obviously needs verification
0
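The OWASP technique this record cites — using a custom request header as a CSRF guard, since a plain cross-site form submission cannot attach custom headers without triggering a CORS preflight — can be sketched as a server-side check. The header name and helper are hypothetical, not Sourcegraph's implementation:

```python
def is_csrf_safe(headers, required_header="X-Requested-With"):
    """Return True if the request carries the custom header.

    A cross-site form POST or <img>/<script> fetch cannot set custom
    headers; only same-origin JavaScript (or a request that passed a CORS
    preflight) can, so the header's mere presence rules out the classic
    CSRF vectors even when cookie-based session auth is in play.
    """
    normalized = {k.lower() for k in headers}
    return required_header.lower() in normalized
```

A fetch() from the app's own origin would pass the check, while a forged cross-site form submission, which can only send "simple" headers, would be rejected.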
147,395
23,211,211,105
IssuesEvent
2022-08-02 10:16:59
zuri-training/anima_lib_team98
https://api.github.com/repos/zuri-training/anima_lib_team98
opened
Continue designing the account screen -design
design
![WhatsApp Image 2022-08-02 at 8 09 13 AM](https://user-images.githubusercontent.com/60884987/182351010-01c0aa63-d9d1-46d9-b99c-0f3ff2ee3358.jpeg) make use of text fields to display the email and name of the user with a tertiary button that lets them edit (as seen in the image above). Also make a frame for change password.
1.0
Continue designing the account screen -design - ![WhatsApp Image 2022-08-02 at 8 09 13 AM](https://user-images.githubusercontent.com/60884987/182351010-01c0aa63-d9d1-46d9-b99c-0f3ff2ee3358.jpeg) make use of text fields to display the email and name of the user with a tertiary button that lets them edit (as seen in the image above). Also make a frame for change password.
non_process
continue designing the account screen design make use of text fields to display the email and name of the user with a tertiary button that lets them edit as seen in the image above also make a frame for change password
0
9,908
12,949,431,554
IssuesEvent
2020-07-19 09:06:01
cetic/tsorage
https://api.github.com/repos/cetic/tsorage
opened
Extend and open the Processor component with jslt
enhancement processing
Currently, aggregated and derived values are defined as Scala code written in the Processor component. This approach is quite efficient, but it makes any adaptation relatively difficult, since the only practical way to extend the default behaviour consists of developing an external process that consumes and produces messages through Kafka topics. As an alternative, I propose to consider [jslt](https://github.com/schibsted/jslt). This Java library provides a DSL for describing transformations on JSON values. Such a description could be placed in a configuration file that could be edited by the user. The Processor module could then take into account any change to its configuration files, and apply the corresponding transformations.
1.0
Extend and open the Processor component with jslt - Currently, aggregated and derived values are defined as Scala code written in the Processor component. This approach is quite efficient, but it makes any adaptation relatively difficult, since the only practical way to extend the default behaviour consists of developing an external process that consumes and produces messages through Kafka topics. As an alternative, I propose to consider [jslt](https://github.com/schibsted/jslt). This Java library provides a DSL for describing transformations on JSON values. Such a description could be placed in a configuration file that could be edited by the user. The Processor module could then take into account any change to its configuration files, and apply the corresponding transformations.
process
extend and open the processor component with jslt currently aggregated and derived values are defined as scala code written in the processor component this approach is quite efficient but it makes any adaptation relatively difficult since the only practical way to extend the default behaviour consists of developing an external process that consumes and produces messages through kafka topics as an alternative i propose to consider this java library provides a dsl for describing transformations on json values such a description could be placed in a configuration file that could be edited by the user the processor module could then take into account any change to its configuration files and apply the corresponding transformations
1
11,664
14,528,454,961
IssuesEvent
2020-12-14 16:33:24
cncf/sig-security
https://api.github.com/repos/cncf/sig-security
closed
[Suggestion] Establish Criteria and Boundaries for Security Assessment
assessment-process inactive suggestion
**Description:** Provide clearer criteria and boundaries for Security Assessment process or rename it. **Impact:** The current Security Assessment process is unclear, therefore the artifacts are inconsistent. Additionally, having the project owner/lead deliver an assessment is not considered a "best practice." **Scope:** Either create a clearer SOP for the assessment process or convert into a Security Overview document. Additional info: https://github.com/cncf/sig-security/issues/395 https://github.com/cncf/sig-security/issues/394
1.0
[Suggestion] Establish Criteria and Boundaries for Security Assessment - **Description:** Provide clearer criteria and boundaries for Security Assessment process or rename it. **Impact:** The current Security Assessment process is unclear, therefore the artifacts are inconsistent. Additionally, having the project owner/lead deliver an assessment is not considered a "best practice." **Scope:** Either create a clearer SOP for the assessment process or convert into a Security Overview document. Additional info: https://github.com/cncf/sig-security/issues/395 https://github.com/cncf/sig-security/issues/394
process
establish criteria and boundaries for security assessment description provide clearer criteria and boundaries for security assessment process or rename it impact the current security assessment process is unclear therefore the artifacts are inconsistent additionally having the project owner lead deliver an assessment is not considered a best practice scope either create a clearer sop for the assessment process or convert into a security overview document additional info
1
30,056
6,000,850,702
IssuesEvent
2017-06-05 07:06:01
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
As<> method should prevent boxing operation
defect in progress
As<> method should prevent boxing operation ### Steps To Reproduce ```c# public class Program { public static void Main() { DateTime val1 = new DateTime(636318720000000000); Date val2 = (val1).As<Date>(); var offset = val2.GetTimezoneOffset(); } } ``` ### Actual Result JavaScript error: ``` TypeError: val2.getTimezoneOffset is not a function at Function.Main ``` ## See Also * https://forums.bridge.net/forum/bridge-net-pro/bugs/4329-system-exception-in-dev-deck-net
1.0
As<> method should prevent boxing operation - As<> method should prevent boxing operation ### Steps To Reproduce ```c# public class Program { public static void Main() { DateTime val1 = new DateTime(636318720000000000); Date val2 = (val1).As<Date>(); var offset = val2.GetTimezoneOffset(); } } ``` ### Actual Result JavaScript error: ``` TypeError: val2.getTimezoneOffset is not a function at Function.Main ``` ## See Also * https://forums.bridge.net/forum/bridge-net-pro/bugs/4329-system-exception-in-dev-deck-net
non_process
as method should prevent boxing operation as method should prevent boxing operation steps to reproduce c public class program public static void main datetime new datetime date as var offset gettimezoneoffset actual result javascript error typeerror gettimezoneoffset is not a function at function main see also
0
18,985
24,977,509,979
IssuesEvent
2022-11-02 09:06:26
sophgo/tpu-mlir
https://api.github.com/repos/sophgo/tpu-mlir
closed
Sample for xception model
task processing
Write Sample for xception, to do classification. Refer to classify_inception_v3.py. ![](https://user-images.githubusercontent.com/10864766/194826376-f29f3cac-57dc-42d3-b2ce-e325efc10892.png)
1.0
Sample for xception model - Write Sample for xception, to do classification. Refer to classify_inception_v3.py. ![](https://user-images.githubusercontent.com/10864766/194826376-f29f3cac-57dc-42d3-b2ce-e325efc10892.png)
process
sample for xception model write sample for xception to do classification refer to classify inception py
1
5,173
7,954,689,592
IssuesEvent
2018-07-12 08:23:36
anatoliyfedorenko/bst
https://api.github.com/repos/anatoliyfedorenko/bst
closed
Configure Travis CI for the project
In process
Need to properly configure Travis CI for this project and explain the whole process of configuring it here
1.0
Configure Travis CI for the project - Need to properly configure Travis CI for this project and explain the whole process of configuring it here
process
configure travis ci for the project need to properly configure travis ci for this project and explain the whole process of configuring it here
1
2,259
3,362,800,611
IssuesEvent
2015-11-20 08:47:46
cogneco/ooc-kean
https://api.github.com/repos/cogneco/ooc-kean
reopened
FloatMatrix: reuse allocated memory
performance
In `FloatMatrix`, instead of creating a new instance to store the result of e.g. `transpose` and freeing the original data, reuse the original data. Also, if transposing a 1-dimensional `FloatMatrix`, just swap `this size` and return.
True
FloatMatrix: reuse allocated memory - In `FloatMatrix`, instead of creating a new instance to store the result of e.g. `transpose` and freeing the original data, reuse the original data. Also, if transposing a 1-dimensional `FloatMatrix`, just swap `this size` and return.
non_process
floatmatrix reuse allocated memory in floatmatrix instead of creating a new instance to store the result of e g transpose and freeing the original data reuse the original data also if transposing a dimensional floatmatrix just swap this size and return
0
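The shortcut described in this record — transposing a 1-dimensional matrix by swapping its dimensions instead of touching the buffer — can be sketched in Python. The `Matrix` class is a hypothetical illustration, not ooc-kean's `FloatMatrix`:

```python
class Matrix:
    """Row-major matrix stored in a flat list, sketching buffer-reusing transpose."""

    def __init__(self, width, height, data):
        assert len(data) == width * height
        self.width, self.height, self.data = width, height, data

    def transpose(self):
        # 1-D case: a row vector transposed is a column vector with the
        # identical flat storage, so swapping the dimensions suffices --
        # no allocation, no copy.
        if self.width == 1 or self.height == 1:
            self.width, self.height = self.height, self.width
            return self
        # General case: rebuild the buffer in transposed order. (A true
        # in-place permutation of the flat buffer would follow index
        # cycles; this sketch only shows the dimension-swap idea plus a
        # straightforward rebuild.)
        self.data = [
            self.data[r * self.width + c]
            for c in range(self.width)
            for r in range(self.height)
        ]
        self.width, self.height = self.height, self.width
        return self
```

For the 1-D case the method returns the same object holding the same list, which is exactly the "just swap `this size` and return" optimization the issue asks for.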
22,071
30,594,493,659
IssuesEvent
2023-07-21 20:21:38
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Open terminal/bash/console/cmd on macOS/linux using ProcessStartInfo and RedirectStandardOutput/Error
area-System.Diagnostics.Process untriaged
### Description I'm trying to open a Terminal (CMD) and execute some commands while monitoring the outputs of those commands. However, I can't seem to find a way to do that on macOS. Maybe what I'm missing is a /C like in CMD but I could not find any info about it online. `/C -Carries out the command specified by string and then terminates.` On some forums people suggest using ScriptEditor, however that won't work for my workflow because I want to monitor the responses and, based on them, execute further commands. ### Reproduction Steps ```csharp var workingDirectory = "PLACE_HODLER"; var processStartInfo = new ProcessStartInfo("/System/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal", "ls") { RedirectStandardOutput = true, RedirectStandardError = true, UseShellExecute = false, WorkingDirectory = workingDirectory, CreateNoWindow = true }; ``` ### Expected behavior Should open a terminal and list the files inside of it. ### Actual behavior Opens the terminal and does not do anything.
1.0
Open terminal/bash/console/cmd on macOS/linux using ProcessStartInfo and RedirectStandardOutput/Error - ### Description I'm trying to open a Terminal (CMD) and execute some commands while monitoring the outputs of those commands. However, I can't seem to find a way to do that on macOS. Maybe what I'm missing is a /C like in CMD but I could not find any info about it online. `/C -Carries out the command specified by string and then terminates.` On some forums people suggest using ScriptEditor, however that won't work for my workflow because I want to monitor the responses and, based on them, execute further commands. ### Reproduction Steps ```csharp var workingDirectory = "PLACE_HODLER"; var processStartInfo = new ProcessStartInfo("/System/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal", "ls") { RedirectStandardOutput = true, RedirectStandardError = true, UseShellExecute = false, WorkingDirectory = workingDirectory, CreateNoWindow = true }; ``` ### Expected behavior Should open a terminal and list the files inside of it. ### Actual behavior Opens the terminal and does not do anything.
process
open terminal bash console cmd on macos linux using processstartinfo and redirectstandardoutput error description i m trying to open a terminal cmd and execute some commands while monitoring the outputs of those commands however i can t seem to find a way to do that on macos maybe what i m missing is a c like in cmd but i could not find any info about it online c carries out the command specified by string and then terminates on some forums people suggest using scripteditor however that won t work for my workflow because i want to monitor the responses and based on them execute further commands reproduction steps csharp var workingdirectory place hodler var processstartinfo new processstartinfo system applications utilities terminal app contents macos terminal ls redirectstandardoutput true redirectstandarderror true useshellexecute false workingdirectory workingdirectory createnowindow true expected behavior should open a terminal and list the files inside of it actual behavior opens the terminal and does not do anything
1
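The behavior this record is after — running a command headlessly while monitoring its stdout/stderr, rather than spawning the Terminal.app GUI — maps onto Python's `subprocess` module. The sketch below is a hypothetical cross-platform equivalent of `ProcessStartInfo` with `RedirectStandardOutput/Error`, not the .NET API itself:

```python
import subprocess

def run_and_capture(args, cwd=None):
    """Run a command directly and capture its output streams.

    Terminal.app is only a GUI host for a shell; to monitor a command's
    output you execute the command (or a shell with `-c`, the analogue of
    cmd's `/C`) and redirect its standard streams.
    """
    result = subprocess.run(
        args,
        cwd=cwd,
        capture_output=True,  # RedirectStandardOutput / RedirectStandardError
        text=True,            # decode the streams as str
    )
    return result.returncode, result.stdout, result.stderr

# `/bin/sh -c` plays the role of `cmd /C`: run one command, then exit.
code, out, err = run_and_capture(["/bin/sh", "-c", "echo hello"])
```

With `capture_output=True` no terminal window is involved at all; the program reads the streams itself and can issue further commands based on the responses, which the original `Terminal.app` invocation cannot provide.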
15,645
19,846,225,478
IssuesEvent
2022-01-21 06:47:28
ooi-data/CE04OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_mean_directional_recovered
https://api.github.com/repos/ooi-data/CE04OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_mean_directional_recovered
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T06:47:28.267300. ## Details Flow name: `CE04OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_mean_directional_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in 
_get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T06:47:28.267300. ## Details Flow name: `CE04OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_mean_directional_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async 
raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host wavss a dcl mean directional recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site 
packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
1
844
2,594,202,464
IssuesEvent
2015-02-20 00:40:31
BALL-Project/ball
https://api.github.com/repos/BALL-Project/ball
closed
Lines in the line model are to fat
C: VIEW P: major R: fixed T: defect
**Reported by dstoeckel on 23 Nov 38909642 14:13 UTC** The lines used for the line representation are to thick. If the structure is crowded (this is the case for every stucture one wants to view using the line model) virtually _nothing_ can be seen. A way to alleviate this would be to choose a thiner line style for drawing the model.
1.0
Lines in the line model are to fat - **Reported by dstoeckel on 23 Nov 38909642 14:13 UTC** The lines used for the line representation are to thick. If the structure is crowded (this is the case for every stucture one wants to view using the line model) virtually _nothing_ can be seen. A way to alleviate this would be to choose a thiner line style for drawing the model.
non_process
lines in the line model are to fat reported by dstoeckel on nov utc the lines used for the line representation are to thick if the structure is crowded this is the case for every stucture one wants to view using the line model virtually nothing can be seen a way to alleviate this would be to choose a thiner line style for drawing the model
0
113,747
24,481,980,903
IssuesEvent
2022-10-09 00:17:44
Keerat666/LeetCode-HacktoberFest22
https://api.github.com/repos/Keerat666/LeetCode-HacktoberFest22
opened
Solution for Removing duplicates from a sorted array
good first issue hacktoberfest hacktoberfest2022 hacktoberfest-accepted Leetcode-Easy
Write a program to remove duplicates from a sorted array Link : https://leetcode.com/problems/remove-duplicates-from-sorted-array/
1.0
Solution for Removing duplicates from a sorted array - Write a program to remove duplicates from a sorted array Link : https://leetcode.com/problems/remove-duplicates-from-sorted-array/
non_process
solution for removing duplicates from a sorted array write a program to remove duplicates from a sorted array link
0
92
2,534,816,257
IssuesEvent
2015-01-25 11:06:06
chrisalexander/Learn-Chinese-app
https://api.github.com/repos/chrisalexander/Learn-Chinese-app
closed
CurrentState should be called CurrentStatus and return IEnumerable<string>
LongRunningProcess
Also update the UI to render accordingly, plus the tests
1.0
CurrentState should be called CurrentStatus and return IEnumerable<string> - Also update the UI to render accordingly, plus the tests
process
currentstate should be called currentstatus and return ienumerable also update the ui to render accordingly plus the tests
1
412,487
12,042,918,814
IssuesEvent
2020-04-14 11:28:42
ahmedkaludi/accelerated-mobile-pages
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
closed
Blank pages appear when we are using cloning posts
NEXT UPDATE [Priority: HIGH] bug
Blank pages appear when we are using cloning posts ref: https://secure.helpscout.net/conversation/1117539555/118304?folderId=3575684
1.0
Blank pages appear when we are using cloning posts - Blank pages appear when we are using cloning posts ref: https://secure.helpscout.net/conversation/1117539555/118304?folderId=3575684
non_process
blank pages appear when we are using cloning posts blank pages appear when we are using cloning posts ref
0
331,711
24,322,148,355
IssuesEvent
2022-09-30 11:47:14
Interactions-as-a-Service/d1-orm
https://api.github.com/repos/Interactions-as-a-Service/d1-orm
closed
Todo: Document UPSERTing with models
documentation
The upsert guide currently points to the models guide, and vice versa Quick PR needed to just add an example in the upserting guide
1.0
Todo: Document UPSERTing with models - The upsert guide currently points to the models guide, and vice versa Quick PR needed to just add an example in the upserting guide
non_process
todo document upserting with models the upsert guide currently points to the models guide and vice versa quick pr needed to just add an example in the upserting guide
0
14,618
17,762,363,571
IssuesEvent
2021-08-29 23:15:49
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
test_multiprocessing.py RuntimeError: unable to open shared memory object in read-write mode
module: multiprocessing module: tests triaged
macOS 10.12.6 CUDA 9.0rc CUDNN 7.0 for CUDA 9.0rc Python 3.6 (anaconda) Xcode 8.3.3 Apple LLVM version 8.1.0 (clang-802.0.42) Pytorch built fine with latest update. When running test_multiprocessing.py (python test_multiprocessing.py), I got the following error and it hanged (actually not proceeded anymore. I did Control +C to quit) > s.sEsssss.E.Traceback (most recent call last): > File "/anaconda/lib/python3.6/multiprocessing/queues.py", line 241, in _feed > obj = _ForkingPickler.dumps(obj) > File "/anaconda/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps > cls(buf, protocol).dump(obj) > File "/anaconda/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 108, in reduce_storage > metadata = storage._share_filename_() > RuntimeError: unable to open shared memory object </torch_547_2991312382> in read-write mode at /Users/zafer/deeplearning/buildenv/pytorch/torch/lib/TH/THAllocator.c:230 > FEs.Traceback (most recent call last): > File "/anaconda/lib/python3.6/multiprocessing/queues.py", line 241, in _feed > obj = _ForkingPickler.dumps(obj) > File "/anaconda/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps > cls(buf, protocol).dump(obj) > File "/anaconda/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 108, in reduce_storage > metadata = storage._share_filename_() > **RuntimeError: unable to open shared memory object </torch_547_2991312382> in read-write mode at /Users/zafer/deeplearning/buildenv/pytorch/torch/lib/TH/THAllocator.c:230** >
1.0
test_multiprocessing.py RuntimeError: unable to open shared memory object in read-write mode - macOS 10.12.6 CUDA 9.0rc CUDNN 7.0 for CUDA 9.0rc Python 3.6 (anaconda) Xcode 8.3.3 Apple LLVM version 8.1.0 (clang-802.0.42) Pytorch built fine with latest update. When running test_multiprocessing.py (python test_multiprocessing.py), I got the following error and it hanged (actually not proceeded anymore. I did Control +C to quit) > s.sEsssss.E.Traceback (most recent call last): > File "/anaconda/lib/python3.6/multiprocessing/queues.py", line 241, in _feed > obj = _ForkingPickler.dumps(obj) > File "/anaconda/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps > cls(buf, protocol).dump(obj) > File "/anaconda/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 108, in reduce_storage > metadata = storage._share_filename_() > RuntimeError: unable to open shared memory object </torch_547_2991312382> in read-write mode at /Users/zafer/deeplearning/buildenv/pytorch/torch/lib/TH/THAllocator.c:230 > FEs.Traceback (most recent call last): > File "/anaconda/lib/python3.6/multiprocessing/queues.py", line 241, in _feed > obj = _ForkingPickler.dumps(obj) > File "/anaconda/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps > cls(buf, protocol).dump(obj) > File "/anaconda/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 108, in reduce_storage > metadata = storage._share_filename_() > **RuntimeError: unable to open shared memory object </torch_547_2991312382> in read-write mode at /Users/zafer/deeplearning/buildenv/pytorch/torch/lib/TH/THAllocator.c:230** >
process
test multiprocessing py runtimeerror unable to open shared memory object in read write mode macos cuda cudnn for cuda python anaconda xcode apple llvm version clang pytorch built fine with latest update when running test multiprocessing py python test multiprocessing py i got the following error and it hanged actually not proceeded anymore i did control c to quit s sesssss e traceback most recent call last file anaconda lib multiprocessing queues py line in feed obj forkingpickler dumps obj file anaconda lib multiprocessing reduction py line in dumps cls buf protocol dump obj file anaconda lib site packages torch multiprocessing reductions py line in reduce storage metadata storage share filename runtimeerror unable to open shared memory object in read write mode at users zafer deeplearning buildenv pytorch torch lib th thallocator c fes traceback most recent call last file anaconda lib multiprocessing queues py line in feed obj forkingpickler dumps obj file anaconda lib multiprocessing reduction py line in dumps cls buf protocol dump obj file anaconda lib site packages torch multiprocessing reductions py line in reduce storage metadata storage share filename runtimeerror unable to open shared memory object in read write mode at users zafer deeplearning buildenv pytorch torch lib th thallocator c
1
477,626
13,765,495,147
IssuesEvent
2020-10-07 13:29:09
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
firefox-source-docs.mozilla.org - see bug description
browser-chrome priority-important
<!-- @browser: Chrome 85.0.4183 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36 --> <!-- @reported_with: unknown --> **URL**: https://firefox-source-docs.mozilla.org/setup/windows_build.html **Browser / Version**: Chrome 85.0.4183 **Operating System**: Windows 8 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Misspelled Word **Steps to Reproduce**: Under the section of Build Firefox. Mercurial is written as Mercuial. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/10/5aaa061c-ba23-4621-a51c-6ff28c006675.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
firefox-source-docs.mozilla.org - see bug description - <!-- @browser: Chrome 85.0.4183 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36 --> <!-- @reported_with: unknown --> **URL**: https://firefox-source-docs.mozilla.org/setup/windows_build.html **Browser / Version**: Chrome 85.0.4183 **Operating System**: Windows 8 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Misspelled Word **Steps to Reproduce**: Under the section of Build Firefox. Mercurial is written as Mercuial. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/10/5aaa061c-ba23-4621-a51c-6ff28c006675.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
firefox source docs mozilla org see bug description url browser version chrome operating system windows tested another browser yes chrome problem type something else description misspelled word steps to reproduce under the section of build firefox mercurial is written as mercuial view the screenshot img alt screenshot src browser configuration none from with ❤️
0
140,650
11,354,446,859
IssuesEvent
2020-01-24 17:38:24
einsteinpy/einsteinpy
https://api.github.com/repos/einsteinpy/einsteinpy
reopened
Increase code coverage across different modules (easy ones)
good first issue tests
🐞 **Problem** The goal is 100% code coverage. It's not a very realistic goal because sometimes we lack the knowledge to do so. But I am listing some of the parts, where coverage can be increased easily. 🎯 **Goal** Try to target these files. The lines untested would be marked red [here](https://codecov.io/gh/einsteinpy/einsteinpy/tree/master/src/einsteinpy). - [ ] `coordinates/core.py` - [ ] `ijit.py` - [ ] `symbolic/vector.py` - [ ] `symbolic/tensor.py` - [ ] One is always welcome to find his own piece of code to test !! 💡 **Possible solutions** - [What is coverage](https://confluence.atlassian.com/clover/about-code-coverage-71599496.html) - tests are present in `src\einsteinpy\tests\test_<module name>\test_<file name>` 📋 **Steps to solve the problem** * Comment below about what you've started working on. * Add, commit, push your changes * Submit a pull request and add this in comments - `Addresses #<put issue number here>` * Ask for a review in comments section of pull request * Celebrate your contribution to this project 🎉 @shreyasbapat if possible, add some more!
1.0
Increase code coverage across different modules (easy ones) - 🐞 **Problem** The goal is 100% code coverage. It's not a very realistic goal because sometimes we lack the knowledge to do so. But I am listing some of the parts, where coverage can be increased easily. 🎯 **Goal** Try to target these files. The lines untested would be marked red [here](https://codecov.io/gh/einsteinpy/einsteinpy/tree/master/src/einsteinpy). - [ ] `coordinates/core.py` - [ ] `ijit.py` - [ ] `symbolic/vector.py` - [ ] `symbolic/tensor.py` - [ ] One is always welcome to find his own piece of code to test !! 💡 **Possible solutions** - [What is coverage](https://confluence.atlassian.com/clover/about-code-coverage-71599496.html) - tests are present in `src\einsteinpy\tests\test_<module name>\test_<file name>` 📋 **Steps to solve the problem** * Comment below about what you've started working on. * Add, commit, push your changes * Submit a pull request and add this in comments - `Addresses #<put issue number here>` * Ask for a review in comments section of pull request * Celebrate your contribution to this project 🎉 @shreyasbapat if possible, add some more!
non_process
increase code coverage across different modules easy ones 🐞 problem the goal is code coverage it s not a very realistic goal because sometimes we lack the knowledge to do so but i am listing some of the parts where coverage can be increased easily 🎯 goal try to target these files the lines untested would be marked red coordinates core py ijit py symbolic vector py symbolic tensor py one is always welcome to find his own piece of code to test 💡 possible solutions tests are present in src einsteinpy tests test test 📋 steps to solve the problem comment below about what you ve started working on add commit push your changes submit a pull request and add this in comments addresses ask for a review in comments section of pull request celebrate your contribution to this project 🎉 shreyasbapat if possible add some more
0
129,149
10,564,718,473
IssuesEvent
2019-10-05 04:39:15
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
[Backport] Catalog app does not show correct template version on Edit app
[zube]: To Test team/ui
Backport of rancher/rancher#23051
1.0
[Backport] Catalog app does not show correct template version on Edit app - Backport of rancher/rancher#23051
non_process
catalog app does not show correct template version on edit app backport of rancher rancher
0
17,886
23,848,007,103
IssuesEvent
2022-09-06 15:24:05
OpenDataScotland/the_od_bods
https://api.github.com/repos/OpenDataScotland/the_od_bods
opened
Fix NLS Licensing Treatment
bug good first issue data processing back end
Having removed licencing treatment in the NLS Scraper https://github.com/OpenDataScotland/the_od_bods/commit/56c70c9b63b68fc00324be633f7a086af862cda0 has knock on effects in end results. Correct this such that processing is done in merge_data.py but NLS scraper needs to return 1 licence, some assets have multiple licences. Might want to consider why an asset SHOULD have more than 1 licence. https://opendatascotland.slack.com/archives/C02HEHDL8AY/p1662122527467149?thread_ts=1662122284.974809&cid=C02HEHDL8AY ![image](https://user-images.githubusercontent.com/47697803/188674083-3aaae543-bf7e-469d-aeda-5f2528325199.png)
1.0
Fix NLS Licensing Treatment - Having removed licencing treatment in the NLS Scraper https://github.com/OpenDataScotland/the_od_bods/commit/56c70c9b63b68fc00324be633f7a086af862cda0 has knock on effects in end results. Correct this such that processing is done in merge_data.py but NLS scraper needs to return 1 licence, some assets have multiple licences. Might want to consider why an asset SHOULD have more than 1 licence. https://opendatascotland.slack.com/archives/C02HEHDL8AY/p1662122527467149?thread_ts=1662122284.974809&cid=C02HEHDL8AY ![image](https://user-images.githubusercontent.com/47697803/188674083-3aaae543-bf7e-469d-aeda-5f2528325199.png)
process
fix nls licensing treatment having removed licencing treatment in the nls scraper has knock on effects in end results correct this such that processing is done in merge data py but nls scraper needs to return licence some assets have multiple licences might want to consider why an asset should have more than licence
1
17,818
23,741,919,539
IssuesEvent
2022-08-31 13:08:37
cloudfoundry/korifi
https://api.github.com/repos/cloudfoundry/korifi
opened
[Feature]: Developer can push apps using the top-level `timeout` field in the manifest
Top-level process config
### Blockers/Dependencies _No response_ ### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the following node app: ```js var http = require('http'); const DOWNTIME_SECONDS = 65; var downUntil = new Date().getTime() + DOWNTIME_SECONDS * 1000; http.createServer(function (request, response) { var now = new Date().getTime(); if (now > downUntil) { response.writeHead(200, {'Content-Type': 'text/plain'}); response.end('ok'); } else { response.writeHead(500, {'Content-Type': 'text/plain'}); response.end('no - wait for ' + (downUntil - now) / 1000 + ' seconds'); } }).listen(process.env.PORT); ``` with the following `manifest.yml`: ```yaml --- applications: - name: real-app timeout: 70 processes: - type: web health-check-type: http ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 1/1 memory usage: 256M start command: npm start state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app timeout: 50 processes: - type: web health-check-type: http timeout: 70 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above ### Dev Notes * The default `timeout` is 60 seconds - if we're not applying this default properly already, we should.
1.0
[Feature]: Developer can push apps using the top-level `timeout` field in the manifest - ### Blockers/Dependencies _No response_ ### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the following node app: ```js var http = require('http'); const DOWNTIME_SECONDS = 65; var downUntil = new Date().getTime() + DOWNTIME_SECONDS * 1000; http.createServer(function (request, response) { var now = new Date().getTime(); if (now > downUntil) { response.writeHead(200, {'Content-Type': 'text/plain'}); response.end('ok'); } else { response.writeHead(500, {'Content-Type': 'text/plain'}); response.end('no - wait for ' + (downUntil - now) / 1000 + ' seconds'); } }).listen(process.env.PORT); ``` with the following `manifest.yml`: ```yaml --- applications: - name: real-app timeout: 70 processes: - type: web health-check-type: http ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 1/1 memory usage: 256M start command: npm start state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app timeout: 50 processes: - type: web health-check-type: http timeout: 70 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above ### Dev Notes * The default `timeout` is 60 seconds - if we're not applying this default properly already, we should.
process
developer can push apps using the top level timeout field in the manifest blockers dependencies no response background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the following node app js var http require http const downtime seconds var downuntil new date gettime downtime seconds http createserver function request response var now new date gettime if now downuntil response writehead content type text plain response end ok else response writehead content type text plain response end no wait for downuntil now seconds listen process env port with the following manifest yml yaml applications name real app timeout processes type web health check type http when i cf push then i see the push succeeds with an output similar to this name test requested state started routes test vcap me last uploaded mon aug utc stack buildpacks name version detect output buildpack name nodejs buildpack nodejs nodejs type web sidecars instances memory usage start command npm start state since cpu memory disk details running of of given i have the same app with the following manifest and manifest yml looks like this yaml applications name my app timeout processes type web health check type http timeout when i cf push then i see the push succeeds with the same output as above dev notes the default timeout is seconds if we re not applying this default properly already we should
1
280,933
8,688,410,273
IssuesEvent
2018-12-03 16:03:36
k-next/kirby
https://api.github.com/repos/k-next/kirby
closed
[Panel] ‘BasicAuth => true’ results in error
priority: low-hanging fruit 🍓 type: bug 🐛
**Describe the bug** When enabling BasicAuth via `BasicAuth => true` in `config.php`, the panel throws the following error after login: ``` Argument 1 passed to Kirby\Toolkit\Str::startsWith() must be of the type string, null given, called in /Users/ros/GIT/web-vav-agentur-2018/www/vendor/getkirby/cms/config/api/authentication.php on line 14 ``` **To Reproduce** Steps to reproduce the behavior: 1. Add `BasicAuth => true` in `config.php` 2. Login to panel 3. See error **Kirby Version** 3.0.0-beta-6.22 **Console output** ``` app.js:formatted:14202 GET XXXXX/api/site?view=panel 500 (Internal Server Error) request @ app.js:formatted:14202 get @ app.js:formatted:14224 get @ app.js:formatted:14278 fetch @ app.js:formatted:13291 created @ app.js:formatted:13286 Ke @ vendor.js:39 t._init @ vendor.js:39 a @ vendor.js:39 Yn @ vendor.js:39 init @ vendor.js:39 h @ vendor.js:39 d @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 (anonymous) @ vendor.js:39 He.t._update @ vendor.js:39 r @ vendor.js:39 un.get @ vendor.js:39 un.run @ vendor.js:39 nn @ vendor.js:39 (anonymous) @ vendor.js:39 ae @ vendor.js:39 app.js:formatted:14469 {status: "error", exception: "TypeError", message: "Argument 1 passed to Kirby\Toolkit\Str::startsWith…irby/cms/config/api/authentication.php on line 14", file: "dor/getkirby/cms/src/Toolkit/Str.php", line: 763, …}code: (...)exception: (...)file: (...)line: (...)message: (...)status: (...)__ob__: Et {value: {…}, dep: ht, vmCount: 0}get code: ƒ ()set code: ƒ (e)get exception: ƒ ()set exception: ƒ (e)get file: ƒ ()set file: ƒ (e)get line: ƒ ()set line: ƒ (e)get message: ƒ ()set message: ƒ (e)get status: ƒ ()set status: ƒ (e)__proto__: Object Kh.config.onError @ app.js:formatted:14469 (anonymous) @ app.js:formatted:14216 Promise.catch (async) request @ app.js:formatted:14213 get @ app.js:formatted:14224 get @ app.js:formatted:14278 fetch @ app.js:formatted:13291 created @ app.js:formatted:13286 Ke @ vendor.js:39 t._init @ 
vendor.js:39 a @ vendor.js:39 Yn @ vendor.js:39 init @ vendor.js:39 h @ vendor.js:39 d @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 (anonymous) @ vendor.js:39 He.t._update @ vendor.js:39 r @ vendor.js:39 un.get @ vendor.js:39 un.run @ vendor.js:39 nn @ vendor.js:39 (anonymous) @ vendor.js:39 ae @ vendor.js:39 ``` **Desktop (please complete the following information):** - OS: OSX - Browser: Chrome - Version: Version 70.0.3538.102 (Official Build) (64-bit) **Additional context** Using Valet and a self-signed HTTPS certificate.
1.0
[Panel] ‘BasicAuth => true’ results in error - **Describe the bug** When enabling BasicAuth via `BasicAuth => true` in `config.php`, the panel throws the following error after login: ``` Argument 1 passed to Kirby\Toolkit\Str::startsWith() must be of the type string, null given, called in /Users/ros/GIT/web-vav-agentur-2018/www/vendor/getkirby/cms/config/api/authentication.php on line 14 ``` **To Reproduce** Steps to reproduce the behavior: 1. Add `BasicAuth => true` in `config.php` 2. Login to panel 3. See error **Kirby Version** 3.0.0-beta-6.22 **Console output** ``` app.js:formatted:14202 GET XXXXX/api/site?view=panel 500 (Internal Server Error) request @ app.js:formatted:14202 get @ app.js:formatted:14224 get @ app.js:formatted:14278 fetch @ app.js:formatted:13291 created @ app.js:formatted:13286 Ke @ vendor.js:39 t._init @ vendor.js:39 a @ vendor.js:39 Yn @ vendor.js:39 init @ vendor.js:39 h @ vendor.js:39 d @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 (anonymous) @ vendor.js:39 He.t._update @ vendor.js:39 r @ vendor.js:39 un.get @ vendor.js:39 un.run @ vendor.js:39 nn @ vendor.js:39 (anonymous) @ vendor.js:39 ae @ vendor.js:39 app.js:formatted:14469 {status: "error", exception: "TypeError", message: "Argument 1 passed to Kirby\Toolkit\Str::startsWith…irby/cms/config/api/authentication.php on line 14", file: "dor/getkirby/cms/src/Toolkit/Str.php", line: 763, …}code: (...)exception: (...)file: (...)line: (...)message: (...)status: (...)__ob__: Et {value: {…}, dep: ht, vmCount: 0}get code: ƒ ()set code: ƒ (e)get exception: ƒ ()set exception: ƒ (e)get file: ƒ ()set file: ƒ (e)get line: ƒ ()set line: ƒ (e)get message: ƒ ()set message: ƒ (e)get status: ƒ ()set status: ƒ (e)__proto__: Object Kh.config.onError @ app.js:formatted:14469 (anonymous) @ app.js:formatted:14216 Promise.catch (async) request @ app.js:formatted:14213 get @ app.js:formatted:14224 get @ app.js:formatted:14278 fetch @ app.js:formatted:13291 created @ 
app.js:formatted:13286 Ke @ vendor.js:39 t._init @ vendor.js:39 a @ vendor.js:39 Yn @ vendor.js:39 init @ vendor.js:39 h @ vendor.js:39 d @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 S @ vendor.js:39 E @ vendor.js:39 (anonymous) @ vendor.js:39 He.t._update @ vendor.js:39 r @ vendor.js:39 un.get @ vendor.js:39 un.run @ vendor.js:39 nn @ vendor.js:39 (anonymous) @ vendor.js:39 ae @ vendor.js:39 ``` **Desktop (please complete the following information):** - OS: OSX - Browser: Chrome - Version: Version 70.0.3538.102 (Official Build) (64-bit) **Additional context** Using Valet and a self-signed HTTPS certificate.
non_process
‘basicauth true’ results in error describe the bug when enabling basicauth via basicauth true in config php the panel throws the following error after login argument passed to kirby toolkit str startswith must be of the type string null given called in users ros git web vav agentur www vendor getkirby cms config api authentication php on line to reproduce steps to reproduce the behavior add basicauth true in config php login to panel see error kirby version beta console output app js formatted get xxxxx api site view panel internal server error request app js formatted get app js formatted get app js formatted fetch app js formatted created app js formatted ke vendor js t init vendor js a vendor js yn vendor js init vendor js h vendor js d vendor js s vendor js e vendor js s vendor js e vendor js anonymous vendor js he t update vendor js r vendor js un get vendor js un run vendor js nn vendor js anonymous vendor js ae vendor js app js formatted status error exception typeerror message argument passed to kirby toolkit str startswith…irby cms config api authentication php on line file dor getkirby cms src toolkit str php line  … code exception file line message status ob et  value … dep ht vmcount get code ƒ set code ƒ e get exception ƒ set exception ƒ e get file ƒ set file ƒ e get line ƒ set line ƒ e get message ƒ set message ƒ e get status ƒ set status ƒ e proto object kh config onerror app js formatted anonymous app js formatted promise catch async request app js formatted get app js formatted get app js formatted fetch app js formatted created app js formatted ke vendor js t init vendor js a vendor js yn vendor js init vendor js h vendor js d vendor js s vendor js e vendor js s vendor js e vendor js anonymous vendor js he t update vendor js r vendor js un get vendor js un run vendor js nn vendor js anonymous vendor js ae vendor js desktop please complete the following information os osx browser chrome version version official build bit additional context using 
valet and a self signed https certificate
0
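The Kirby record above traces the crash to `Str::startsWith()` receiving `null` for its string argument (presumably when the Authorization header is absent). As an illustrative sketch only — written in Python rather than Kirby's PHP, with a hypothetical `starts_with` helper standing in for `Str::startsWith()` — the usual shape of such a null-guard fix is:

```python
def starts_with(value, prefix):
    # Hypothetical stand-in for Kirby's Str::startsWith().
    # Guard against a missing (None) header value before delegating to
    # str.startswith — the kind of null-check whose absence the report describes.
    if value is None:
        return False
    return value.startswith(prefix)
```

With a guard like this, a request without an `Authorization` header simply fails the prefix check instead of raising a type error.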
27,339
13,226,814,812
IssuesEvent
2020-08-18 01:08:21
microsoft/STL
https://api.github.com/repos/microsoft/STL
closed
atomic.cpp: Spinlock powering shared_ptr atomics can lead to priority inversion
bug performance
**Describe the bug** shared_ptr's atomic functions (e.g. `std::atomic_store`) are powered by an external lock present in our separately compiled machinery, `_Lock_shared_ptr_spin_lock` and `_Unlock_shared_ptr_spin_lock`. This was written to use a plain spin lock with no mitigation to go to sleep if spinning is taking too long, nor is there any mitigation for memory bandwidth consumption. https://github.com/microsoft/STL/blob/aa0a7a3d859ade0f6f1ff13aa4ef74b3d5ce2326/stl/src/atomic.cpp#L13-L34 In single threaded scenarios this is particularly bad when a low priority thread currently holds the spinlock, and a high priority thread spins "effectively forever". In an ABI breaking release, it would be nice to reuse the low order bit of the reference count control block pointer in the shared_ptr; but even without an ABI breaking release we could do better by replacing the spinlock entirely with something like `SRWLOCK` on Vista and later, implementing exponential backoff, or relying on C++20 `std::atomic` waiting features once we have those implemented. Also tracked by Developer Community as DevCom-716238 and Microsoft-internal VSO-975564.
True
atomic.cpp: Spinlock powering shared_ptr atomics can lead to priority inversion - **Describe the bug** shared_ptr's atomic functions (e.g. `std::atomic_store`) are powered by an external lock present in our separately compiled machinery, `_Lock_shared_ptr_spin_lock` and `_Unlock_shared_ptr_spin_lock`. This was written to use a plain spin lock with no mitigation to go to sleep if spinning is taking too long, nor is there any mitigation for memory bandwidth consumption. https://github.com/microsoft/STL/blob/aa0a7a3d859ade0f6f1ff13aa4ef74b3d5ce2326/stl/src/atomic.cpp#L13-L34 In single threaded scenarios this is particularly bad when a low priority thread currently holds the spinlock, and a high priority thread spins "effectively forever". In an ABI breaking release, it would be nice to reuse the low order bit of the reference count control block pointer in the shared_ptr; but even without an ABI breaking release we could do better by replacing the spinlock entirely with something like `SRWLOCK` on Vista and later, implementing exponential backoff, or relying on C++20 `std::atomic` waiting features once we have those implemented. Also tracked by Developer Community as DevCom-716238 and Microsoft-internal VSO-975564.
non_process
atomic cpp spinlock powering shared ptr atomics can lead to priority inversion describe the bug shared ptr s atomic functions e g std atomic store are powered by an external lock present in our separately compiled machinery lock shared ptr spin lock and unlock shared ptr spin lock this was written to use a plain spin lock with no mitigation to go to sleep if spinning is taking too long nor is there any mitigation for memory bandwidth consumption in single threaded scenarios this is particularly bad when a low priority thread currently holds the spinlock and a high priority thread spins effectively forever in an abi breaking release it would be nice to reuse the low order bit of the reference count control block pointer in the shared ptr but even without an abi breaking release we could do better by replacing the spinlock entirely with something like srwlock on vista and later implementing exponential backoff or relying on c std atomic waiting features once we have those implemented also tracked by developer community as devcom and microsoft internal vso
0
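One mitigation the STL record proposes — exponential backoff instead of pure spinning — can be sketched in a few lines. This is Python standing in for the C++ machinery in `atomic.cpp` (a `threading.Lock` models the atomic exchange; none of this is the actual STL code, just the idea of yielding the CPU with a growing, capped delay so a high-priority waiter cannot starve a low-priority holder as badly):

```python
import threading
import time

class BackoffSpinLock:
    """Test-and-set spin lock that backs off exponentially instead of
    busy-waiting forever (one mitigation suggested in the report)."""

    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()  # models the hardware atomic exchange

    def _try_acquire(self):
        with self._guard:
            if not self._flag:
                self._flag = True
                return True
        return False

    def acquire(self):
        delay = 1e-6
        while not self._try_acquire():
            time.sleep(delay)               # yield the CPU instead of spinning
            delay = min(delay * 2, 1e-3)    # exponential backoff, capped at 1 ms

    def release(self):
        with self._guard:
            self._flag = False

def count_with_lock(n_threads, n_iters):
    """Increment a shared counter under the lock; returns the final count."""
    lock = BackoffSpinLock()
    state = {"count": 0}

    def work():
        for _ in range(n_iters):
            lock.acquire()
            state["count"] += 1
            lock.release()

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]
```

The sleep is what distinguishes this from the plain spin in `_Lock_shared_ptr_spin_lock`: a waiter that keeps losing the race eventually parks itself, giving the scheduler room to run the lock holder.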
2,966
5,960,645,710
IssuesEvent
2017-05-29 14:40:00
orbardugo/Hahot-Hameshulash
https://api.github.com/repos/orbardugo/Hahot-Hameshulash
closed
create a form that presents the results for the queries.
difficulty 3 in process Or priorty 1 requirement
Or will create a form that presents the results for the queries. On this form will be a list of persons after the query.
1.0
create a form that presents the results for the queries. - Or will create a form that presents the results for the queries. On this form will be a list of persons after the query.
process
create a form that present the result for the queries or will create a form that present the result for the queries on this form will be list of persons after the query
1
491,249
14,147,688,611
IssuesEvent
2020-11-10 21:13:43
PyTorchLightning/pytorch-lightning
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
closed
Gpu memory leak with self.log on_epoch=True
Logger Priority P0 bug / fix help wanted
pl 1.0.5 Using new logging api I want to log a metric in LightningModule ``` self.log(";;;;;;;;;;;;;;;;;;;", 1, on_step=False, on_epoch=True) ``` This is a dummy example but it is sufficient to add to `LightningModule`'s `training_step` to cause a memory leak on gpu. What could go wrong? We want to log a metric which is not even a cuda tensor. How could it lead to a gpu memory leak? Well thanks to the magic of metric epoch aggregation stuff Let's dig in and take a look at here https://github.com/PyTorchLightning/pytorch-lightning/blob/b3db197b43667ccf0f67a4d0d8093fc866080637/pytorch_lightning/trainer/training_loop.py#L550-L569 Here we run batch, convert `batch_output` to `epoch_end_outputs` if `on_epoch` was set and append `epoch_end_outputs` to `epoch_output` inside `on_train_batch_end` `epoch_output` is defined here https://github.com/PyTorchLightning/pytorch-lightning/blob/b3db197b43667ccf0f67a4d0d8093fc866080637/pytorch_lightning/trainer/training_loop.py#L540 Everything seems normal, but there is a problem inside `batch_output` there is a surprise - loss value stored on gpu. ![image](https://user-images.githubusercontent.com/22998537/98406840-ba17f600-207f-11eb-9661-1535d90612a1.png) I think you can guess by now what could go wrong if we store a lot of separate cuda tensors in a long long `epoch_output` Yeah the gpu memory is going to end and you'll get a famous ``` RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 1; 10.92 GiB total capacity; 9.39 GiB already allocated; 27.38 MiB free; 10.24 GiB reserved in total by PyTorch) ``` Where is the loss appended to output? Here https://github.com/PyTorchLightning/pytorch-lightning/blob/b3db197b43667ccf0f67a4d0d8093fc866080637/pytorch_lightning/trainer/training_loop.py#L396-L427 In the first line we get a pretty `result` without the loss in it, and in line 414 the loss get appended and we start our memory leak chain of events How is it affecting the training? 
It can lead to error only on the first epoch of training. If you've got enough memory to hold a list of gpu losses during the 1st epoch there won't be any exceptions as subsequent epochs will have the same list of losses, if not you'll get it somewhere in the middle of 1st epoch. And of course the more steps you have in an epoch the more memory this list of gpu losses will require as one loss is stored per step Here is the comparison for my task. My gpu could hold 2k steps before memory error With `self.log` ![image](https://user-images.githubusercontent.com/22998537/98408278-f51b2900-2081-11eb-92ae-ceeb80693753.png) Without `self.log` ![image](https://user-images.githubusercontent.com/22998537/98408336-10863400-2082-11eb-97d8-9d00f13c70ca.png) You can see how there is a rapid growth in the first minute in both as the model is loaded and fed the 1st batch. The difference is in subsequent minutes where in the former case the list of losses eats 7gb of gpu memory and leads to crash, and in the latter nothing happens and training goes on Pretty cool how one `self.log` could eat 2 times more gpu memory than actual training process
1.0
Gpu memory leak with self.log on_epoch=True - pl 1.0.5 Using new logging api I want to log a metric in LightningModule ``` self.log(";;;;;;;;;;;;;;;;;;;", 1, on_step=False, on_epoch=True) ``` This is a dummy example but it is sufficient to add to `LightningModule`'s `training_step` to cause a memory leak on gpu. What could go wrong? We want to log a metric which is not even a cuda tensor. How could it lead to a gpu memory leak? Well thanks to the magic of metric epoch aggregation stuff Let's dig in and take a look at here https://github.com/PyTorchLightning/pytorch-lightning/blob/b3db197b43667ccf0f67a4d0d8093fc866080637/pytorch_lightning/trainer/training_loop.py#L550-L569 Here we run batch, convert `batch_output` to `epoch_end_outputs` if `on_epoch` was set and append `epoch_end_outputs` to `epoch_output` inside `on_train_batch_end` `epoch_output` is defined here https://github.com/PyTorchLightning/pytorch-lightning/blob/b3db197b43667ccf0f67a4d0d8093fc866080637/pytorch_lightning/trainer/training_loop.py#L540 Everything seems normal, but there is a problem inside `batch_output` there is a surprise - loss value stored on gpu. ![image](https://user-images.githubusercontent.com/22998537/98406840-ba17f600-207f-11eb-9661-1535d90612a1.png) I think you can guess by now what could go wrong if we store a lot of separate cuda tensors in a long long `epoch_output` Yeah the gpu memory is going to end and you'll get a famous ``` RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 1; 10.92 GiB total capacity; 9.39 GiB already allocated; 27.38 MiB free; 10.24 GiB reserved in total by PyTorch) ``` Where is the loss appended to output? 
Here https://github.com/PyTorchLightning/pytorch-lightning/blob/b3db197b43667ccf0f67a4d0d8093fc866080637/pytorch_lightning/trainer/training_loop.py#L396-L427 In the first line we get a pretty `result` without the loss in it, and in line 414 the loss get appended and we start our memory leak chain of events How is it affecting the training? It can lead to error only on the first epoch of training. If you've got enough memory to hold a list of gpu losses during the 1st epoch there won't be any exceptions as subsequent epochs will have the same list of losses, if not you'll get it somewhere in the middle of 1st epoch. And of course the more steps you have in an epoch the more memory this list of gpu losses will require as one loss is stored per step Here is the comparison for my task. My gpu could hold 2k steps before memory error With `self.log` ![image](https://user-images.githubusercontent.com/22998537/98408278-f51b2900-2081-11eb-92ae-ceeb80693753.png) Without `self.log` ![image](https://user-images.githubusercontent.com/22998537/98408336-10863400-2082-11eb-97d8-9d00f13c70ca.png) You can see how there is a rapid growth in the first minute in both as the model is loaded and fed the 1st batch. The difference is in subsequent minutes where in the former case the list of losses eats 7gb of gpu memory and leads to crash, and in the latter nothing happens and training goes on Pretty cool how one `self.log` could eat 2 times more gpu memory than actual training process
non_process
gpu memory leak with self log on epoch true pl using new logging api i want to log a metric in lightningmodule self log on step false on epoch true this is a dummy example but it is sufficient to add to lightningmodule s training step to cause a memory leak on gpu what could go wrong we want to log a metric which is not even a cuda tensor how could it lead to a gpu memory leak well thanks to the magic of metric epoch aggregation stuff let s dig in and take a look at here here we run batch convert batch output to epoch end outputs if on epoch was set and append epoch end outputs to epoch output inside on train batch end epoch output is defined here everything seems normal but there is a problem inside batch output there is a surprise loss value stored on gpu i think you can guess by now what could go wrong if we store a lot of separate cuda tensors in a long long epoch output yeah the gpu memory is going to end and you ll get a famous runtimeerror cuda out of memory tried to allocate mib gpu gib total capacity gib already allocated mib free gib reserved in total by pytorch where is the loss appended to output here in the first line we get a pretty result without the loss in it and in line the loss get appended and we start our memory leak chain of events how is it affecting the training it can lead to error only on the first epoch of training if you ve got enough memory to hold a list of gpu losses during the epoch there won t be any exceptions as subsequent epochs will have the same list of losses if not you ll get it somewhere in the middle of epoch and of course the more steps you have in an epoch the more memory this list of gpu losses will require as one loss is stored per step here is the comparison for my task my gpu could hold steps before memory error with self log without self log you can see how there is a rapid growth in the first minute in both as the model is loaded and feeded the batch the difference is in subsequent minutes where in the former case 
the list of losses eats of gpu memory and leads to crash and in the latter nothing happens and training goes on pretty cool how one self log could eat times more gpu memory more than actual training process
0
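The mechanism this record describes — one loss tensor per step kept alive on the GPU because `epoch_output` holds every step's result until epoch end — can be sketched without PyTorch at all. The sketch below is an assumption-laden stand-in (there are no real CUDA tensors here; `FakeGpuTensor` and `run_epoch` are invented for illustration): it only shows why converting the loss to a host scalar per step keeps the epoch-long list from pinning device memory.

```python
class FakeGpuTensor:
    """Stand-in for a CUDA scalar tensor; .item() copies the value to host."""
    def __init__(self, value):
        self.value = value

    def item(self):
        return self.value

def run_epoch(num_steps, detach_loss):
    """Mimic the training loop appending each step's output to epoch_output.
    Returns how many entries still hold a 'GPU' tensor at epoch end."""
    epoch_output = []
    for _ in range(num_steps):
        loss = FakeGpuTensor(0.5)  # per-step loss living on the "GPU"
        out = {"loss": loss.item() if detach_loss else loss}
        epoch_output.append(out)   # the list grows for the whole epoch
    return sum(isinstance(o["loss"], FakeGpuTensor) for o in epoch_output)
```

With `detach_loss=False` every step leaves a device tensor alive in the list — the accumulation the issue points at; with `detach_loss=True` only plain floats are retained, so device memory stays flat regardless of epoch length.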