Dataset schema (for `stringlengths` columns the stats are min and max string length; for `stringclasses`, the number of distinct values; for numeric columns, min and max):

| column | dtype | stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 744 |
| labels | stringlengths | 4 – 574 |
| body | stringlengths | 9 – 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 188k |
| binary_label | int64 | 0 – 1 |

Sample rows follow, one field per line in schema order.
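Read together, the fields above describe a binary issue-classification dataset (process vs. non_process GitHub issue events). A minimal pandas sketch of filtering such a frame follows; the two rows are illustrative stand-ins shaped like the schema, not real records, and the "process" = 1 mapping is inferred from the sample rows below.

```python
import pandas as pd

# Illustrative stand-in rows shaped like the schema above (not real records).
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "repo": ["qgis/QGIS", "twosigma/beaker-notebook"],
    "action": ["closed", "closed"],
    "label": ["process", "non_process"],
    "binary_label": [1, 0],
})

# binary_label mirrors label: 1 for "process", 0 for "non_process".
process_issues = df[df["binary_label"] == 1]
print(len(process_issues))  # → 1
```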
Unnamed: 0: 6,737
id: 9,872,901,333
type: IssuesEvent
created_at: 2019-06-22 09:16:11
repo: qgis/QGIS
repo_url: https://api.github.com/repos/qgis/QGIS
action: closed
title: Can't define a RASTER_LAYER input parameter type when defining a new @decorator style processing algoritm
labels: Bug Feedback Processing
Author Name: **enrico ferreguti** (@enricofer) Original Redmine Issue: [21789](https://issues.qgis.org/issues/21789) Affected QGIS version: 3.6.0 Redmine category:processing/core --- as from https://github.com/qgis/QGIS-Documentation/blob/c1c228eb8a3ab4c5f72ef10f323070f64dd23056/source/docs/user_manual/processing/scripts.rst Defining a raster input parameter: @alg.input(type=alg.RASTER_LAYER, name='DTM_LAYER', label="Layer DTM") and running the Algorithm I get the following exception: > Processing: Traceback (most recent call last): File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 193, in _create_param make_func = input_type_mapping[type] KeyError: ('RASTER_LAYER',) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/OSGEO4~1/apps/qgis/./python/plugins\processing\script\ScriptEditorDialog.py", line 228, in runAlgorithm exec(self.editor.text(), _locals) File "", line 16, in File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 456, in input self.current.add_input(type, *args, **kwargs) File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 157, in add_input parm = self._create_param(type, **kwargs) File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 195, in _create_param raise ProcessingAlgFactoryException("{} is a invalid input type".format(type)) qgis.processing.algfactory.ProcessingAlgFactoryException: ('RASTER_LAYER',) is a invalid input type >
index: 1.0
Can't define a RASTER_LAYER input parameter type when defining a new @decorator style processing algoritm - Author Name: **enrico ferreguti** (@enricofer) Original Redmine Issue: [21789](https://issues.qgis.org/issues/21789) Affected QGIS version: 3.6.0 Redmine category:processing/core --- as from https://github.com/qgis/QGIS-Documentation/blob/c1c228eb8a3ab4c5f72ef10f323070f64dd23056/source/docs/user_manual/processing/scripts.rst Defining a raster input parameter: @alg.input(type=alg.RASTER_LAYER, name='DTM_LAYER', label="Layer DTM") and running the Algorithm I get the following exception: > Processing: Traceback (most recent call last): File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 193, in _create_param make_func = input_type_mapping[type] KeyError: ('RASTER_LAYER',) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/OSGEO4~1/apps/qgis/./python/plugins\processing\script\ScriptEditorDialog.py", line 228, in runAlgorithm exec(self.editor.text(), _locals) File "", line 16, in File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 456, in input self.current.add_input(type, *args, **kwargs) File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 157, in add_input parm = self._create_param(type, **kwargs) File "C:/OSGEO4~1/apps/qgis/./python\qgis\processing\algfactory.py", line 195, in _create_param raise ProcessingAlgFactoryException("{} is a invalid input type".format(type)) qgis.processing.algfactory.ProcessingAlgFactoryException: ('RASTER_LAYER',) is a invalid input type >
label: process
can t define a raster layer input parameter type when defining a new decorator style processing algoritm author name enrico ferreguti enricofer original redmine issue affected qgis version redmine category processing core as from defining a raster input parameter alg input type alg raster layer name dtm layer label layer dtm and running the algorithm i get the following exception processing traceback most recent call last file c apps qgis python qgis processing algfactory py line in create param make func input type mapping keyerror raster layer during handling of the above exception another exception occurred traceback most recent call last file c apps qgis python plugins processing script scripteditordialog py line in runalgorithm exec self editor text locals file line in file c apps qgis python qgis processing algfactory py line in input self current add input type args kwargs file c apps qgis python qgis processing algfactory py line in add input parm self create param type kwargs file c apps qgis python qgis processing algfactory py line in create param raise processingalgfactoryexception is a invalid input type format type qgis processing algfactory processingalgfactoryexception raster layer is a invalid input type
binary_label: 1
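In this row (and the ones below), `text_combine` appears to be the title and body joined with " - ". A small sketch of that assumed construction, using abbreviated stand-in strings rather than the full field contents:

```python
# Abbreviated stand-ins for a row's title and body (not the full text).
title = "Can't define a RASTER_LAYER input parameter type"
body = "Author Name: **enrico ferreguti** (@enricofer) ..."

# Assumed rule observed in the sample rows: text_combine = title + " - " + body.
text_combine = f"{title} - {body}"
print(text_combine.startswith(title))  # → True
```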
Unnamed: 0: 58,981
id: 24,617,645,331
type: IssuesEvent
created_at: 2022-10-15 14:27:16
repo: twitter-dart/twitter-api-v2
repo_url: https://api.github.com/repos/twitter-dart/twitter-api-v2
action: closed
title: Deprecate `connectVolumeStream` and rename to `connectSampleStream`
labels: enhancement service/Tweets
<!-- When reporting a improvement, please read this complete template and fill all the questions in order to get a better response --> # 1. What could be improved <!-- What part of the code/functionality could be improved? --> The name `Volume Stream` does not describe the endpoint, but rather the characteristics of the Stream. In fact, if future additions such as `Sample10Stream`, which retrieves 10% of the tweet data, are added, the name `Volume Stream` seems inappropriate. # 2. Why should this be improved <!-- Why is this necessary to be improved? --> # 3. Any risks? <!-- Are there any risks in improving this? Will the API change? Will other functionality change? --> # 4. More information Related to #473 <!-- Do you have any other useful information about this improvement report? Please write it down here --> <!-- Possible helpful information: references to other sites/repositories --> <!-- Are you interested in working on a PR for this? -->
index: 1.0
Deprecate `connectVolumeStream` and rename to `connectSampleStream` - <!-- When reporting a improvement, please read this complete template and fill all the questions in order to get a better response --> # 1. What could be improved <!-- What part of the code/functionality could be improved? --> The name `Volume Stream` does not describe the endpoint, but rather the characteristics of the Stream. In fact, if future additions such as `Sample10Stream`, which retrieves 10% of the tweet data, are added, the name `Volume Stream` seems inappropriate. # 2. Why should this be improved <!-- Why is this necessary to be improved? --> # 3. Any risks? <!-- Are there any risks in improving this? Will the API change? Will other functionality change? --> # 4. More information Related to #473 <!-- Do you have any other useful information about this improvement report? Please write it down here --> <!-- Possible helpful information: references to other sites/repositories --> <!-- Are you interested in working on a PR for this? -->
label: non_process
deprecate connectvolumestream and rename to connectsamplestream what could be improved the name volume stream does not describe the endpoint but rather the characteristics of the stream in fact if future additions such as which retrieves of the tweet data are added the name volume stream seems inappropriate why should this be improved any risks more information related to
binary_label: 0
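The `text` column looks like a normalized copy of `text_combine`: lowercased, with punctuation, digits, and URLs stripped and whitespace collapsed. A rough approximation of that cleaning step (an assumption; the actual pipeline evidently drops more, e.g. whole URLs and version numbers):

```python
import re

def normalize(s: str) -> str:
    """Rough approximation of the dataset's text cleaning:
    keep only ASCII letters, lowercase, collapse whitespace."""
    s = re.sub(r"[^A-Za-z]+", " ", s)
    return " ".join(s.lower().split())

print(normalize("Deprecate `connectVolumeStream` and rename!"))
# → deprecate connectvolumestream and rename
```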
Unnamed: 0: 19,014
id: 25,014,905,557
type: IssuesEvent
created_at: 2022-11-03 17:57:07
repo: python/cpython
repo_url: https://api.github.com/repos/python/cpython
action: closed
title: multiprocessing: Process.join() should emit a warning if the process is killed by a signal
labels: stdlib 3.9 expert-multiprocessing
BPO | [39941](https://bugs.python.org/issue39941) --- | :--- Nosy | @vstinner <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = None closed_at = None created_at = <Date 2020-03-11.23:10:56.414> labels = ['library', '3.9'] title = 'multiprocessing: Process.join() should emit a warning if the process is killed by a signal' updated_at = <Date 2020-03-11.23:10:56.414> user = 'https://github.com/vstinner' ``` bugs.python.org fields: ```python activity = <Date 2020-03-11.23:10:56.414> actor = 'vstinner' assignee = 'none' closed = False closed_date = None closer = None components = ['Library (Lib)'] creation = <Date 2020-03-11.23:10:56.414> creator = 'vstinner' dependencies = [] files = [] hgrepos = [] issue_num = 39941 keywords = [] message_count = 1.0 messages = ['363980'] nosy_count = 1.0 nosy_names = ['vstinner'] pr_nums = [] priority = 'normal' resolution = None stage = None status = 'open' superseder = None type = None url = 'https://bugs.python.org/issue39941' versions = ['Python 3.9'] ``` </p></details>
index: 1.0
multiprocessing: Process.join() should emit a warning if the process is killed by a signal - BPO | [39941](https://bugs.python.org/issue39941) --- | :--- Nosy | @vstinner <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = None closed_at = None created_at = <Date 2020-03-11.23:10:56.414> labels = ['library', '3.9'] title = 'multiprocessing: Process.join() should emit a warning if the process is killed by a signal' updated_at = <Date 2020-03-11.23:10:56.414> user = 'https://github.com/vstinner' ``` bugs.python.org fields: ```python activity = <Date 2020-03-11.23:10:56.414> actor = 'vstinner' assignee = 'none' closed = False closed_date = None closer = None components = ['Library (Lib)'] creation = <Date 2020-03-11.23:10:56.414> creator = 'vstinner' dependencies = [] files = [] hgrepos = [] issue_num = 39941 keywords = [] message_count = 1.0 messages = ['363980'] nosy_count = 1.0 nosy_names = ['vstinner'] pr_nums = [] priority = 'normal' resolution = None stage = None status = 'open' superseder = None type = None url = 'https://bugs.python.org/issue39941' versions = ['Python 3.9'] ``` </p></details>
label: process
multiprocessing process join should emit a warning if the process is killed by a signal bpo nosy vstinner note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title multiprocessing process join should emit a warning if the process is killed by a signal updated at user bugs python org fields python activity actor vstinner assignee none closed false closed date none closer none components creation creator vstinner dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage none status open superseder none type none url versions
binary_label: 1
Unnamed: 0: 2,393
id: 5,190,384,424
type: IssuesEvent
created_at: 2017-01-21 08:09:41
repo: AllenFang/react-bootstrap-table
repo_url: https://api.github.com/repos/AllenFang/react-bootstrap-table
action: closed
title: Expand row broken if configure selectRow and expandBy column
labels: bug inprocess
Following code will fail ```js render() { const options = { expandRowBgColor: 'rgb(242, 255, 163)', expandBy: 'column' // Currently, available value is row and column, default is row }; const selectRow = { mode: 'checkbox', clickToSelect: false, // click to select, default is false clickToExpand: true // click to expand row, default is false }; return ( <BootstrapTable data={ products } options={ options } selectRow={ selectRow } expandableRow={ this.isExpandableRow } expandComponent={ this.expandComponent } search> <TableHeaderColumn dataField='id' isKey={ true }>Product ID</TableHeaderColumn> <TableHeaderColumn dataField='name' expandable={ false }>Product Name</TableHeaderColumn> <TableHeaderColumn dataField='price' expandable={ false }>Product Price</TableHeaderColumn> </BootstrapTable> ); } ```
index: 1.0
Expand row broken if configure selectRow and expandBy column - Following code will fail ```js render() { const options = { expandRowBgColor: 'rgb(242, 255, 163)', expandBy: 'column' // Currently, available value is row and column, default is row }; const selectRow = { mode: 'checkbox', clickToSelect: false, // click to select, default is false clickToExpand: true // click to expand row, default is false }; return ( <BootstrapTable data={ products } options={ options } selectRow={ selectRow } expandableRow={ this.isExpandableRow } expandComponent={ this.expandComponent } search> <TableHeaderColumn dataField='id' isKey={ true }>Product ID</TableHeaderColumn> <TableHeaderColumn dataField='name' expandable={ false }>Product Name</TableHeaderColumn> <TableHeaderColumn dataField='price' expandable={ false }>Product Price</TableHeaderColumn> </BootstrapTable> ); } ```
label: process
expand row broken if configure selectrow and expandby column following code will fail js render const options expandrowbgcolor rgb expandby column currently available value is row and column default is row const selectrow mode checkbox clicktoselect false click to select default is false clicktoexpand true click to expand row default is false return bootstraptable data products options options selectrow selectrow expandablerow this isexpandablerow expandcomponent this expandcomponent search product id product name product price
binary_label: 1
Unnamed: 0: 20,532
id: 4,565,851,261
type: IssuesEvent
created_at: 2016-09-15 03:02:37
repo: twosigma/beaker-notebook
repo_url: https://api.github.com/repos/twosigma/beaker-notebook
action: closed
title: document module loader in p5.js tutorial
labels: Documentation Enhancement
explain where the JS comes from in the initial html cell.
index: 1.0
document module loader in p5.js tutorial - explain where the JS comes from in the initial html cell.
label: non_process
document module loader in js tutorial explain where the js comes from in the initial html cell
binary_label: 0
Unnamed: 0: 19,522
id: 25,833,044,826
type: IssuesEvent
created_at: 2022-12-12 17:24:49
repo: scverse/spatialdata
repo_url: https://api.github.com/repos/scverse/spatialdata
action: opened
title: Remove need for editable install on CI
labels: enhancement CI dev process
We fixed collection of coverage data by codecov in #76 by making the package install in editable mode on CI. Ideally we wouldn't need to do that. To reproduce the failure: ``` pip install . pytest --cov >/dev/null # Redirecting stdout so we only see stderr ``` ``` Users/isaac/miniconda3/envs/spatialdata/lib/python3.10/site-packages/coverage/control.py:801: CoverageWarning: No data was collected. (no-data-collected) self._warn("No data was collected.", slug="no-data-collected") ``` For some reason we don't need to do this for anndata, though I vaguely recall it being a problem.
index: 1.0
Remove need for editable install on CI - We fixed collection of coverage data by codecov in #76 by making the package install in editable mode on CI. Ideally we wouldn't need to do that. To reproduce the failure: ``` pip install . pytest --cov >/dev/null # Redirecting stdout so we only see stderr ``` ``` Users/isaac/miniconda3/envs/spatialdata/lib/python3.10/site-packages/coverage/control.py:801: CoverageWarning: No data was collected. (no-data-collected) self._warn("No data was collected.", slug="no-data-collected") ``` For some reason we don't need to do this for anndata, though I vaguely recall it being a problem.
label: process
remove need for editable install on ci we fixed collection of coverage data by codecov in by making the package install in editable mode on ci ideally we wouldn t need to do that to reproduce the failure pip install pytest cov dev null redirecting stdout so we only see stderr users isaac envs spatialdata lib site packages coverage control py coveragewarning no data was collected no data collected self warn no data was collected slug no data collected for some reason we don t need to do this for anndata though i vaguely recall it being a problem
binary_label: 1
Unnamed: 0: 10,797
id: 13,609,208,796
type: IssuesEvent
created_at: 2020-09-23 04:36:13
repo: googleapis/java-logging
repo_url: https://api.github.com/repos/googleapis/java-logging
action: closed
title: Dependency Dashboard
labels: api: logging type: process
This issue contains a list of Renovate updates and their statuses. ## Open These updates have all been created already. Click a checkbox below to force a retry/rebase of any. - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-logging to v1.102.0 - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-parent-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-logging-parent to v1.102.0 - [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10.1.0 - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-grpc-google-cloud-logging-v2-0.x -->deps: update dependency com.google.api.grpc:grpc-google-cloud-logging-v2 to v0.85.0 - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-proto-google-cloud-logging-v2-0.x -->deps: update dependency com.google.api.grpc:proto-google-cloud-logging-v2 to v0.85.0 - [ ] <!-- rebase-branch=renovate/org.easymock-easymock-4.x -->deps: update dependency org.easymock:easymock to v4 - [ ] <!-- rebase-branch=renovate/org.objenesis-objenesis-3.x -->deps: update dependency org.objenesis:objenesis to v3 - [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once** --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
index: 1.0
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses. ## Open These updates have all been created already. Click a checkbox below to force a retry/rebase of any. - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-logging to v1.102.0 - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-parent-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-logging-parent to v1.102.0 - [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10.1.0 - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-grpc-google-cloud-logging-v2-0.x -->deps: update dependency com.google.api.grpc:grpc-google-cloud-logging-v2 to v0.85.0 - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-proto-google-cloud-logging-v2-0.x -->deps: update dependency com.google.api.grpc:proto-google-cloud-logging-v2 to v0.85.0 - [ ] <!-- rebase-branch=renovate/org.easymock-easymock-4.x -->deps: update dependency org.easymock:easymock to v4 - [ ] <!-- rebase-branch=renovate/org.objenesis-objenesis-3.x -->deps: update dependency org.objenesis:objenesis to v3 - [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once** --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
label: process
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency com google cloud google cloud logging to chore deps update dependency com google cloud google cloud logging parent to chore deps update dependency com google cloud libraries bom to deps update dependency com google api grpc grpc google cloud logging to deps update dependency com google api grpc proto google cloud logging to deps update dependency org easymock easymock to deps update dependency org objenesis objenesis to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository
binary_label: 1
Unnamed: 0: 294,127
id: 25,347,577,210
type: IssuesEvent
created_at: 2022-11-19 11:35:50
repo: cockroachdb/cockroach
repo_url: https://api.github.com/repos/cockroachdb/cockroach
action: opened
title: sql/parser: TestParseDatadriven failed
labels: C-test-failure O-robot branch-release-22.2
sql/parser.TestParseDatadriven [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7600198?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7600198?buildTab=artifacts#/) on release-22.2 @ [022d19ab6c1c6f6dd477d5120b4c47a46f744238](https://github.com/cockroachdb/cockroach/commits/022d19ab6c1c6f6dd477d5120b4c47a46f744238): ``` /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/709/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/parser/parser_test_/parser_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/parser/testdata/create_function:216: error [0 args] CREATE OR REPLACE FUNCTION f(INOUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ---- at or near "inout": syntax error: unimplemented: this syntax DETAIL: source SQL: CREATE OR REPLACE FUNCTION f(INOUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ^ HINT: You have attempted to use a feature that is not yet implemented. Please check the public issue tracker to check whether this problem is already tracked. If you cannot find it there, please report the error with details by creating a new issue. If you would rather not post publicly, please contact us directly using the support form. We appreciate your feedback. 
parse_test.go:39: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/709/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/parser/parser_test_/parser_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/parser/testdata/create_function:237: error [0 args] CREATE OR REPLACE FUNCTION f(IN OUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ---- at or near "out": syntax error: unimplemented: this syntax DETAIL: source SQL: CREATE OR REPLACE FUNCTION f(IN OUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ^ HINT: You have attempted to use a feature that is not yet implemented. Please check the public issue tracker to check whether this problem is already tracked. If you cannot find it there, please report the error with details by creating a new issue. If you would rather not post publicly, please contact us directly using the support form. We appreciate your feedback. parse_test.go:39: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/709/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/parser/parser_test_/parser_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/parser/testdata/create_function:258: CREATE OR REPLACE FUNCTION f(VARIADIC a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL output didn't match expected: @@ -1,7 +1,7 @@ at or near "variadic": syntax error: unimplemented: this syntax DETAIL: source SQL: CREATE OR REPLACE FUNCTION f(VARIADIC a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ^ HINT: You have attempted to use a feature that is not yet implemented. 
-See: https://go.crdb.dev/issue-v/88947/v22.2 +See: https://go.crdb.dev/issue-v/88947/dev --- FAIL: TestParseDatadriven/create_function (0.00s) ``` <p>Parameters: <code>TAGS=bazel,gss,deadlock</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #92053 sql/parser: TestParseDatadriven failed [C-test-failure O-robot T-sql-experience branch-master] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestParseDatadriven.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
index: 1.0
sql/parser: TestParseDatadriven failed - sql/parser.TestParseDatadriven [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7600198?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7600198?buildTab=artifacts#/) on release-22.2 @ [022d19ab6c1c6f6dd477d5120b4c47a46f744238](https://github.com/cockroachdb/cockroach/commits/022d19ab6c1c6f6dd477d5120b4c47a46f744238): ``` /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/709/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/parser/parser_test_/parser_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/parser/testdata/create_function:216: error [0 args] CREATE OR REPLACE FUNCTION f(INOUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ---- at or near "inout": syntax error: unimplemented: this syntax DETAIL: source SQL: CREATE OR REPLACE FUNCTION f(INOUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ^ HINT: You have attempted to use a feature that is not yet implemented. Please check the public issue tracker to check whether this problem is already tracked. If you cannot find it there, please report the error with details by creating a new issue. If you would rather not post publicly, please contact us directly using the support form. We appreciate your feedback. 
parse_test.go:39: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/709/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/parser/parser_test_/parser_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/parser/testdata/create_function:237: error [0 args] CREATE OR REPLACE FUNCTION f(IN OUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ---- at or near "out": syntax error: unimplemented: this syntax DETAIL: source SQL: CREATE OR REPLACE FUNCTION f(IN OUT a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ^ HINT: You have attempted to use a feature that is not yet implemented. Please check the public issue tracker to check whether this problem is already tracked. If you cannot find it there, please report the error with details by creating a new issue. If you would rather not post publicly, please contact us directly using the support form. We appreciate your feedback. parse_test.go:39: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/709/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/parser/parser_test_/parser_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/parser/testdata/create_function:258: CREATE OR REPLACE FUNCTION f(VARIADIC a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL output didn't match expected: @@ -1,7 +1,7 @@ at or near "variadic": syntax error: unimplemented: this syntax DETAIL: source SQL: CREATE OR REPLACE FUNCTION f(VARIADIC a int = 7) RETURNS INT AS 'SELECT 1' LANGUAGE SQL ^ HINT: You have attempted to use a feature that is not yet implemented. 
-See: https://go.crdb.dev/issue-v/88947/v22.2 +See: https://go.crdb.dev/issue-v/88947/dev --- FAIL: TestParseDatadriven/create_function (0.00s) ``` <p>Parameters: <code>TAGS=bazel,gss,deadlock</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #92053 sql/parser: TestParseDatadriven failed [C-test-failure O-robot T-sql-experience branch-master] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestParseDatadriven.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
label: non_process
sql parser testparsedatadriven failed sql parser testparsedatadriven with on release home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg sql parser parser test parser test runfiles com github cockroachdb cockroach pkg sql parser testdata create function error create or replace function f inout a int returns int as select language sql at or near inout syntax error unimplemented this syntax detail source sql create or replace function f inout a int returns int as select language sql hint you have attempted to use a feature that is not yet implemented please check the public issue tracker to check whether this problem is already tracked if you cannot find it there please report the error with details by creating a new issue if you would rather not post publicly please contact us directly using the support form we appreciate your feedback parse test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg sql parser parser test parser test runfiles com github cockroachdb cockroach pkg sql parser testdata create function error create or replace function f in out a int returns int as select language sql at or near out syntax error unimplemented this syntax detail source sql create or replace function f in out a int returns int as select language sql hint you have attempted to use a feature that is not yet implemented please check the public issue tracker to check whether this problem is already tracked if you cannot find it there please report the error with details by creating a new issue if you would rather not post publicly please contact us directly using the support form we appreciate your feedback parse test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg sql parser parser test parser test runfiles com github 
cockroachdb cockroach pkg sql parser testdata create function create or replace function f variadic a int returns int as select language sql output didn t match expected at or near variadic syntax error unimplemented this syntax detail source sql create or replace function f variadic a int returns int as select language sql hint you have attempted to use a feature that is not yet implemented see see fail testparsedatadriven create function parameters tags bazel gss deadlock help see also same failure on other branches sql parser testparsedatadriven failed cc cockroachdb sql experience
binary_label: 0
Unnamed: 0: 12,720
id: 15,093,587,076
type: IssuesEvent
created_at: 2021-02-07 01:24:18
repo: Maximus5/ConEmu
repo_url: https://api.github.com/repos/Maximus5/ConEmu
action: closed
title: Start-Process with "-new_console" throws an error
labels: processes
### Versions ConEmu build: 210202 x64 OS version: Windows 10 x64 Used shell version: powershell ### Problem description Calling "Start-Process" from powershell throws an error if "-new_console" argument presents **210128**: problem is not reproduced, everything works as excepted **210202**: problem is reproduced as described below ### Steps to reproduce 1. Create new console with the command "powershell -NoLogo -NoExit -ExecutionPolicy Bypass" 2. Command `Start-Process "ping" -ArgumentList @("-n", "1", "127.0.0.1")` 3. Command `Start-Process "ping" -ArgumentList @("-n", "1", "127.0.0.1", "-new_console")` ### Actual results 1. Standard powershell prompt displayed 2. New console opened with this output: ```Pinging 127.0.0.1 with 32 bytes of data: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128 Ping statistics for 127.0.0.1: Packets: Sent = 1, Received = 1, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Press Enter or Esc to exit... ``` 3. New console opened with this output: ```Can't create process, ErrCode=0x00000057, Description: The parameter is incorrect. Current directory: C:\Users\Anmiles Command to be executed: ""C:\WINDOWS\system32\PING.EXE" -n 127.0.0.1 Press Enter or Esc to close console, or wait... ``` ### Expected results 1. Standard powershell prompt displayed 2. New console opened with this output: ```Pinging 127.0.0.1 with 32 bytes of data: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128 Ping statistics for 127.0.0.1: Packets: Sent = 1, Received = 1, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Press Enter or Esc to exit... ``` 3. 
New console opened with this output: ```Pinging 127.0.0.1 with 32 bytes of data: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128 Ping statistics for 127.0.0.1: Packets: Sent = 1, Received = 1, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Press Enter or Esc to exit... ``` ### Additional files [Settings](https://github.com/Maximus5/ConEmu/files/5917217/ConEmu.xml.txt)
1.0
Start-Process with "-new_console" throws an error - ### Versions ConEmu build: 210202 x64 OS version: Windows 10 x64 Used shell version: powershell ### Problem description Calling "Start-Process" from powershell throws an error if "-new_console" argument presents **210128**: problem is not reproduced, everything works as expected **210202**: problem is reproduced as described below ### Steps to reproduce 1. Create new console with the command "powershell -NoLogo -NoExit -ExecutionPolicy Bypass" 2. Command `Start-Process "ping" -ArgumentList @("-n", "1", "127.0.0.1")` 3. Command `Start-Process "ping" -ArgumentList @("-n", "1", "127.0.0.1", "-new_console")` ### Actual results 1. Standard powershell prompt displayed 2. New console opened with this output: ```Pinging 127.0.0.1 with 32 bytes of data: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128 Ping statistics for 127.0.0.1: Packets: Sent = 1, Received = 1, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Press Enter or Esc to exit... ``` 3. New console opened with this output: ```Can't create process, ErrCode=0x00000057, Description: The parameter is incorrect. Current directory: C:\Users\Anmiles Command to be executed: ""C:\WINDOWS\system32\PING.EXE" -n 127.0.0.1 Press Enter or Esc to close console, or wait... ``` ### Expected results 1. Standard powershell prompt displayed 2. New console opened with this output: ```Pinging 127.0.0.1 with 32 bytes of data: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128 Ping statistics for 127.0.0.1: Packets: Sent = 1, Received = 1, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Press Enter or Esc to exit... ``` 3. 
New console opened with this output: ```Pinging 127.0.0.1 with 32 bytes of data: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128 Ping statistics for 127.0.0.1: Packets: Sent = 1, Received = 1, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Press Enter or Esc to exit... ``` ### Additional files [Settings](https://github.com/Maximus5/ConEmu/files/5917217/ConEmu.xml.txt)
process
start process with new console throws an error versions conemu build os version windows used shell version powershell problem description calling start process from powershell throws an error if new console argument presents problem is not reproduced everything works as expected problem is reproduced as described below steps to reproduce create new console with the command powershell nologo noexit executionpolicy bypass command start process ping argumentlist n command start process ping argumentlist n new console actual results standard powershell prompt displayed new console opened with this output pinging with bytes of data reply from bytes time ttl ping statistics for packets sent received lost loss approximate round trip times in milli seconds minimum maximum average press enter or esc to exit new console opened with this output can t create process errcode description the parameter is incorrect current directory c users anmiles command to be executed c windows ping exe n press enter or esc to close console or wait expected results standard powershell prompt displayed new console opened with this output pinging with bytes of data reply from bytes time ttl ping statistics for packets sent received lost loss approximate round trip times in milli seconds minimum maximum average press enter or esc to exit new console opened with this output pinging with bytes of data reply from bytes time ttl ping statistics for packets sent received lost loss approximate round trip times in milli seconds minimum maximum average press enter or esc to exit additional files
1
46
2,513,878,249
IssuesEvent
2015-01-15 04:33:35
GsDevKit/zinc
https://api.github.com/repos/GsDevKit/zinc
closed
review server stack for GemStone/Seaside
inprocess task
The server stack starts with a fork in [ZnSingleThreadedServer>>start][1], then [ZnSingleThreadedServer>>listenLoop][2] initiates an infinite loop for serving requests calling [ZnSingleThreadedServer>>serveConnectionOn:](https://github.com/glassdb/zinc/blob/gemstone2.4/repository/Zinc-HTTP.package/ZnSingleThreadedServer.class/instance/serveConnectionOn..st) which forks off a separate thread to serve each request. Presumably the Seaside adaptor is called during the request handling ... where the transaction boundaries are managed. There are basically four issues to consider: 1. Need to make sure that there is no persistent state being modified in the server stack above the Seaside transaction boundaries. 2. Install error handlers so that server errors can be logged and threads cleanly shutdown. Any unhandled errors in the server stack will terminate the gems. 3. Make [ZnSingleThreadedServer>>start][1] a blocking call. We want the server to be block so the outmost fork in [ZnSingleThreadedServer>>start][1] is unnecessary ... the repeat block in [ZnSingleThreadedServer>>listenLoop][2] is an effective blocking scheme. 4. With FastCGI, we discovered that at high request rates we could blow out memory with unrestricted accept/forks, so we need to probably introduce a limiter of some sort there. Under load testing we should be able to flush out any other issues. [1]: https://github.com/glassdb/zinc/blob/gemstone2.4/repository/Zinc-HTTP.package/ZnSingleThreadedServer.class/instance/start.st [2]: https://github.com/glassdb/zinc/blob/gemstone2.4/repository/Zinc-HTTP.package/ZnSingleThreadedServer.class/instance/listenLoop.st
1.0
review server stack for GemStone/Seaside - The server stack starts with a fork in [ZnSingleThreadedServer>>start][1], then [ZnSingleThreadedServer>>listenLoop][2] initiates an infinite loop for serving requests calling [ZnSingleThreadedServer>>serveConnectionOn:](https://github.com/glassdb/zinc/blob/gemstone2.4/repository/Zinc-HTTP.package/ZnSingleThreadedServer.class/instance/serveConnectionOn..st) which forks off a separate thread to serve each request. Presumably the Seaside adaptor is called during the request handling ... where the transaction boundaries are managed. There are basically four issues to consider: 1. Need to make sure that there is no persistent state being modified in the server stack above the Seaside transaction boundaries. 2. Install error handlers so that server errors can be logged and threads cleanly shutdown. Any unhandled errors in the server stack will terminate the gems. 3. Make [ZnSingleThreadedServer>>start][1] a blocking call. We want the server to be block so the outmost fork in [ZnSingleThreadedServer>>start][1] is unnecessary ... the repeat block in [ZnSingleThreadedServer>>listenLoop][2] is an effective blocking scheme. 4. With FastCGI, we discovered that at high request rates we could blow out memory with unrestricted accept/forks, so we need to probably introduce a limiter of some sort there. Under load testing we should be able to flush out any other issues. [1]: https://github.com/glassdb/zinc/blob/gemstone2.4/repository/Zinc-HTTP.package/ZnSingleThreadedServer.class/instance/start.st [2]: https://github.com/glassdb/zinc/blob/gemstone2.4/repository/Zinc-HTTP.package/ZnSingleThreadedServer.class/instance/listenLoop.st
process
review server stack for gemstone seaside the server stack starts with a fork in then initiates an infinite loop for serving requests calling which forks off a separate thread to serve each request presumably the seaside adaptor is called during the request handling where the transaction boundaries are managed there are basically four issues to consider need to make sure that there is no persistent state being modified in the server stack above the seaside transaction boundaries install error handlers so that server errors can be logged and threads cleanly shutdown any unhandled errors in the server stack will terminate the gems make a blocking call we want the server to be block so the outmost fork in is unnecessary the repeat block in is an effective blocking scheme with fastcgi we discovered that at high request rates we could blow out memory with unrestricted accept forks so we need to probably introduce a limiter of some sort there under load testing we should be able to flush out any other issues
1
17,884
23,840,880,895
IssuesEvent
2022-09-06 10:06:41
python/cpython
https://api.github.com/repos/python/cpython
closed
multiprocessing sentinel resource leak
performance stdlib 3.7 pending expert-multiprocessing
BPO | [26732](https://bugs.python.org/issue26732) --- | :--- Nosy | @pitrou, @vstinner, @applio PRs | <li>python/cpython#2813</li> <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = None closed_at = None created_at = <Date 2016-04-10.21:58:21.521> labels = ['3.7', 'library', 'performance'] title = 'multiprocessing sentinel resource leak' updated_at = <Date 2017-07-31.00:12:15.278> user = 'https://bugs.python.org/quick-b' ``` bugs.python.org fields: ```python activity = <Date 2017-07-31.00:12:15.278> actor = 'quick-b' assignee = 'none' closed = False closed_date = None closer = None components = ['Library (Lib)'] creation = <Date 2016-04-10.21:58:21.521> creator = 'quick-b' dependencies = [] files = [] hgrepos = [] issue_num = 26732 keywords = [] message_count = 13.0 messages = ['263153', '263154', '263282', '298844', '298846', '299051', '299052', '299157', '299174', '299418', '299419', '299424', '299531'] nosy_count = 8.0 nosy_names = ['holdenweb', 'pitrou', 'vstinner', 'jnoller', 'sbt', 'davin', 'Winterflower', 'quick-b'] pr_nums = ['2813'] priority = 'normal' resolution = None stage = 'resolved' status = 'open' superseder = None type = 'resource usage' url = 'https://bugs.python.org/issue26732' versions = ['Python 3.6', 'Python 3.7'] ``` </p></details>
1.0
multiprocessing sentinel resource leak - BPO | [26732](https://bugs.python.org/issue26732) --- | :--- Nosy | @pitrou, @vstinner, @applio PRs | <li>python/cpython#2813</li> <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = None closed_at = None created_at = <Date 2016-04-10.21:58:21.521> labels = ['3.7', 'library', 'performance'] title = 'multiprocessing sentinel resource leak' updated_at = <Date 2017-07-31.00:12:15.278> user = 'https://bugs.python.org/quick-b' ``` bugs.python.org fields: ```python activity = <Date 2017-07-31.00:12:15.278> actor = 'quick-b' assignee = 'none' closed = False closed_date = None closer = None components = ['Library (Lib)'] creation = <Date 2016-04-10.21:58:21.521> creator = 'quick-b' dependencies = [] files = [] hgrepos = [] issue_num = 26732 keywords = [] message_count = 13.0 messages = ['263153', '263154', '263282', '298844', '298846', '299051', '299052', '299157', '299174', '299418', '299419', '299424', '299531'] nosy_count = 8.0 nosy_names = ['holdenweb', 'pitrou', 'vstinner', 'jnoller', 'sbt', 'davin', 'Winterflower', 'quick-b'] pr_nums = ['2813'] priority = 'normal' resolution = None stage = 'resolved' status = 'open' superseder = None type = 'resource usage' url = 'https://bugs.python.org/issue26732' versions = ['Python 3.6', 'Python 3.7'] ``` </p></details>
process
multiprocessing sentinel resource leak bpo nosy pitrou vstinner applio prs python cpython note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title multiprocessing sentinel resource leak updated at user bugs python org fields python activity actor quick b assignee none closed false closed date none closer none components creation creator quick b dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage resolved status open superseder none type resource usage url versions
1
12,455
7,883,545,862
IssuesEvent
2018-06-27 05:46:31
surveyjs/editor
https://api.github.com/repos/surveyjs/editor
closed
DefaultValue for radiogroup not removable
fixed usability issue
### Are you requesting a feature, reporting a bug or ask a question? Feature ### What is the current behavior? If you set the defaultValue for a radiogroup, it seems to be impossible to remove it again. You can only remove it in the JSON Editor. ### What is the expected behavior? I think there should be an option to remove the defaultValue on the modal. ### How would you reproduce the current behavior (if this is a bug)? Set the defaultValue for a radiogroup and try to remove it. #### Provide the test code and the tested page URL (if applicable) Tested page URL: https://surveyjs.io/Survey/Builder/ ### Specify your * browser: Chrome 67 * editor version: 1.0.28 ![image](https://user-images.githubusercontent.com/32272176/41718648-d6a9c49e-755d-11e8-953a-3b95947dd0b0.png)
True
DefaultValue for radiogroup not removable - ### Are you requesting a feature, reporting a bug or ask a question? Feature ### What is the current behavior? If you set the defaultValue for a radiogroup, it seems to be impossible to remove it again. You can only remove it in the JSON Editor. ### What is the expected behavior? I think there should be an option to remove the defaultValue on the modal. ### How would you reproduce the current behavior (if this is a bug)? Set the defaultValue for a radiogroup and try to remove it. #### Provide the test code and the tested page URL (if applicable) Tested page URL: https://surveyjs.io/Survey/Builder/ ### Specify your * browser: Chrome 67 * editor version: 1.0.28 ![image](https://user-images.githubusercontent.com/32272176/41718648-d6a9c49e-755d-11e8-953a-3b95947dd0b0.png)
non_process
defaultvalue for radiogroup not removable are you requesting a feature reporting a bug or ask a question feature what is the current behavior if you set the defaultvalue for a radiogroup it seems to be impossible to remove it again you can only remove it in the json editor what is the expected behavior i think there should be an option to remove the defaultvalue on the modal how would you reproduce the current behavior if this is a bug set the defaultvalue for a radiogroup and try to remove it provide the test code and the tested page url if applicable tested page url specify your browser chrome editor version
0
44,567
11,462,044,439
IssuesEvent
2020-02-07 13:21:32
google/or-tools
https://api.github.com/repos/google/or-tools
closed
Can't apply zlib patch when add_subdirectory(or-tools)
Bug Build: CMake
I'm on ubuntu 18.04LTS cmake version 3.10.2 git version 2.17.1 Steps to reproduce: ```sh git clone https://github.com/google/or-tools.git cd or-tools mkdir build && cd build cmake .. ``` The error I get is the following: ``` -- The CXX compiler identification is GNU 7.4.0 -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- version: 7.3.7089 -- Build C++ library: ON -- Build Python: OFF -- Build Java: OFF -- Build .Net: OFF -- Build all dependencies: OFF -- Build ZLIB: OFF -- Build abseil-cpp: OFF -- Build gflags: OFF -- Build glog: OFF -- Build protobuf: OFF -- Build CoinUtils: OFF -- Build Osi: OFF -- Build Clp: OFF -- Build Cgl: OFF -- Build Cbc: OFF -- Looking for C++ include pthread.h -- Looking for C++ include pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11") CMake Error at cmake/cpp.cmake:11 (find_package): Could not find a package configuration file provided by "absl" with any of the following names: abslConfig.cmake absl-config.cmake Add the installation prefix of "absl" to CMAKE_PREFIX_PATH or set "absl_DIR" to a directory containing one of the above files. If "absl" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): CMakeLists.txt:88 (include) -- Configuring incomplete, errors occurred! ```
1.0
Can't apply zlib patch when add_subdirectory(or-tools) - I'm on ubuntu 18.04LTS cmake version 3.10.2 git version 2.17.1 Steps to reproduce: ```sh git clone https://github.com/google/or-tools.git cd or-tools mkdir build && cd build cmake .. ``` The error I get is the following: ``` -- The CXX compiler identification is GNU 7.4.0 -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- version: 7.3.7089 -- Build C++ library: ON -- Build Python: OFF -- Build Java: OFF -- Build .Net: OFF -- Build all dependencies: OFF -- Build ZLIB: OFF -- Build abseil-cpp: OFF -- Build gflags: OFF -- Build glog: OFF -- Build protobuf: OFF -- Build CoinUtils: OFF -- Build Osi: OFF -- Build Clp: OFF -- Build Cgl: OFF -- Build Cbc: OFF -- Looking for C++ include pthread.h -- Looking for C++ include pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11") CMake Error at cmake/cpp.cmake:11 (find_package): Could not find a package configuration file provided by "absl" with any of the following names: abslConfig.cmake absl-config.cmake Add the installation prefix of "absl" to CMAKE_PREFIX_PATH or set "absl_DIR" to a directory containing one of the above files. If "absl" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): CMakeLists.txt:88 (include) -- Configuring incomplete, errors occurred! ```
non_process
can t apply zlib patch when add subdirectory or tools i m on ubuntu cmake version git version steps to reproduce sh git clone cd or tools mkdir build cd build cmake the error i get is the following the cxx compiler identification is gnu check for working cxx compiler usr bin c check for working cxx compiler usr bin c works detecting cxx compiler abi info detecting cxx compiler abi info done detecting cxx compile features detecting cxx compile features done version build c library on build python off build java off build net off build all dependencies off build zlib off build abseil cpp off build gflags off build glog off build protobuf off build coinutils off build osi off build clp off build cgl off build cbc off looking for c include pthread h looking for c include pthread h found looking for pthread create looking for pthread create not found looking for pthread create in pthreads looking for pthread create in pthreads not found looking for pthread create in pthread looking for pthread create in pthread found found threads true found zlib usr lib linux gnu libz so found version cmake error at cmake cpp cmake find package could not find a package configuration file provided by absl with any of the following names abslconfig cmake absl config cmake add the installation prefix of absl to cmake prefix path or set absl dir to a directory containing one of the above files if absl provides a separate development package or sdk be sure it has been installed call stack most recent call first cmakelists txt include configuring incomplete errors occurred
0
222,832
7,439,459,601
IssuesEvent
2018-03-27 06:38:35
intel-analytics/BigDL
https://api.github.com/repos/intel-analytics/BigDL
closed
Python ImageFrame get more attributes easily
API medium priority python
Python ImageFrame currently can only return image/label/predict, may need to add API to get more attributes easily
1.0
Python ImageFrame get more attributes easily - Python ImageFrame currently can only return image/label/predict, may need to add API to get more attributes easily
non_process
python imageframe get more attributes easily python imageframe currently can only return image label predict may need to add api to get more attributes easily
0
478,769
13,785,184,807
IssuesEvent
2020-10-08 22:18:11
certbot/certbot
https://api.github.com/repos/certbot/certbot
closed
Automatically move Certbot (and DNS snaps) to the stable channel
area: pkging area: snaps area: tooling has pr priority: significant
If we don't want to publish stable DNS snaps yet, I think we should just figure out the mechanism we want to use here and implement it for the Certbot snap. I do not think we accomplish this issue by storing the necessary credentials in CI. I think storing these credentials in CI greatly increases the likelihood they accidentally leak and the potential downside of leaking credentials that can be used to push updates to our users which will be automatically installed and will have unconfined root access to the user's system is far too great in my opinion. Instead, I think we should write a script that does this for both Certbot and the DNS plugin snaps that we can run locally. I'm imagining this building on the script used for https://github.com/certbot/certbot/issues/8049. The script could download the snaps from Azure and reupload them to the snap store, but it's 500MB which can be quite slow to download and reupload. Instead, I think we should let Azure publish the snaps to the edge/beta channel and then locally move them to the stable channel. `snapcraft` subcommands which may be useful here are `status`, `promote`, and `release`. If the Azure build passes, I think we may be guaranteed the snaps have been successfully published but I'm not sure. If not, we could use `snapcraft status` to get the revision numbers, making sure they are for the version of the project you expect, and then using `snapcraft release` to move them to the channel we want. I'm open to other ideas too.
1.0
Automatically move Certbot (and DNS snaps) to the stable channel - If we don't want to publish stable DNS snaps yet, I think we should just figure out the mechanism we want to use here and implement it for the Certbot snap. I do not think we accomplish this issue by storing the necessary credentials in CI. I think storing these credentials in CI greatly increases the likelihood they accidentally leak and the potential downside of leaking credentials that can be used to push updates to our users which will be automatically installed and will have unconfined root access to the user's system is far too great in my opinion. Instead, I think we should write a script that does this for both Certbot and the DNS plugin snaps that we can run locally. I'm imagining this building on the script used for https://github.com/certbot/certbot/issues/8049. The script could download the snaps from Azure and reupload them to the snap store, but it's 500MB which can be quite slow to download and reupload. Instead, I think we should let Azure publish the snaps to the edge/beta channel and then locally move them to the stable channel. `snapcraft` subcommands which may be useful here are `status`, `promote`, and `release`. If the Azure build passes, I think we may be guaranteed the snaps have been successfully published but I'm not sure. If not, we could use `snapcraft status` to get the revision numbers, making sure they are for the version of the project you expect, and then using `snapcraft release` to move them to the channel we want. I'm open to other ideas too.
non_process
automatically move certbot and dns snaps to the stable channel if we don t want to publish stable dns snaps yet i think we should just figure out the mechanism we want to use here and implement it for the certbot snap i do not think we accomplish this issue by storing the necessary credentials in ci i think storing these credentials in ci greatly increases the likelihood they accidentally leak and the potential downside of leaking credentials that can be used to push updates to our users which will be automatically installed and will have unconfined root access to the user s system is far too great in my opinion instead i think we should write a script that does this for both certbot and the dns plugin snaps that we can run locally i m imagining this building on the script used for the script could download the snaps from azure and reupload them to the snap store but it s which can be quite slow to download and reupload instead i think we should let azure publish the snaps to the edge beta channel and then locally move them to the stable channel snapcraft subcommands which may be useful here are status promote and release if the azure build passes i think we may be guaranteed the snaps have been successfully published but i m not sure if not we could use snapcraft status to get the revision numbers making sure they are for the version of the project you expect and then using snapcraft release to move them to the channel we want i m open to other ideas too
0
8,331
4,226,973,775
IssuesEvent
2016-07-02 21:05:03
RicinApp/Ricin
https://api.github.com/repos/RicinApp/Ricin
closed
symbol lookup error: ./lib/libgtk-3.so.0: undefined symbol: g_list_model_get_type
bug build-appimage
I'm on the newest Debian Jessie with the newest installation of tox and Ricin. Installation as instructed on this github page (below Download) On the last step (./ricin-0.1.1.app) I get an error: /tmp/.mount_E462Ym/usr/bin/ricin.wrapper zenity, kdialog, Xdialog missing. Skipping ./bin//ricin.wrapper. /tmp/.mount_E462Ym/usr/bin/ricin: symbol lookup error: ./lib/libgtk-3.so.0: undefined symbol: g_list_model_get_type
1.0
symbol lookup error: ./lib/libgtk-3.so.0: undefined symbol: g_list_model_get_type - I'm on the newest Debian Jessie with the newest installation of tox and Ricin. Installation as instructed on this github page (below Download) On the last step (./ricin-0.1.1.app) I get an error: /tmp/.mount_E462Ym/usr/bin/ricin.wrapper zenity, kdialog, Xdialog missing. Skipping ./bin//ricin.wrapper. /tmp/.mount_E462Ym/usr/bin/ricin: symbol lookup error: ./lib/libgtk-3.so.0: undefined symbol: g_list_model_get_type
non_process
symbol lookup error lib libgtk so undefined symbol g list model get type i m on the newest debian jessie with the newest installation of tox and ricin installation as instructed on this github page below download on the last step ricin app i get an error tmp mount usr bin ricin wrapper zenity kdialog xdialog missing skipping bin ricin wrapper tmp mount usr bin ricin symbol lookup error lib libgtk so undefined symbol g list model get type
0
6,083
7,471,336,713
IssuesEvent
2018-04-03 08:57:10
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
Go log points should include transaction Id
P3 bug runtime serviceability v0.16.x
If you start the fabric network with `CORE_CHAINCODE_LOGGING_LEVEL=DEBUG` on the peer then you are able to control composer runtime logging through the `composer network loglevel` command. However if you don't have this environment variable set or set to debug, then `composer network loglevel` only changes the logging level of Go chaincode and doesn't affect the composer javascript runtime meaning you cannot get any debug about from the main runtime code dynamically. Also we should really include a short TxId, and optionally the composer index in the output so it's possible to tie the flows together within the single log output.
1.0
Go log points should include transaction Id - If you start the fabric network with `CORE_CHAINCODE_LOGGING_LEVEL=DEBUG` on the peer then you are able to control composer runtime logging through the `composer network loglevel` command. However if you don't have this environment variable set or set to debug, then `composer network loglevel` only changes the logging level of Go chaincode and doesn't affect the composer javascript runtime meaning you cannot get any debug about from the main runtime code dynamically. Also we should really include a short TxId, and optionally the composer index in the output so it's possible to tie the flows together within the single log output.
non_process
go log points should include transaction id if you start the fabric network with core chaincode logging level debug on the peer then you are able to control composer runtime logging through the composer network loglevel command however if you don t have this environment variable set or set to debug then composer network loglevel only changes the logging level of go chaincode and doesn t affect the composer javascript runtime meaning you cannot get any debug about from the main runtime code dynamically also we should really include a short txid and optionally the composer index in the output so it s possible to tie the flows together within the single log output
0
6,000
8,808,906,493
IssuesEvent
2018-12-27 16:50:07
linnovate/root
https://api.github.com/repos/linnovate/root
closed
documents tab, office documents from tasks bugs
2.0.6 Fixed Process bug critical
when theres an office document in a task and you try to access it through the documents tab -> office documents from tasks tab it shows the document for a split second and then disappears when you dont have an office document in a task, it just doesnt show a list of documents you can create ![image](https://user-images.githubusercontent.com/38312178/49867332-d74fd580-fe12-11e8-99f5-102456b38dcf.png) but when you access the tab using tasks-> documents->manage documents its shows it just fine in addition, when you click on a document from task from the list it shows the previous document that you used before clicking this one in this example i clicked on "rerewsrewsrs" before clicking on "dagfdshgdsh" ![image](https://user-images.githubusercontent.com/38312178/49867491-60670c80-fe13-11e8-9fd1-f392b9a99249.png)
1.0
documents tab, office documents from tasks bugs - when theres an office document in a task and you try to access it through the documents tab -> office documents from tasks tab it shows the document for a split second and then disappears when you dont have an office document in a task, it just doesnt show a list of documents you can create ![image](https://user-images.githubusercontent.com/38312178/49867332-d74fd580-fe12-11e8-99f5-102456b38dcf.png) but when you access the tab using tasks-> documents->manage documents its shows it just fine in addition, when you click on a document from task from the list it shows the previous document that you used before clicking this one in this example i clicked on "rerewsrewsrs" before clicking on "dagfdshgdsh" ![image](https://user-images.githubusercontent.com/38312178/49867491-60670c80-fe13-11e8-9fd1-f392b9a99249.png)
process
documents tab office documents from tasks bugs when theres an office document in a task and you try to access it through the documents tab office documents from tasks tab it shows the document for a split second and then disappears when you dont have an office document in a task it just doesnt show a list of documents you can create but when you access the tab using tasks documents manage documents its shows it just fine in addition when you click on a document from task from the list it shows the previous document that you used before clicking this one in this example i clicked on rerewsrewsrs before clicking on dagfdshgdsh
1
887
3,350,845,745
IssuesEvent
2015-11-17 16:14:36
g4gaurang/bcbsmaissuestracker
https://api.github.com/repos/g4gaurang/bcbsmaissuestracker
opened
Enhancement Request: 'Recommended Attributes' denotation to be available during Bulk Editing of attributes - BPMS-I-20
Priority-Medium Status- In-Process Type-Enhancement
BCBS would like to request that the 'Recommended Attributes' be applied to bulk editing of attributes as well, as their 'Recommended Attributes' are applied to all doc types. This would be in the Tools > Edit Attributes function. Also related to another request, BPMS-I-18, the ability to apply attributes to multiple docs at once (the same attributes) when uploading new docs. We'd want the Recommended Attributes functionality on this popup as well for bulk loading. Opened request today 11/17. Will update item when this has been included in new sprint for consideration and LOE estimate.
1.0
Enhancement Request: 'Recommended Attributes' denotation to be available during Bulk Editing of attributes - BPMS-I-20 - BCBS would like to request that the 'Recommended Attributes' be applied to bulk editing of attributes as well, as their 'Recommended Attributes' are applied to all doc types. This would be in the Tools > Edit Attributes function. Also related to another request, BPMS-I-18, the ability to apply attributes to multiple docs at once (the same attributes) when uploading new docs. We'd want the Recommended Attributes functionality on this popup as well for bulk loading. Opened request today 11/17. Will update item when this has been included in new sprint for consideration and LOE estimate.
process
enhancement request recommended attributes denotation to be available during bulk editing of attributes bpms i bcbs would like to request that the recommended attributes be applied to bulk editing of attributes as well as their recommended attributes are applied to all doc types this would be in the tools edit attributes function also related to another request bpms i the ability to apply attributes to multiple docs at once the same attributes when uploading new docs we d want the recommended attributes functionality on this popup as well for bulk loading opened request today will update item when this has been included in new sprint for consideration and loe estimate
1
686
3,172,246,054
IssuesEvent
2015-09-23 06:40:34
rakhimov/scram
https://api.github.com/repos/rakhimov/scram
closed
Fault tree preprocessing for BDD
BDD preprocessing ready
The BDD-based fault tree analysis needs a different set and order of application of preprocessing techniques than MOCUS does. The preprocessing strategy should focus on memory and performance considerations of the BDD based approach.
1.0
Fault tree preprocessing for BDD - The BDD-based fault tree analysis needs a different set and order of application of preprocessing techniques than MOCUS does. The preprocessing strategy should focus on memory and performance considerations of the BDD based approach.
process
fault tree preprocessing for bdd the bdd based fault tree analysis needs a different set and order of application of preprocessing techniques than mocus does the preprocessing strategy should focus on memory and performance considerations of the bdd based approach
1
306,270
9,383,193,405
IssuesEvent
2019-04-05 02:06:50
SilleBille/web-portfolio
https://api.github.com/repos/SilleBille/web-portfolio
closed
Fix scroll issue
High Priority bug
When a header is clicked, it navigates to the right section. However, the section loaded is little off, causing the previous section to be highlighted instead of the selected section.
1.0
Fix scroll issue - When a header is clicked, it navigates to the right section. However, the section loaded is little off, causing the previous section to be highlighted instead of the selected section.
non_process
fix scroll issue when a header is clicked it navigates to the right section however the section loaded is little off causing the previous section to be highlighted instead of the selected section
0
16,370
21,082,112,159
IssuesEvent
2022-04-03 03:25:53
vmware-tanzu/sonobuoy-plugins
https://api.github.com/repos/vmware-tanzu/sonobuoy-plugins
closed
Design Doc
Post-Processor
Ideas and brainstorming in #87 but I want to do a better job of planning it all out before starting coding since I don't want this to just be a one off for reach feature. I want there to eventually be a few (not necessarily just one) well scoped values to choose from.
1.0
Design Doc - Ideas and brainstorming in #87 but I want to do a better job of planning it all out before starting coding since I don't want this to just be a one off for reach feature. I want there to eventually be a few (not necessarily just one) well scoped values to choose from.
process
design doc ideas and brainstorming in but i want to do a better job of planning it all out before starting coding since i don t want this to just be a one off for reach feature i want there to eventually be a few not necessarily just one well scoped values to choose from
1
318,779
23,740,494,048
IssuesEvent
2022-08-31 12:01:31
jmosbacher/pydantic-panel
https://api.github.com/repos/jmosbacher/pydantic-panel
closed
Improve docstrings to help users
documentation enhancement good first issue
Docstrings can help users or contributors like me working in VS Code use the package much more efficiently. ## Examples https://github.com/jmosbacher/pydantic-panel/blob/040a55adf788591904b8a117a9cb1ab183169057/pydantic_panel/pane.py#L39 ![image](https://user-images.githubusercontent.com/42288570/181936956-f3c9f85f-f556-44fd-a3c9-ba9dd1887cbd.png) https://github.com/jmosbacher/pydantic-panel/blob/master/pydantic_panel/__init__.py ![image](https://user-images.githubusercontent.com/42288570/181937002-7ef77cc2-fae3-418d-a7a4-2bdde03e5d61.png)
1.0
Improve docstrings to help users - Docstrings can help users or contributors like me working in VS Code use the package much more efficiently. ## Examples https://github.com/jmosbacher/pydantic-panel/blob/040a55adf788591904b8a117a9cb1ab183169057/pydantic_panel/pane.py#L39 ![image](https://user-images.githubusercontent.com/42288570/181936956-f3c9f85f-f556-44fd-a3c9-ba9dd1887cbd.png) https://github.com/jmosbacher/pydantic-panel/blob/master/pydantic_panel/__init__.py ![image](https://user-images.githubusercontent.com/42288570/181937002-7ef77cc2-fae3-418d-a7a4-2bdde03e5d61.png)
non_process
improve docstrings to help users docstrings can help users or contributors like me working in vs code use the package much more efficiently examples
0
3,021
6,027,099,098
IssuesEvent
2017-06-08 13:03:21
openvstorage/framework
https://api.github.com/repos/openvstorage/framework
closed
Extend cluster abm failed
process_wontfix
Hi all. After change parameter to fix error in [https://github.com/openvstorage/framework/issues/1554](url) (should update document install cluster), i run extend cluster. It has error I > n [1]: from ovs.extensions.db.arakoon.ArakoonInstaller import ArakoonInstaller > > In [2]: from subprocess import check_output > > In [3]: > > In [3]: cluster_name = "arakoon_performance_cluster" > > In [4]: master_ip = "10.3.105.137" > > In [5]: new_ip = "10.3.105.138" > > In [6]: base_dir = "/opt/OpenvStorage/db" > > In [7]: current_ips = ["10.3.105.137", "10.3.105.138"] > > In [8]: > > In [8]: ArakoonInstaller.extend_cluster(master_ip, new_ip, cluster_name, base_dir, locked=False) > --------------------------------------------------------------------------- > ArakoonNotFound Traceback (most recent call last) > <ipython-input-8-e52dd84b7965> in <module>() > ----> 1 ArakoonInstaller.extend_cluster(master_ip, new_ip, cluster_name, base_dir, locked=False) > > /opt/OpenvStorage/ovs/extensions/db/arakoon/ArakoonInstaller.pyc in extend_cluster(master_ip, new_ip, cluster_name, base_dir, locked, filesystem, ports) > 434 config = ArakoonClusterConfig(cluster_name, filesystem) > 435 config.load_config(master_ip) > --> 436 cluster_type = ArakoonInstaller.get_arakoon_metadata_by_cluster_name(cluster_name=config.cluster_id, filesystem=filesystem, ip=master_ip)['cluster_type'] > 437 > 438 client = SSHClient(new_ip, username=ArakoonInstaller.SSHCLIENT_USER) > > /opt/OpenvStorage/ovs/extensions/db/arakoon/ArakoonInstaller.pyc in get_arakoon_metadata_by_cluster_name(cluster_name, filesystem, ip) > 617 config.load_config(ip) > 618 arakoon_client = ArakoonInstaller.build_client(config) > --> 619 return json.loads(arakoon_client.get(ArakoonInstaller.METADATA_KEY)) > 620 > 621 @staticmethod > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/client.pyc in new_function(self, *args, **kw) > 42 """ > 43 with self._lock: > ---> 44 return f(self, *args, **kw) > 45 return new_function > 46 return wrap > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/client.pyc in get(self, key, consistency) > 74 Retrieves a certain value for a given key > 75 """ > ---> 76 return PyrakoonClient._try(self._identifier, self._client.get, key, consistency) > 77 > 78 @locked() > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/client.pyc in _try(identifier, method, *args, **kwargs) > 211 start = time.time() > 212 try: > --> 213 return_value = method(*args, **kwargs) > 214 except (ArakoonSockNotReadable, ArakoonSockReadNoBytes, ArakoonSockSendError): > 215 PyrakoonClient._logger.debug('Error during arakoon call {0}, retry'.format(method.__name__)) > > <update_argspec> in get(self, key, consistency) > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.pyc in wrapped(*args, **kwargs) > 150 raise > 151 > --> 152 raise new_exception > 153 > 154 return wrapped > > ArakoonNotFound: 'Not_found' > Thanks.
1.0
Extend cluster abm failed - Hi all. After change parameter to fix error in [https://github.com/openvstorage/framework/issues/1554](url) (should update document install cluster), i run extend cluster. It has error I > n [1]: from ovs.extensions.db.arakoon.ArakoonInstaller import ArakoonInstaller > > In [2]: from subprocess import check_output > > In [3]: > > In [3]: cluster_name = "arakoon_performance_cluster" > > In [4]: master_ip = "10.3.105.137" > > In [5]: new_ip = "10.3.105.138" > > In [6]: base_dir = "/opt/OpenvStorage/db" > > In [7]: current_ips = ["10.3.105.137", "10.3.105.138"] > > In [8]: > > In [8]: ArakoonInstaller.extend_cluster(master_ip, new_ip, cluster_name, base_dir, locked=False) > --------------------------------------------------------------------------- > ArakoonNotFound Traceback (most recent call last) > <ipython-input-8-e52dd84b7965> in <module>() > ----> 1 ArakoonInstaller.extend_cluster(master_ip, new_ip, cluster_name, base_dir, locked=False) > > /opt/OpenvStorage/ovs/extensions/db/arakoon/ArakoonInstaller.pyc in extend_cluster(master_ip, new_ip, cluster_name, base_dir, locked, filesystem, ports) > 434 config = ArakoonClusterConfig(cluster_name, filesystem) > 435 config.load_config(master_ip) > --> 436 cluster_type = ArakoonInstaller.get_arakoon_metadata_by_cluster_name(cluster_name=config.cluster_id, filesystem=filesystem, ip=master_ip)['cluster_type'] > 437 > 438 client = SSHClient(new_ip, username=ArakoonInstaller.SSHCLIENT_USER) > > /opt/OpenvStorage/ovs/extensions/db/arakoon/ArakoonInstaller.pyc in get_arakoon_metadata_by_cluster_name(cluster_name, filesystem, ip) > 617 config.load_config(ip) > 618 arakoon_client = ArakoonInstaller.build_client(config) > --> 619 return json.loads(arakoon_client.get(ArakoonInstaller.METADATA_KEY)) > 620 > 621 @staticmethod > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/client.pyc in new_function(self, *args, **kw) > 42 """ > 43 with self._lock: > ---> 44 return f(self, *args, **kw) > 45 return new_function > 46 return wrap > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/client.pyc in get(self, key, consistency) > 74 Retrieves a certain value for a given key > 75 """ > ---> 76 return PyrakoonClient._try(self._identifier, self._client.get, key, consistency) > 77 > 78 @locked() > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/client.pyc in _try(identifier, method, *args, **kwargs) > 211 start = time.time() > 212 try: > --> 213 return_value = method(*args, **kwargs) > 214 except (ArakoonSockNotReadable, ArakoonSockReadNoBytes, ArakoonSockSendError): > 215 PyrakoonClient._logger.debug('Error during arakoon call {0}, retry'.format(method.__name__)) > > <update_argspec> in get(self, key, consistency) > > /opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.pyc in wrapped(*args, **kwargs) > 150 raise > 151 > --> 152 raise new_exception > 153 > 154 return wrapped > > ArakoonNotFound: 'Not_found' > Thanks.
process
extend cluster abm failed hi all after change parameter to fix error in url should update document install cluster i run extend cluster it has error i n from ovs extensions db arakoon arakooninstaller import arakooninstaller in from subprocess import check output in in cluster name arakoon performance cluster in master ip in new ip in base dir opt openvstorage db in current ips in in arakooninstaller extend cluster master ip new ip cluster name base dir locked false arakoonnotfound traceback most recent call last in arakooninstaller extend cluster master ip new ip cluster name base dir locked false opt openvstorage ovs extensions db arakoon arakooninstaller pyc in extend cluster master ip new ip cluster name base dir locked filesystem ports config arakoonclusterconfig cluster name filesystem config load config master ip cluster type arakooninstaller get arakoon metadata by cluster name cluster name config cluster id filesystem filesystem ip master ip client sshclient new ip username arakooninstaller sshclient user opt openvstorage ovs extensions db arakoon arakooninstaller pyc in get arakoon metadata by cluster name cluster name filesystem ip config load config ip arakoon client arakooninstaller build client config return json loads arakoon client get arakooninstaller metadata key staticmethod opt openvstorage ovs extensions db arakoon pyrakoon client pyc in new function self args kw with self lock return f self args kw return new function return wrap opt openvstorage ovs extensions db arakoon pyrakoon client pyc in get self key consistency retrieves a certain value for a given key return pyrakoonclient try self identifier self client get key consistency locked opt openvstorage ovs extensions db arakoon pyrakoon client pyc in try identifier method args kwargs start time time try return value method args kwargs except arakoonsocknotreadable arakoonsockreadnobytes arakoonsocksenderror pyrakoonclient logger debug error during arakoon call retry format method name in get self key consistency opt openvstorage ovs extensions db arakoon pyrakoon pyrakoon compat pyc in wrapped args kwargs raise raise new exception return wrapped arakoonnotfound not found thanks
1
16,171
20,611,706,737
IssuesEvent
2022-03-07 09:17:47
oasis-tcs/csaf
https://api.github.com/repos/oasis-tcs/csaf
opened
Motion to publish current editor draft as CSD 02
email oasis_tc_process
In release https://github.com/oasis-tcs/csaf/releases/tag/csd-02-20220219-rc1 we provide all artifacts needed and a link to the email on the TC maing list that moves to publish the current editor draft as CSD 02.
1.0
Motion to publish current editor draft as CSD 02 - In release https://github.com/oasis-tcs/csaf/releases/tag/csd-02-20220219-rc1 we provide all artifacts needed and a link to the email on the TC maing list that moves to publish the current editor draft as CSD 02.
process
motion to publish current editor draft as csd in release we provide all artifacts needed and a link to the email on the tc maing list that moves to publish the current editor draft as csd
1
5,934
8,756,779,645
IssuesEvent
2018-12-14 18:54:14
googleapis/google-cloud-node
https://api.github.com/repos/googleapis/google-cloud-node
closed
Eliminate usage of the `async` npm module
help wanted type: process
We almost never have a good reason to be using the `async` npm module, as node 8 includes native async/await support. We need to go through the following repos, and drop the dependency: - [ ] [googleapis/gce-images](https://github.com/googleapis/gce-images) - [ ] [googleapis/nodejs-bigtable](https://github.com/googleapis/nodejs-bigtable) - [ ] [googleapis/nodejs-compute](https://github.com/googleapis/nodejs-compute) - [ ] [googleapis/nodejs-datastore](https://github.com/googleapis/nodejs-datastore) - [ ] [googleapis/nodejs-proto-files](https://github.com/googleapis/nodejs-proto-files) - [ ] [googleapis/nodejs-spanner](https://github.com/googleapis/nodejs-spanner) - [x] [googleapis/nodejs-text-to-speech](https://github.com/googleapis/nodejs-text-to-speech) - [ ] [googleapis/nodejs-vision](https://github.com/googleapis/nodejs-vision)
1.0
Eliminate usage of the `async` npm module - We almost never have a good reason to be using the `async` npm module, as node 8 includes native async/await support. We need to go through the following repos, and drop the dependency: - [ ] [googleapis/gce-images](https://github.com/googleapis/gce-images) - [ ] [googleapis/nodejs-bigtable](https://github.com/googleapis/nodejs-bigtable) - [ ] [googleapis/nodejs-compute](https://github.com/googleapis/nodejs-compute) - [ ] [googleapis/nodejs-datastore](https://github.com/googleapis/nodejs-datastore) - [ ] [googleapis/nodejs-proto-files](https://github.com/googleapis/nodejs-proto-files) - [ ] [googleapis/nodejs-spanner](https://github.com/googleapis/nodejs-spanner) - [x] [googleapis/nodejs-text-to-speech](https://github.com/googleapis/nodejs-text-to-speech) - [ ] [googleapis/nodejs-vision](https://github.com/googleapis/nodejs-vision)
process
eliminate usage of the async npm module we almost never have a good reason to be using the async npm module as node includes native async await support we need to go through the following repos and drop the dependency
1
15,597
19,722,687,561
IssuesEvent
2022-01-13 16:47:39
damb/scdetect
https://api.github.com/repos/damb/scdetect
closed
Alternative magnitude implementation: amplitude ratio between template and detected event
enhancement magnitude processing amplitude
### Discussed in https://github.com/damb/scdetect/discussions/55 <div type='discussions-op-text'> <sup>Originally posted by **mmesim** November 4, 2021</sup> Ideas for magnitude calculation based on what is already available and used in the literature (etc). - **Amplitude ratio between template and detected event** e.g https://www.nature.com/articles/ngeo697 [...]</div> **Details**: - use the approach as an alternative to compute a station magnitude - consider only those streams which are used for detection - the time window length is equal to the length of the template waveform - **Processing**: + the linear trend is removed + the mean is removed + a filter is applied + finally the waveform is trimmed. - > The process is to find the maximum amplitude for each template and the corresponding detection for each station and channel. Then compute the ratio for all these pairs, and take the median.
1.0
Alternative magnitude implementation: amplitude ratio between template and detected event - ### Discussed in https://github.com/damb/scdetect/discussions/55 <div type='discussions-op-text'> <sup>Originally posted by **mmesim** November 4, 2021</sup> Ideas for magnitude calculation based on what is already available and used in the literature (etc). - **Amplitude ratio between template and detected event** e.g https://www.nature.com/articles/ngeo697 [...]</div> **Details**: - use the approach as an alternative to compute a station magnitude - consider only those streams which are used for detection - the time window length is equal to the length of the template waveform - **Processing**: + the linear trend is removed + the mean is removed + a filter is applied + finally the waveform is trimmed. - > The process is to find the maximum amplitude for each template and the corresponding detection for each station and channel. Then compute the ratio for all these pairs, and take the median.
process
alternative magnitude implementation amplitude ratio between template and detected event discussed in originally posted by mmesim november ideas for magnitude calculation based on what is already available and used in the literature etc amplitude ratio between template and detected event e g details use the approach as an alternative to compute a station magnitude consider only those streams which are used for detection the time window length is equal to the length of the template waveform processing the linear trend is removed the mean is removed a filter is applied finally the waveform is trimmed the process is to find the maximum amplitude for each template and the corresponding detection for each station and channel then compute the ratio for all these pairs and take the median
1
96,016
12,077,059,770
IssuesEvent
2020-04-17 08:47:37
witnet/sheikah
https://api.github.com/repos/witnet/sheikah
closed
Design transaction details
design 🖼️ duplicate 👬
With the output of #960 we should be able to design a `transaction` view including the new transaction information. This issue will be resolved when we have sketch a detailed design in Figma.
1.0
Design transaction details - With the output of #960 we should be able to design a `transaction` view including the new transaction information. This issue will be resolved when we have sketch a detailed design in Figma.
non_process
design transaction details with the output of we should be able to design a transaction view including the new transaction information this issue will be resolved when we have sketch a detailed design in figma
0
2,146
2,524,081,785
IssuesEvent
2015-01-20 15:37:53
TranxCraft/iTranxCraft
https://api.github.com/repos/TranxCraft/iTranxCraft
opened
Colour MOTD
Enhancement Low Priority
Like a lot of plugins out there, this needs to be able to colourize the MOTD of the server.
1.0
Colour MOTD - Like a lot of plugins out there, this needs to be able to colourize the MOTD of the server.
non_process
colour motd like a lot of plugins out there this needs to be able to colourize the motd of the server
0
7,872
11,045,177,135
IssuesEvent
2019-12-09 14:40:58
deepset-ai/FARM
https://api.github.com/repos/deepset-ai/FARM
closed
Make it easier to use own metrics, log more metrics
enhancement help wanted part: evaluator part: processor
Currently the metric (for TextClassificationProcessor) needs to get specified as a string and picked from one of the pre-defined ones. That metric will then also get logged during training. It would be very useful if it was possible to specify my own metric definition instead by allowing a function to get passed which takes preds/labels and returns the name and value of the metric. Also it would be very useful to specify a list of metrics so that several metrics can be logged (e.g. log the per-label f1 for classification)
1.0
Make it easier to use own metrics, log more metrics - Currently the metric (for TextClassificationProcessor) needs to get specified as a string and picked from one of the pre-defined ones. That metric will then also get logged during training. It would be very useful if it was possible to specify my own metric definition instead by allowing a function to get passed which takes preds/labels and returns the name and value of the metric. Also it would be very useful to specify a list of metrics so that several metrics can be logged (e.g. log the per-label f1 for classification)
process
make it easier to use own metrics log more metrics currently the metric for textclassificationprocessor needs to get specified as a string and picked from one of the pre defined ones that metric will then also get logged during training it would be very useful if it was possible to specify my own metric definition instead by allowing a function to get passed which takes preds labels and returns the name and value of the metric also it would be very useful to specify a list of metrics so that several metrics can be logged e g log the per label for classification
1
206,744
15,771,956,504
IssuesEvent
2021-03-31 21:10:48
sympy/sympy
https://api.github.com/repos/sympy/sympy
opened
Improve doctest skipping in rst files
Testing
Right now if a doctest needs to be conditionally skipped (because some dependency isn't installed) in an rst file, it is done by adding it to a blacklist in sympy/testing/runtests.py. We should instead use a Sphinx directive to do this. For instance, https://www.sphinx-doc.org/en/master/usage/extensions/doctest.html#skipping-tests-conditionally, or, if that cannot be applied directly, we can make our own Sphinx directive to do it. Another issue is that blacklisted rst files aren't shown as skipped when the doctest runner runs. They just aren't printed at all. This is misleading. Blacklisted files should still be shown as skipped, with a reason, so that we can be aware of which doctests aren't actually being run.
1.0
Improve doctest skipping in rst files - Right now if a doctest needs to be conditionally skipped (because some dependency isn't installed) in an rst file, it is done by adding it to a blacklist in sympy/testing/runtests.py. We should instead use a Sphinx directive to do this. For instance, https://www.sphinx-doc.org/en/master/usage/extensions/doctest.html#skipping-tests-conditionally, or, if that cannot be applied directly, we can make our own Sphinx directive to do it. Another issue is that blacklisted rst files aren't shown as skipped when the doctest runner runs. They just aren't printed at all. This is misleading. Blacklisted files should still be shown as skipped, with a reason, so that we can be aware of which doctests aren't actually being run.
non_process
improve doctest skipping in rst files right now if a doctest needs to be conditionally skipped because some dependency isn t installed in an rst file it is done by adding it to a blacklist in sympy testing runtests py we should instead use a sphinx directive to do this for instance or if that cannot be applied directly we can make our own sphinx directive to do it another issue is that blacklisted rst files aren t shown as skipped when the doctest runner runs they just aren t printed at all this is misleading blacklisted files should still be shown as skipped with a reason so that we can be aware of which doctests aren t actually being run
0
8,915
12,017,601,098
IssuesEvent
2020-04-10 18:46:51
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Multiple Pipelines with same name?
Pri1 devops-cicd-process/tech devops/prod
How do we handle picking the right pipeline if we have more than one with the same name? Is the folder structure supported in the source parameter? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 86285f72-9e28-da97-59bb-c29eb60f627d * Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54 * Content: [Configure pipeline triggers - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/pipeline-triggers.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @ashkir * Microsoft Alias: **ashkir**
1.0
Multiple Pipelines with same name? - How do we handle picking the right pipeline if we have more than one with the same name? Is the folder structure supported in the source parameter? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 86285f72-9e28-da97-59bb-c29eb60f627d * Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54 * Content: [Configure pipeline triggers - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/pipeline-triggers.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @ashkir * Microsoft Alias: **ashkir**
process
multiple pipelines with same name how do we handle picking the right pipeline if we have more than one with the same name is the folder structure supported in the source parameter document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login ashkir microsoft alias ashkir
1
16,172
20,615,833,259
IssuesEvent
2022-03-07 13:10:04
celo-org/celo-monorepo
https://api.github.com/repos/celo-org/celo-monorepo
closed
Investigate and prototype cLab's wide solution for CICD
release-process ci ODIS Component: Identity identity: needs review
The goal is to provide a standard across cLabs for merge and release automation and validation. Output should be a design doc and prototype that is reviewed with the team. Instructions should be provided to allow other teams to easily adopt the pattern for their service or components.
1.0
Investigate and prototype cLab's wide solution for CICD - The goal is to provide a standard across cLabs for merge and release automation and validation. Output should be a design doc and prototype that is reviewed with the team. Instructions should be provided to allow other teams to easily adopt the pattern for their service or components.
process
investigate and prototype clab s wide solution for cicd the goal is to provide a standard across clabs for merge and release automation and validation output should be a design doc and prototype that is reviewed with the team instructions should be provided to allow other teams to easily adopt the pattern for their service or components
1
2,878
8,461,748,442
IssuesEvent
2018-10-22 23:01:16
poanetwork/blockscout
https://api.github.com/repos/poanetwork/blockscout
opened
Geth Clique blocks always mined by 0x000
enhancement priority: high team: architecture
In order to find the miner for Geth Clique blocks, we must use `clique.getSignersAtHash('0x86837f148f096f1ce53463ecb4acf3d99bae0b2f0297dacbb6d33fcfdb7c04d9')`.
1.0
Geth Clique blocks always mined by 0x000 - In order to find the miner for Geth Clique blocks, we must use `clique.getSignersAtHash('0x86837f148f096f1ce53463ecb4acf3d99bae0b2f0297dacbb6d33fcfdb7c04d9')`.
non_process
geth clique blocks always mined by in order to find the miner for geth clique blocks we must use clique getsignersathash
0
73,780
8,925,177,858
IssuesEvent
2019-01-21 21:34:26
Pitastic/TeacherTab
https://api.github.com/repos/Pitastic/TeacherTab
closed
Mobile Design: Buttons Klick
design
Es gibt noch kein optisches Feedback, wenn Buttons in Popups geklickt werden. Gerade bei den Credentials hinderlich, wegen dem leichten Delay...
1.0
Mobile Design: Buttons Klick - Es gibt noch kein optisches Feedback, wenn Buttons in Popups geklickt werden. Gerade bei den Credentials hinderlich, wegen dem leichten Delay...
non_process
mobile design buttons klick es gibt noch kein optisches feedback wenn buttons in popups geklickt werden gerade bei den credentials hinderlich wegen dem leichten delay
0
14,123
16,986,219,849
IssuesEvent
2021-06-30 14:39:52
ankidroid/Anki-Android
https://api.github.com/repos/ankidroid/Anki-Android
closed
Statistics: Review Heatmap
Anki Ecosystem Compatibility Blocked by dependency Keep Open Statistics
Anki 2.1.28 includes a 'Calendar' in the statistics section. We should port this.
True
Statistics: Review Heatmap - Anki 2.1.28 includes a 'Calendar' in the statistics section. We should port this.
non_process
statistics review heatmap anki includes a calendar in the statistics section we should port this
0
55,252
30,655,559,238
IssuesEvent
2023-07-25 11:54:17
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Regressions in System.Collections.Tests.Perf_PriorityQueue<String, String>
arch-arm64 area-System.Collections os-windows tenet-performance tenet-performance-benchmarks
### Run Information Architecture | arm64 -- | -- OS | Windows 10.0.19041 Baseline | [1c442fcb36e73eb3d8be10423a45389ce2a468b9](https://github.com/dotnet/runtime/commit/1c442fcb36e73eb3d8be10423a45389ce2a468b9) Compare | [00c60e41ac459b005338329cd9b5fd5d6652777c](https://github.com/dotnet/runtime/commit/00c60e41ac459b005338329cd9b5fd5d6652777c) Diff | [Diff](https://github.com/dotnet/runtime/compare/1c442fcb36e73eb3d8be10423a45389ce2a468b9...00c60e41ac459b005338329cd9b5fd5d6652777c) ### Regressions in System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt; Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- [K_Max_Elements - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).K_Max_Elements(Size%3a%201000).html>) | 70.17 μs | 82.63 μs | 1.18 | 0.02 | False | | | [Dequeue_And_Enqueue - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).Dequeue_And_Enqueue(Size%3a%201000).html>) | 2.85 ms | 3.29 ms | 1.15 | 0.02 | False | | | [K_Max_Elements - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).K_Max_Elements(Size%3a%20100).html>) | 10.35 μs | 11.54 μs | 1.12 | 0.01 | False | | | [HeapSort - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).HeapSort(Size%3a%201000).html>) | 1.27 ms | 1.45 ms | 1.15 | 0.02 | False | | | 
[HeapSort - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).HeapSort(Size%3a%20100).html>) | 72.07 μs | 87.23 μs | 1.21 | 0.02 | False | | | ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_1.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_2.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_3.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_4.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_5.png>) [Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String).html>) ### Repro ```cmd git clone https://github.com/dotnet/performance.git py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;*' ``` <details> ### Payloads 
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-0451ef7a-9ec1-4934-b223-b84302234506f12022744c24d538e/9542f7dd-da0e-4049-ba8a-7df4066dc8b8.zip?sv=2021-08-06&se=2023-03-02T09%3A54%3A54Z&sr=c&sp=rl&sig=HVgiB2qkObQ4pfGygKmPmVMAwb4Vwoh9Qd3Qv9YJBoE%3D>) [Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-b1d7d280-e912-416a-9084-33ffad3034da7114e904b37469d9e/bce8fae4-8eae-42d7-82f4-54b03ad0be0a.zip?sv=2021-08-06&se=2023-03-03T03%3A11%3A16Z&sr=c&sp=rl&sig=O9caZcFqy3Wg2eCmcHJ6JM5%2FRuUEJEn3u3DxI7x9mjM%3D>) ### Histogram #### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.K_Max_Elements(Size: 1000) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 82.62969568062829 > 73.71343345327863. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/27/2023 3:43:27 AM, 1/28/2023 1:22:52 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -7.234137013079913 (T) = (0 -82369.57625049516) / Math.Sqrt((54869160.877810605 / (35)) + (180256.54069327304 / (3))) is less than -2.028094000977961 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (35) + (3) - 2, .025) and -0.12619128326253404 = (73139.95186667897 - 82369.57625049516) / 73139.95186667897 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.Dequeue_And_Enqueue(Size: 1000) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 3.285729464285714 > 2.896056976963141. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/31/2023 7:06:58 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -30.52488199142272 (T) = (0 -3296149.866071428) / Math.Sqrt((6300529812.300653 / (37)) + (217169546.75143635 / (2))) is less than -2.026192463026769 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (37) + (2) - 2, .025) and -0.1829407419136677 = (2786403.19779601 - 3296149.866071428) / 2786403.19779601 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.K_Max_Elements(Size: 100) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 11.542046740467406 > 10.860039658991148. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -90.68697555313808 (T) = (0 -11555.279937217854) / Math.Sqrt((2902.2537207759206 / (35)) + (288.9071036088506 / (3))) is less than -2.028094000977961 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (35) + (3) - 2, .025) and -0.11740090084789681 = (10341.212297618136 - 11555.279937217854) / 10341.212297618136 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.HeapSort(Size: 1000) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 1.4514014583333332 > 1.3290847425110948. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/27/2023 2:26:56 PM, 1/28/2023 1:22:52 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -20.410784635641335 (T) = (0 -1457461.322150072) / Math.Sqrt((2324897270.9125023 / (35)) + (29162175.036984984 / (3))) is less than -2.028094000977961 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (35) + (3) - 2, .025) and -0.13921752068444956 = (1279352.9731481127 - 1457461.322150072) / 1279352.9731481127 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.HeapSort(Size: 100) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 87.2326541300878 > 75.6770635080645. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -107.64995727089786 (T) = (0 -87085.49901354982) / Math.Sqrt((477391.2996941021 / (36)) + (16703.107219445024 / (3))) is less than -2.026192463026769 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (36) + (3) - 2, .025) and -0.20426772617066333 = (72314.06864191631 - 87085.49901354982) / 72314.06864191631 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ``` ### Docs [Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md) [Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md) </details>
True
Regressions in System.Collections.Tests.Perf_PriorityQueue<String, String> - ### Run Information Architecture | arm64 -- | -- OS | Windows 10.0.19041 Baseline | [1c442fcb36e73eb3d8be10423a45389ce2a468b9](https://github.com/dotnet/runtime/commit/1c442fcb36e73eb3d8be10423a45389ce2a468b9) Compare | [00c60e41ac459b005338329cd9b5fd5d6652777c](https://github.com/dotnet/runtime/commit/00c60e41ac459b005338329cd9b5fd5d6652777c) Diff | [Diff](https://github.com/dotnet/runtime/compare/1c442fcb36e73eb3d8be10423a45389ce2a468b9...00c60e41ac459b005338329cd9b5fd5d6652777c) ### Regressions in System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt; Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- [K_Max_Elements - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).K_Max_Elements(Size%3a%201000).html>) | 70.17 μs | 82.63 μs | 1.18 | 0.02 | False | | | [Dequeue_And_Enqueue - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).Dequeue_And_Enqueue(Size%3a%201000).html>) | 2.85 ms | 3.29 ms | 1.15 | 0.02 | False | | | [K_Max_Elements - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).K_Max_Elements(Size%3a%20100).html>) | 10.35 μs | 11.54 μs | 1.12 | 0.01 | False | | | [HeapSort - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 
10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).HeapSort(Size%3a%201000).html>) | 1.27 ms | 1.45 ms | 1.15 | 0.02 | False | | | [HeapSort - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.Tests.Perf_PriorityQueue(String%2c%20String).HeapSort(Size%3a%20100).html>) | 72.07 μs | 87.23 μs | 1.21 | 0.02 | False | | | ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_1.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_2.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_3.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_4.png>) ![graph](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String)_5.png>) [Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_02_2023/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.Tests.Perf_PriorityQueue(String,%20String).html>) ### Repro ```cmd git clone https://github.com/dotnet/performance.git py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;*' ``` <details> ### Payloads 
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-0451ef7a-9ec1-4934-b223-b84302234506f12022744c24d538e/9542f7dd-da0e-4049-ba8a-7df4066dc8b8.zip?sv=2021-08-06&se=2023-03-02T09%3A54%3A54Z&sr=c&sp=rl&sig=HVgiB2qkObQ4pfGygKmPmVMAwb4Vwoh9Qd3Qv9YJBoE%3D>) [Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-b1d7d280-e912-416a-9084-33ffad3034da7114e904b37469d9e/bce8fae4-8eae-42d7-82f4-54b03ad0be0a.zip?sv=2021-08-06&se=2023-03-03T03%3A11%3A16Z&sr=c&sp=rl&sig=O9caZcFqy3Wg2eCmcHJ6JM5%2FRuUEJEn3u3DxI7x9mjM%3D>) ### Histogram #### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.K_Max_Elements(Size: 1000) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 82.62969568062829 > 73.71343345327863. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/27/2023 3:43:27 AM, 1/28/2023 1:22:52 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -7.234137013079913 (T) = (0 -82369.57625049516) / Math.Sqrt((54869160.877810605 / (35)) + (180256.54069327304 / (3))) is less than -2.028094000977961 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (35) + (3) - 2, .025) and -0.12619128326253404 = (73139.95186667897 - 82369.57625049516) / 73139.95186667897 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.Dequeue_And_Enqueue(Size: 1000) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 3.285729464285714 > 2.896056976963141. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/31/2023 7:06:58 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -30.52488199142272 (T) = (0 -3296149.866071428) / Math.Sqrt((6300529812.300653 / (37)) + (217169546.75143635 / (2))) is less than -2.026192463026769 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (37) + (2) - 2, .025) and -0.1829407419136677 = (2786403.19779601 - 3296149.866071428) / 2786403.19779601 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.K_Max_Elements(Size: 100) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 11.542046740467406 > 10.860039658991148. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -90.68697555313808 (T) = (0 -11555.279937217854) / Math.Sqrt((2902.2537207759206 / (35)) + (288.9071036088506 / (3))) is less than -2.028094000977961 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (35) + (3) - 2, .025) and -0.11740090084789681 = (10341.212297618136 - 11555.279937217854) / 10341.212297618136 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.HeapSort(Size: 1000) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 1.4514014583333332 > 1.3290847425110948. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/27/2023 2:26:56 PM, 1/28/2023 1:22:52 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -20.410784635641335 (T) = (0 -1457461.322150072) / Math.Sqrt((2324897270.9125023 / (35)) + (29162175.036984984 / (3))) is less than -2.028094000977961 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (35) + (3) - 2, .025) and -0.13921752068444956 = (1279352.9731481127 - 1457461.322150072) / 1279352.9731481127 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ```#### System.Collections.Tests.Perf_PriorityQueue&lt;String, String&gt;.HeapSort(Size: 100) ```log ``` ### Description of detection logic ```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small. IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline. IsRegressionWindowed: Marked as regression because 87.2326541300878 > 75.6770635080645. IsChangePoint: Marked as a change because one of 11/9/2022 4:06:23 PM, 11/11/2022 5:53:56 PM, 1/31/2023 1:54:46 PM, 2/2/2023 2:17:49 AM falls between 1/24/2023 11:54:43 AM and 2/2/2023 2:17:49 AM. 
IsRegressionStdDev: Marked as regression because -107.64995727089786 (T) = (0 -87085.49901354982) / Math.Sqrt((477391.2996941021 / (36)) + (16703.107219445024 / (3))) is less than -2.026192463026769 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (36) + (3) - 2, .025) and -0.20426772617066333 = (72314.06864191631 - 87085.49901354982) / 72314.06864191631 is less than -0.05. IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small. IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so. ``` ### Docs [Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md) [Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md) </details>
non_process
regressions in system collections tests perf priorityqueue run information architecture os windows baseline compare diff regressions in system collections tests perf priorityqueue lt string string gt benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl μs μs false ms ms false μs μs false ms ms false μs μs false repro cmd git clone py performance scripts benchmarks ci py f filter system collections tests perf priorityqueue lt string string gt payloads histogram system collections tests perf priorityqueue lt string string gt k max elements size log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm pm am pm pm am falls between am and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system collections tests perf priorityqueue lt string string gt dequeue and enqueue size log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression 
because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm pm pm am falls between am and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system collections tests perf priorityqueue lt string string gt k max elements size log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm pm pm am falls between am and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small 
ischangeedgedetector marked not as a regression because edge detector said so system collections tests perf priorityqueue lt string string gt heapsort size log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm pm pm pm pm am falls between am and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system collections tests perf priorityqueue lt string string gt heapsort size log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because 
ischangepoint marked as a change because one of pm pm pm am falls between am and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs
0
12,941
15,306,589,626
IssuesEvent
2021-02-24 19:41:31
retaildevcrews/ngsa
https://api.github.com/repos/retaildevcrews/ngsa
opened
Sprint 2 tracking
Process Sharing
## Sprint 2 ### 👩🏾‍💻👩🏻‍💻 POINT DEV 👩🏻‍💻 👩🏾‍💻 - Kushal & Gerardo ### Sprint tracking PnP spike - [x] `FC:Pnp` - [ ] [Create a private network](https://github.com/retaildevcrews/ngsa/issues/383) - [ ] [Delta: aks-secure-baseline <> NGSA](https://github.com/retaildevcrews/ngsa/issues/178) - [ ] [Design Cluster Monitoring](https://github.com/retaildevcrews/ngsa/issues/318) - [ ] [Deploy NGSA apps on PnP architecture](https://github.com/retaildevcrews/ngsa/issues/473) - [ ] [infrastructure update checklist #433](https://github.com/retaildevcrews/ngsa/issues/433) - [ ] [AKS Secure Baseline Design Review #461](https://github.com/retaildevcrews/ngsa/issues/461) Scaling Spike - [x] `FC:Scaling` - [ ] https://github.com/retaildevcrews/ngsa/issues/411 - [ ] https://github.com/retaildevcrews/ngsa/issues/416 - [x] [Loderunner 503s](https://github.com/retaildevcrews/ngsa/issues/434) GitOps & Arc - [x] `FC:GitOps-Arc` - [ ] [GitOps & Arc](https://github.com/retaildevcrews/ngsa/issues/478) AKDC - [ ] [akdc template #6]() - [ ] [add docs on helm #29]() - [ ] [codespaces - permissioned denied #16]() - [ ] [workspace issues #20]() Sharing & Docs - [ ] [publish akdc-kind #28]() - [ ] [update k8s-quickstart content #465]() - [ ] [Sharing: Kubernetes components (03/02) #470]() - [ ] [Sharing: Grafana and Prometheus (03/09) #469]() - [ ] [Update repo docs and YAML to use ghcr.io #471]() - [ ] [infrastructure update checklist #433]() - [ ] [Update AKS Readme: use charts in ./gitops, since `ngsa-cd` repo is removed #496]() ### Sprint Adjustments - [ ] Update SSH encryption https://github.com/retaildevcrews/ngsa/issues/489 - Justification: Security alert ### Notes - Redback Crew joins Sprint Febuary 16th
1.0
Sprint 2 tracking - ## Sprint 2 ### 👩🏾‍💻👩🏻‍💻 POINT DEV 👩🏻‍💻 👩🏾‍💻 - Kushal & Gerardo ### Sprint tracking PnP spike - [x] `FC:Pnp` - [ ] [Create a private network](https://github.com/retaildevcrews/ngsa/issues/383) - [ ] [Delta: aks-secure-baseline <> NGSA](https://github.com/retaildevcrews/ngsa/issues/178) - [ ] [Design Cluster Monitoring](https://github.com/retaildevcrews/ngsa/issues/318) - [ ] [Deploy NGSA apps on PnP architecture](https://github.com/retaildevcrews/ngsa/issues/473) - [ ] [infrastructure update checklist #433](https://github.com/retaildevcrews/ngsa/issues/433) - [ ] [AKS Secure Baseline Design Review #461](https://github.com/retaildevcrews/ngsa/issues/461) Scaling Spike - [x] `FC:Scaling` - [ ] https://github.com/retaildevcrews/ngsa/issues/411 - [ ] https://github.com/retaildevcrews/ngsa/issues/416 - [x] [Loderunner 503s](https://github.com/retaildevcrews/ngsa/issues/434) GitOps & Arc - [x] `FC:GitOps-Arc` - [ ] [GitOps & Arc](https://github.com/retaildevcrews/ngsa/issues/478) AKDC - [ ] [akdc template #6]() - [ ] [add docs on helm #29]() - [ ] [codespaces - permissioned denied #16]() - [ ] [workspace issues #20]() Sharing & Docs - [ ] [publish akdc-kind #28]() - [ ] [update k8s-quickstart content #465]() - [ ] [Sharing: Kubernetes components (03/02) #470]() - [ ] [Sharing: Grafana and Prometheus (03/09) #469]() - [ ] [Update repo docs and YAML to use ghcr.io #471]() - [ ] [infrastructure update checklist #433]() - [ ] [Update AKS Readme: use charts in ./gitops, since `ngsa-cd` repo is removed #496]() ### Sprint Adjustments - [ ] Update SSH encryption https://github.com/retaildevcrews/ngsa/issues/489 - Justification: Security alert ### Notes - Redback Crew joins Sprint Febuary 16th
process
sprint tracking sprint 👩🏾‍💻👩🏻‍💻 point dev 👩🏻‍💻 👩🏾‍💻 kushal gerardo sprint tracking pnp spike fc pnp scaling spike fc scaling gitops arc fc gitops arc akdc sharing docs sprint adjustments update ssh encryption justification security alert notes redback crew joins sprint febuary
1
237,909
19,684,385,387
IssuesEvent
2022-01-11 20:16:51
dotnet/machinelearning-modelbuilder
https://api.github.com/repos/dotnet/machinelearning-modelbuilder
closed
Azure portal link on Train page doesn't work.
Priority:2 Test Team Stale
**System Information (please complete the following information):** - Windows OS: Windows 11 Enterprise 21H2 - Model Builder Version (available in Manage Extensions dialog): 16.9.1.2155301 (Main) - Microsoft Visual Studio Enterprise 2019: 16.11.5 - Microsoft Visual Studio Enterprise 2022 RC (64-bit): Version 17.0.0 RC3 **Describe the bug** - On which step of the process did you run into an issue: Azure portal link on Train page - Clear description of the problem: After set Image classification>Local (CPU)>Input dataset, back to change the environment to Azure and not change the Dataset, then start to train, click the Azure portal link on Train page, it doesn't work. **To Reproduce** Steps to reproduce the behavior: 1. Select Create a new project from the Visual Studio 2019 start window; 2. Choose the C# Console App (.NET Core) project template with .Net 5.0; 3. Add model builder by right click on the project; 4. Select Image classification>Local (CPU)>Input dataset; 5. back to change the environment to Azure and not change the Dataset; 6. Navigate to Train page, start to train, click the Azure portal link on Train page, find the link doesn't work. **Expected behavior** Azure portal link on Train page works. **Screenshots** If applicable, add screenshots to help explain your problem. ![azure portal link not work](https://user-images.githubusercontent.com/81727020/140465795-d44364ee-f4ae-4019-b347-4b8053ee4969.gif)
1.0
Azure portal link on Train page doesn't work. - **System Information (please complete the following information):** - Windows OS: Windows 11 Enterprise 21H2 - Model Builder Version (available in Manage Extensions dialog): 16.9.1.2155301 (Main) - Microsoft Visual Studio Enterprise 2019: 16.11.5 - Microsoft Visual Studio Enterprise 2022 RC (64-bit): Version 17.0.0 RC3 **Describe the bug** - On which step of the process did you run into an issue: Azure portal link on Train page - Clear description of the problem: After set Image classification>Local (CPU)>Input dataset, back to change the environment to Azure and not change the Dataset, then start to train, click the Azure portal link on Train page, it doesn't work. **To Reproduce** Steps to reproduce the behavior: 1. Select Create a new project from the Visual Studio 2019 start window; 2. Choose the C# Console App (.NET Core) project template with .Net 5.0; 3. Add model builder by right click on the project; 4. Select Image classification>Local (CPU)>Input dataset; 5. back to change the environment to Azure and not change the Dataset; 6. Navigate to Train page, start to train, click the Azure portal link on Train page, find the link doesn't work. **Expected behavior** Azure portal link on Train page works. **Screenshots** If applicable, add screenshots to help explain your problem. ![azure portal link not work](https://user-images.githubusercontent.com/81727020/140465795-d44364ee-f4ae-4019-b347-4b8053ee4969.gif)
non_process
azure portal link on train page doesn t work system information please complete the following information windows os windows enterprise model builder version available in manage extensions dialog main microsoft visual studio enterprise microsoft visual studio enterprise rc bit version describe the bug on which step of the process did you run into an issue azure portal link on train page clear description of the problem after set image classification local cpu input dataset back to change the environment to azure and not change the dataset then start to train click the azure portal link on train page it doesn t work to reproduce steps to reproduce the behavior select create a new project from the visual studio start window choose the c console app net core project template with net add model builder by right click on the project select image classification local cpu input dataset back to change the environment to azure and not change the dataset navigate to train page start to train click the azure portal link on train page find the link doesn t work expected behavior azure portal link on train page works screenshots if applicable add screenshots to help explain your problem
0
8,867
2,891,482,245
IssuesEvent
2015-06-15 05:55:16
mozilla/togetherjs
https://api.github.com/repos/mozilla/togetherjs
closed
person who creates the session can kick out another participant from their Participant window
Design
Admin controls: When they click on a Participant avatar, the Participant window adds a "Kick out" or remove from session button at the bottom of the window.
1.0
person who creates the session can kick out another participant from their Participant window - Admin controls: When they click on a Participant avatar, the Participant window adds a "Kick out" or remove from session button at the bottom of the window.
non_process
person who creates the session can kick out another participant from their participant window admin controls when they click on a participant avatar the participant window adds a kick out or remove from session button at the bottom of the window
0
55,749
11,463,280,624
IssuesEvent
2020-02-07 15:43:15
canonical-web-and-design/tutorials.ubuntu.com
https://api.github.com/repos/canonical-web-and-design/tutorials.ubuntu.com
closed
Tutorial Wanted - How to debug a snap
Google Code In Tutorials Content Type: Tutorial Request
Snaps are Linux packages you can install on a wide range of Linux distributions. They are installed from a centralized store and run in a confined sandbox. The trouble with sandboxed apps is that they can be harder to debug that non-sandboxed (traditionally packaged) apps. This tutorial will take readers through common steps to debug a malfunctioning snap. Resources on this topic can be found on the [snapcraft forum](https://forum.snapcraft.io) and in the [snapcraft docs](https://docs.snapcraft.io).
1.0
Tutorial Wanted - How to debug a snap - Snaps are Linux packages you can install on a wide range of Linux distributions. They are installed from a centralized store and run in a confined sandbox. The trouble with sandboxed apps is that they can be harder to debug that non-sandboxed (traditionally packaged) apps. This tutorial will take readers through common steps to debug a malfunctioning snap. Resources on this topic can be found on the [snapcraft forum](https://forum.snapcraft.io) and in the [snapcraft docs](https://docs.snapcraft.io).
non_process
tutorial wanted how to debug a snap snaps are linux packages you can install on a wide range of linux distributions they are installed from a centralized store and run in a confined sandbox the trouble with sandboxed apps is that they can be harder to debug that non sandboxed traditionally packaged apps this tutorial will take readers through common steps to debug a malfunctioning snap resources on this topic can be found on the and in the
0
9,406
12,405,387,650
IssuesEvent
2020-05-21 17:11:05
googleapis/google-api-python-client
https://api.github.com/repos/googleapis/google-api-python-client
closed
Snyk is unable to find django/requirements.txt
type: process
The Snyk security check fails because the django requirements.txt has been removed. See #663 for an example.
1.0
Snyk is unable to find django/requirements.txt - The Snyk security check fails because the django requirements.txt has been removed. See #663 for an example.
process
snyk is unable to find django requirements txt the snyk security check fails because the django requirements txt has been removed see for an example
1
34,365
4,918,324,273
IssuesEvent
2016-11-24 08:27:25
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/e2e: (unknown) failed under stress
Robot test-failure
SHA: https://github.com/cockroachdb/cockroach/commits/b54490b2cf70c155ec2b7af5133276ffe24dc02c Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV=true TAGS=stress GOFLAGS= ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=58192&tab=buildLog ``` go list -tags 'stress' -f 'go test -v -tags '\''stress'\'' -ldflags '\'''\'' -i -c {{.ImportPath}} -o {{.Dir}}/stress.test && (cd {{.Dir}} && if [ -f stress.test ]; then stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/e2e | /bin/bash vendor/github.com/coreos/etcd/clientv3/client.go:27:2: cannot find package "github.com/grpc-ecosystem/go-grpc-prometheus" in any of: /go/src/github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/go-grpc-prometheus (vendor tree) /usr/local/go/src/github.com/grpc-ecosystem/go-grpc-prometheus (from $GOROOT) /go/src/github.com/grpc-ecosystem/go-grpc-prometheus (from $GOPATH) vendor/github.com/coreos/etcd/pkg/expect/expect.go:26:2: cannot find package "github.com/kr/pty" in any of: /go/src/github.com/cockroachdb/cockroach/vendor/github.com/kr/pty (vendor tree) /usr/local/go/src/github.com/kr/pty (from $GOROOT) /go/src/github.com/kr/pty (from $GOPATH) make: *** [stress] Error 1 Makefile:138: recipe for target 'stress' failed ```
1.0
github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/e2e: (unknown) failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/b54490b2cf70c155ec2b7af5133276ffe24dc02c Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV=true TAGS=stress GOFLAGS= ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=58192&tab=buildLog ``` go list -tags 'stress' -f 'go test -v -tags '\''stress'\'' -ldflags '\'''\'' -i -c {{.ImportPath}} -o {{.Dir}}/stress.test && (cd {{.Dir}} && if [ -f stress.test ]; then stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/e2e | /bin/bash vendor/github.com/coreos/etcd/clientv3/client.go:27:2: cannot find package "github.com/grpc-ecosystem/go-grpc-prometheus" in any of: /go/src/github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/go-grpc-prometheus (vendor tree) /usr/local/go/src/github.com/grpc-ecosystem/go-grpc-prometheus (from $GOROOT) /go/src/github.com/grpc-ecosystem/go-grpc-prometheus (from $GOPATH) vendor/github.com/coreos/etcd/pkg/expect/expect.go:26:2: cannot find package "github.com/kr/pty" in any of: /go/src/github.com/cockroachdb/cockroach/vendor/github.com/kr/pty (vendor tree) /usr/local/go/src/github.com/kr/pty (from $GOROOT) /go/src/github.com/kr/pty (from $GOPATH) make: *** [stress] Error 1 Makefile:138: recipe for target 'stress' failed ```
non_process
github com cockroachdb cockroach vendor github com coreos etcd unknown failed under stress sha parameters cockroach proposer evaluated kv true tags stress goflags stress build found a failed test go list tags stress f go test v tags stress ldflags i c importpath o dir stress test cd dir if then stress maxtime maxfails stderr stress test test run test timeout test v fi github com cockroachdb cockroach vendor github com coreos etcd bin bash vendor github com coreos etcd client go cannot find package github com grpc ecosystem go grpc prometheus in any of go src github com cockroachdb cockroach vendor github com grpc ecosystem go grpc prometheus vendor tree usr local go src github com grpc ecosystem go grpc prometheus from goroot go src github com grpc ecosystem go grpc prometheus from gopath vendor github com coreos etcd pkg expect expect go cannot find package github com kr pty in any of go src github com cockroachdb cockroach vendor github com kr pty vendor tree usr local go src github com kr pty from goroot go src github com kr pty from gopath make error makefile recipe for target stress failed
0
93,145
26,872,405,359
IssuesEvent
2023-02-04 16:33:02
mRemoteNG/mRemoteNG
https://api.github.com/repos/mRemoteNG/mRemoteNG
closed
Build installer in Debug mode
Ready Improvement required 1.77.3 Build
<!--- Provide a general summary of the issue in the Title above --> ## Expected Behavior As a developer, I want to build the installer in debug mode ## Current Behavior Installer fails to build due to missing CustomActions dependency. If that is resolved, the post buils powershell script fails due to hardcoding of the BuildConfiguration in paths. ## Working Solution 1. Add CustomActions project as project reference in Installer project 2. By default building in Debug mode does not build the installer so a new build configuration was introduced: `Debug Installer` similar to existing `Release Installer` that builds the Installer and required projects in `Debug` mode. 3. Fix hardcodings in pwsh scripts by passing the BuildConfiguration, see details in my comment [here](https://github.com/mRemoteNG/mRemoteNG/issues/2327#issuecomment-1411616191)
1.0
Build installer in Debug mode - <!--- Provide a general summary of the issue in the Title above --> ## Expected Behavior As a developer, I want to build the installer in debug mode ## Current Behavior Installer fails to build due to missing CustomActions dependency. If that is resolved, the post buils powershell script fails due to hardcoding of the BuildConfiguration in paths. ## Working Solution 1. Add CustomActions project as project reference in Installer project 2. By default building in Debug mode does not build the installer so a new build configuration was introduced: `Debug Installer` similar to existing `Release Installer` that builds the Installer and required projects in `Debug` mode. 3. Fix hardcodings in pwsh scripts by passing the BuildConfiguration, see details in my comment [here](https://github.com/mRemoteNG/mRemoteNG/issues/2327#issuecomment-1411616191)
non_process
build installer in debug mode expected behavior as a developer i want to build the installer in debug mode current behavior installer fails to build due to missing customactions dependency if that is resolved the post buils powershell script fails due to hardcoding of the buildconfiguration in paths working solution add customactions project as project reference in installer project by default building in debug mode does not build the installer so a new build configuration was introduced debug installer similar to existing release installer that builds the installer and required projects in debug mode fix hardcodings in pwsh scripts by passing the buildconfiguration see details in my comment
0
16,074
20,248,709,727
IssuesEvent
2022-02-14 15:56:21
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
cannot find Scrt1.o: No such file or directory
type: support / not a bug (process) team-Rules-CPP untriaged
I got this error while cross-compile for aarch64. while compile i got error: cannot find Scrt1.o: No such file or directory. but this object is exist. below is my bazel config. ## BUILD this is `BUILD` file ``` load(":cc_toolchain_config.bzl", "cc_toolchain_config") package(default_visibility = ["//visibility:public"]) filegroup(name = "empty") filegroup( name = "all", srcs = glob([ "**", ]), ) cc_toolchain_config(name = "s32g_toolchain_config") cc_toolchain( name = "s32g_toolchain", all_files = ":all", compiler_files = ":all", dwp_files = ":all", linker_files = ":all", objcopy_files = ":all", strip_files = ":all", toolchain_config = ":s32g_toolchain_config", toolchain_identifier = "s32g-toolchain", ) toolchain_type(name = "toolchain_type") toolchain( name = "aarch64_linux_toolchain", exec_compatible_with = [ "@platforms//os:linux", "@platforms//cpu:x86_64", ], target_compatible_with = [ "@platforms//os:linux", "@platforms//cpu:aarch64", ], toolchain = ":s32g_toolchain", toolchain_type = "@bazel_tools//tools/cpp:toolchain_type", ) ``` ## cc_toolchain_config.bzl this is `cc_toolchain_config.bzl` ``` load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES") load( "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl", "feature", "flag_group", "flag_set", "tool_path", ) all_link_actions = [ ACTION_NAMES.cpp_link_executable, ACTION_NAMES.cpp_link_dynamic_library, ACTION_NAMES.cpp_link_nodeps_dynamic_library, ] all_compile_actions = [ ACTION_NAMES.assemble, ACTION_NAMES.c_compile, ACTION_NAMES.clif_match, ACTION_NAMES.cpp_compile, ACTION_NAMES.cpp_header_parsing, ACTION_NAMES.cpp_module_codegen, ACTION_NAMES.cpp_module_compile, ACTION_NAMES.linkstamp_compile, ACTION_NAMES.lto_backend, ACTION_NAMES.preprocess_assemble, ] def _impl(ctx): tool_paths = [ tool_path( name = "ar", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-ar", ), tool_path( name = "cpp", path = 
"/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-cpp", ), tool_path( name = "gcc", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-gcc", ), tool_path( name = "gcov", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-gcov", ), tool_path( name = "ld", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-ld", ), tool_path( name = "nm", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-nm", ), tool_path( name = "objdump", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-objdump", ), tool_path( name = "strip", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-strip", ), ] default_compiler_flags = feature( name = "default_compiler_flags", enabled = True, flag_sets = [ flag_set( actions = all_compile_actions, flag_groups = [ flag_group( flags = [ "--sysroot=/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux", "-no-canonical-prefixes", "-fno-canonical-system-headers", "-Wno-builtin-macro-redefined", ], ), ], ), ], ) default_linker_flags = feature( name = "default_linker_flags", enabled = True, flag_sets = [ flag_set( actions = all_link_actions, flag_groups = ([ flag_group( flags = [ "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux", "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/lib", "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/usr/lib", "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/usr/lib/aarch64-fsl-linux/10.2.0", ], ), ]), ), ], ) features = [ default_compiler_flags, default_linker_flags, ] return cc_common.create_cc_toolchain_config_info( ctx = ctx, cxx_builtin_include_directories = [ "/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/usr/include", 
"/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/lib/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0", "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/lib/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/include", ], features = features, toolchain_identifier = "s32g-toolchain", host_system_name = "local", target_system_name = "unknown", target_cpu = "unknown", target_libc = "unknown", compiler = "unknown", abi_version = "unknown", abi_libc_version = "unknown", tool_paths = tool_paths, ) cc_toolchain_config = rule( implementation = _impl, attrs = {}, provides = [CcToolchainConfigInfo], ) ``` ## error info this is the error info. ``` Use --sandbox_debug to see verbose messages from the sandbox /opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/libexec/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/real-ld: cannot find Scrt1.o: No such file or directory /opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/libexec/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/real-ld: cannot find crti.o: No such file or directory /opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/libexec/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/real-ld: cannot find crtbeginS.o: No such file or directory ``` according to this link `https://github.com/bazelbuild/bazel/issues/3844`. maybe I should declare those objects in cc_toolchain, but I do not understand how to declare them.
1.0
cannot find Scrt1.o: No such file or directory - I got this error while cross-compile for aarch64. while compile i got error: cannot find Scrt1.o: No such file or directory. but this object is exist. below is my bazel config. ## BUILD this is `BUILD` file ``` load(":cc_toolchain_config.bzl", "cc_toolchain_config") package(default_visibility = ["//visibility:public"]) filegroup(name = "empty") filegroup( name = "all", srcs = glob([ "**", ]), ) cc_toolchain_config(name = "s32g_toolchain_config") cc_toolchain( name = "s32g_toolchain", all_files = ":all", compiler_files = ":all", dwp_files = ":all", linker_files = ":all", objcopy_files = ":all", strip_files = ":all", toolchain_config = ":s32g_toolchain_config", toolchain_identifier = "s32g-toolchain", ) toolchain_type(name = "toolchain_type") toolchain( name = "aarch64_linux_toolchain", exec_compatible_with = [ "@platforms//os:linux", "@platforms//cpu:x86_64", ], target_compatible_with = [ "@platforms//os:linux", "@platforms//cpu:aarch64", ], toolchain = ":s32g_toolchain", toolchain_type = "@bazel_tools//tools/cpp:toolchain_type", ) ``` ## cc_toolchain_config.bzl this is `cc_toolchain_config.bzl` ``` load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES") load( "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl", "feature", "flag_group", "flag_set", "tool_path", ) all_link_actions = [ ACTION_NAMES.cpp_link_executable, ACTION_NAMES.cpp_link_dynamic_library, ACTION_NAMES.cpp_link_nodeps_dynamic_library, ] all_compile_actions = [ ACTION_NAMES.assemble, ACTION_NAMES.c_compile, ACTION_NAMES.clif_match, ACTION_NAMES.cpp_compile, ACTION_NAMES.cpp_header_parsing, ACTION_NAMES.cpp_module_codegen, ACTION_NAMES.cpp_module_compile, ACTION_NAMES.linkstamp_compile, ACTION_NAMES.lto_backend, ACTION_NAMES.preprocess_assemble, ] def _impl(ctx): tool_paths = [ tool_path( name = "ar", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-ar", ), tool_path( name = 
"cpp", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-cpp", ), tool_path( name = "gcc", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-gcc", ), tool_path( name = "gcov", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-gcov", ), tool_path( name = "ld", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-ld", ), tool_path( name = "nm", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-nm", ), tool_path( name = "objdump", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-objdump", ), tool_path( name = "strip", path = "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/bin/aarch64-fsl-linux/aarch64-fsl-linux-strip", ), ] default_compiler_flags = feature( name = "default_compiler_flags", enabled = True, flag_sets = [ flag_set( actions = all_compile_actions, flag_groups = [ flag_group( flags = [ "--sysroot=/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux", "-no-canonical-prefixes", "-fno-canonical-system-headers", "-Wno-builtin-macro-redefined", ], ), ], ), ], ) default_linker_flags = feature( name = "default_linker_flags", enabled = True, flag_sets = [ flag_set( actions = all_link_actions, flag_groups = ([ flag_group( flags = [ "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux", "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/lib", "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/usr/lib", "-L/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/usr/lib/aarch64-fsl-linux/10.2.0", ], ), ]), ), ], ) features = [ default_compiler_flags, default_linker_flags, ] return cc_common.create_cc_toolchain_config_info( ctx = ctx, cxx_builtin_include_directories = [ "/opt/xcu-fsl-auto/1.0/sysroots/aarch64-fsl-linux/usr/include", 
"/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/lib/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0", "/opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/lib/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/include", ], features = features, toolchain_identifier = "s32g-toolchain", host_system_name = "local", target_system_name = "unknown", target_cpu = "unknown", target_libc = "unknown", compiler = "unknown", abi_version = "unknown", abi_libc_version = "unknown", tool_paths = tool_paths, ) cc_toolchain_config = rule( implementation = _impl, attrs = {}, provides = [CcToolchainConfigInfo], ) ``` ## error info this is the error info. ``` Use --sandbox_debug to see verbose messages from the sandbox /opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/libexec/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/real-ld: cannot find Scrt1.o: No such file or directory /opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/libexec/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/real-ld: cannot find crti.o: No such file or directory /opt/xcu-fsl-auto/1.0/sysroots/x86_64-fslbsp-linux/usr/libexec/aarch64-fsl-linux/gcc/aarch64-fsl-linux/10.2.0/real-ld: cannot find crtbeginS.o: No such file or directory ``` according to this link `https://github.com/bazelbuild/bazel/issues/3844`. maybe I should declare those objects in cc_toolchain, but I do not understand how to declare them.
process
cannot find o no such file or directory i got this error while cross compile for while compile i got error cannot find o no such file or directory but this object is exist below is my bazel config build this is build file load cc toolchain config bzl cc toolchain config package default visibility filegroup name empty filegroup name all srcs glob cc toolchain config name toolchain config cc toolchain name toolchain all files all compiler files all dwp files all linker files all objcopy files all strip files all toolchain config toolchain config toolchain identifier toolchain toolchain type name toolchain type toolchain name linux toolchain exec compatible with platforms os linux platforms cpu target compatible with platforms os linux platforms cpu toolchain toolchain toolchain type bazel tools tools cpp toolchain type cc toolchain config bzl this is cc toolchain config bzl load bazel tools tools build defs cc action names bzl action names load bazel tools tools cpp cc toolchain config lib bzl feature flag group flag set tool path all link actions action names cpp link executable action names cpp link dynamic library action names cpp link nodeps dynamic library all compile actions action names assemble action names c compile action names clif match action names cpp compile action names cpp header parsing action names cpp module codegen action names cpp module compile action names linkstamp compile action names lto backend action names preprocess assemble def impl ctx tool paths tool path name ar path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux ar tool path name cpp path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux cpp tool path name gcc path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux gcc tool path name gcov path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux gcov tool path name ld path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux ld tool path name nm path 
opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux nm tool path name objdump path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux objdump tool path name strip path opt xcu fsl auto sysroots fslbsp linux usr bin fsl linux fsl linux strip default compiler flags feature name default compiler flags enabled true flag sets flag set actions all compile actions flag groups flag group flags sysroot opt xcu fsl auto sysroots fsl linux no canonical prefixes fno canonical system headers wno builtin macro redefined default linker flags feature name default linker flags enabled true flag sets flag set actions all link actions flag groups flag group flags l opt xcu fsl auto sysroots fsl linux l opt xcu fsl auto sysroots fsl linux lib l opt xcu fsl auto sysroots fsl linux usr lib l opt xcu fsl auto sysroots fsl linux usr lib fsl linux features default compiler flags default linker flags return cc common create cc toolchain config info ctx ctx cxx builtin include directories opt xcu fsl auto sysroots fsl linux usr include opt xcu fsl auto sysroots fslbsp linux usr lib fsl linux gcc fsl linux opt xcu fsl auto sysroots fslbsp linux usr lib fsl linux gcc fsl linux include features features toolchain identifier toolchain host system name local target system name unknown target cpu unknown target libc unknown compiler unknown abi version unknown abi libc version unknown tool paths tool paths cc toolchain config rule implementation impl attrs provides error info this is the error info use sandbox debug to see verbose messages from the sandbox opt xcu fsl auto sysroots fslbsp linux usr libexec fsl linux gcc fsl linux real ld cannot find o no such file or directory opt xcu fsl auto sysroots fslbsp linux usr libexec fsl linux gcc fsl linux real ld cannot find crti o no such file or directory opt xcu fsl auto sysroots fslbsp linux usr libexec fsl linux gcc fsl linux real ld cannot find crtbegins o no such file or directory according to this link maybe i 
should declare those objects in cc toolchain but i do not understand how to declare them
1
20,001
10,576,075,365
IssuesEvent
2019-10-07 17:03:18
xtermjs/xterm.js
https://api.github.com/repos/xtermjs/xterm.js
closed
Minimize drawing calls by finding words that have changed
area/performance area/renderer type/enhancement
Earlier individual cells were clipped such that text from one cell could never enter another cell. This was great for performance but not so great for font rendering. Now characters can overlap which leads to much more legible text but it also means that the line gets redrawn every time. The idea behind this issue is to bring back minimal drawing inside a line by diffing *words* instead of *characters* as it was doing before. A word is defined as a set of *visible* characters that is wrapped by "empty cells" both in the before state and the after state. An empty cell is defined as a character that has no foreground, e.g. space char, invisible style, etc. An empty cell can have a background of any color. For example: ``` Before: foo bar baz After: test 12 345 ^ ^ | The space is aligned, the space used by baz->345 is a 3 letter word | This is a 7 char word as the space is in a different spot before after The words after are: - index 0 with a length of 7 - index 8 with a length of 3 ```
True
Minimize drawing calls by finding words that have changed - Earlier individual cells were clipped such that text from one cell could never enter another cell. This was great for performance but not so great for font rendering. Now characters can overlap which leads to much more legible text but it also means that the line gets redrawn every time. The idea behind this issue is to bring back minimal drawing inside a line by diffing *words* instead of *characters* as it was doing before. A word is defined as a set of *visible* characters that is wrapped by "empty cells" both in the before state and the after state. An empty cell is defined as a character that has no foreground, e.g. space char, invisible style, etc. An empty cell can have a background of any color. For example: ``` Before: foo bar baz After: test 12 345 ^ ^ | The space is aligned, the space used by baz->345 is a 3 letter word | This is a 7 char word as the space is in a different spot before after The words after are: - index 0 with a length of 7 - index 8 with a length of 3 ```
non_process
minimize drawing calls by finding words that have changed earlier individual cells were clipped such that text from one cell could never enter another cell this was great for performance but not so great for font rendering now characters can overlap which leads to much more legible text but it also means that the line gets redrawn every time the idea behind this issue is to bring back minimal drawing inside a line by diffing words instead of characters as it was doing before a word is defined as a set of visible characters that is wrapped by empty cells both in the before state and the after state an empty cell is defined as a character that has no foreground eg space char invisible style etc an empty cell can have a background of any color for example before foo bar baz after test the space is aligned the space used by baz is a letter word this is a char word as the space is in a different spot before after the words after are index with a length of index with a length of
0
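The word-diffing scheme described in the xterm.js record above can be sketched as follows. This is a hypothetical, simplified sketch, not xterm.js's actual implementation: the real renderer operates on cell objects with styles and backgrounds, while here a line is a plain string and an "empty cell" is just a space (or a position past the end of the line). A word boundary is a column that is empty in *both* the before and after states, matching the definition in the issue.

```javascript
// Returns [{index, length}] for each "word" span that must be redrawn,
// where a word is a run of columns bounded by cells that are empty in
// BOTH the before and after states (or by the end of the line).
function changedWords(before, after) {
  const len = Math.max(before.length, after.length);
  const isEmpty = (s, i) => i >= s.length || s[i] === ' ';
  const words = [];
  let start = -1;
  for (let i = 0; i <= len; i++) {
    // A boundary is a column empty in both states, or the line end.
    const boundary = i === len || (isEmpty(before, i) && isEmpty(after, i));
    if (!boundary && start === -1) start = i;
    if (boundary && start !== -1) {
      // Only emit the word if something inside it actually changed.
      let changed = false;
      for (let j = start; j < i; j++) {
        if ((before[j] || ' ') !== (after[j] || ' ')) { changed = true; break; }
      }
      if (changed) words.push({ index: start, length: i - start });
      start = -1;
    }
  }
  return words;
}

// The example from the issue: "foo bar baz" -> "test 12 345".
// The only space aligned in both states is at column 7, so the spans are
// a 7-char word at index 0 and a 3-char word at index 8.
console.log(changedWords('foo bar baz', 'test 12 345'));
```

On the example from the issue this reproduces the two spans it lists (index 0 / length 7 and index 8 / length 3), and unchanged words are skipped entirely, which is the drawing-call saving the issue is after.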
17,096
22,611,526,733
IssuesEvent
2022-06-29 17:39:02
spinalcordtoolbox/spinalcordtoolbox
https://api.github.com/repos/spinalcordtoolbox/spinalcordtoolbox
closed
CSA normalization models are in the wrong `data` subfolder
sct_process_segmentation good first issue refactoring
Normally, models are installed to the `$SCT_DIR/data` folder (i.e. a `data/` folder in the root of the SCT project folder). However, the models were instead added inside [`$SCT_DIR/spinalcordtoolbox/data`](https://github.com/spinalcordtoolbox/spinalcordtoolbox/tree/master/spinalcordtoolbox/data/) (i.e. an extra folder too deep). <details> This must have slipped by during review of #3515... honestly, an easy thing to miss, given how the paths are displayed in GitHub's file diff: ![Screenshot from 2021-11-01 11-11-20](https://user-images.githubusercontent.com/16181459/139694501-751492b1-e45d-49ee-b66d-da6390408477.png) ^ this is actually `spinalcordtoolbox/spinalcordtoolbox/data`. </details> In summary, all that needs to be done is move the folder from `${SCT_DIR}/spinalcordtoolbox/data` to `${SCT_DIR}/data`, then update the following link in the codebase: https://github.com/spinalcordtoolbox/spinalcordtoolbox/blob/138918312d82ae85791a0b26472219b2085a3656/spinalcordtoolbox/scripts/sct_process_segmentation.py#L447-L448
1.0
CSA normalization models are in the wrong `data` subfolder - Normally, models are installed to the `$SCT_DIR/data` folder (i.e. a `data/` folder in the root of the SCT project folder). However, the models were instead added inside [`$SCT_DIR/spinalcordtoolbox/data`](https://github.com/spinalcordtoolbox/spinalcordtoolbox/tree/master/spinalcordtoolbox/data/) (i.e. an extra folder too deep). <details> This must have slipped by during review of #3515... honestly, an easy thing to miss, given how the paths are displayed in GitHub's file diff: ![Screenshot from 2021-11-01 11-11-20](https://user-images.githubusercontent.com/16181459/139694501-751492b1-e45d-49ee-b66d-da6390408477.png) ^ this is actually `spinalcordtoolbox/spinalcordtoolbox/data`. </details> In summary, all that needs to be done is move the folder from `${SCT_DIR}/spinalcordtoolbox/data` to `${SCT_DIR}/data`, then update the following link in the codebase: https://github.com/spinalcordtoolbox/spinalcordtoolbox/blob/138918312d82ae85791a0b26472219b2085a3656/spinalcordtoolbox/scripts/sct_process_segmentation.py#L447-L448
process
csa normalization models are in the wrong data subfolder normally models are installed to the sct dir data folder i e a data folder in the root of the sct project folder however the models were instead added inside i e an extra folder too deep this must have slipped by during review of honestly an easy thing to miss given how the paths are displayed in github s file diff this is actually spinalcordtoolbox spinalcordtoolbox data in summary all that needs to done is move the folder from sct dir spinalcordtoolbox data to sct dir data then update the following link in the codebase
1
4,909
7,785,238,278
IssuesEvent
2018-06-06 15:20:20
ropensci/onboarding-meta
https://api.github.com/repos/ropensci/onboarding-meta
closed
Make checking GitHub linguist results for onboarded packages an editor task
process standards
If a package isn't classified as an R repo, then tell the author to make a PR to https://github.com/github/linguist
1.0
Make checking GitHub linguist results for onboarded packages an editor task - If a package isn't classified as an R repo, then tell the author to make a PR to https://github.com/github/linguist
process
make checking github linguist results for onboarded packages an editor task if a package isn t classified as r repo then tell the author to make a pr to
1
14,417
17,466,521,592
IssuesEvent
2021-08-06 17:41:59
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
opened
Manage Bin Sets/Cases
enhancement Process Cooling
User should be able to switch between multiple cases within the weather calculator and the active calculator. Each case may have different (and multiple) binning parameters, etc. This could look like case tabs above the add case button: ![image.png](https://images.zenhubusercontent.com/5e4547eef6a311c23c81ce81/365ff33c-9c06-4f43-9d09-c21bfe5a38a5)
1.0
Manage Bin Sets/Cases - User should be able to switch between multiple cases within the weather calculator and the active calculator. Each case may have different (and multiple) binning parameters, etc. This could look like case tabs above the add case button: ![image.png](https://images.zenhubusercontent.com/5e4547eef6a311c23c81ce81/365ff33c-9c06-4f43-9d09-c21bfe5a38a5)
process
manage bin sets cases user should be able to switch between multiple cases within the weather calculator and the active calculator each case may have different and multiple binning parameters etc this could look like case tabs above the add case button
1
2,121
4,958,421,107
IssuesEvent
2016-12-02 09:43:35
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Same command works in exec but not in execSync
child_process windows
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v6.7.0 * **Platform**: Windows 64-bit * **Subsystem**: <!-- Enter your issue details below this comment. --> This script works perfectly ```js const execSync = require('child_process').execSync; exec("_data\\bin\\trec_eval_Windows ./_data/QueryFile/qrels.txt ./_results/test_query.txt", (err,std) => { console.log( std ) }) ``` but this one gives an error ```js const execSync = require('child_process').execSync; var child = execSync("_data\\bin\\trec_eval_Windows ./_data/QueryFile/qrels.txt ./_results/test_query.txt") console.log(child.toString()) ``` ``` child_process.js:526 throw err; ^ Error: Command failed: _data\bin\trec_eval_Windows ./_data/QueryFile/qrels.txt ./_results/test_query.txt at checkExecSyncError (child_process.js:483:13) at execSync (child_process.js:523:13) at Object.<anonymous> (....\testexe.js:8:13) at Module._compile (module.js:556:32) at Object.Module._extensions..js (module.js:565:10) at Module.load (module.js:473:32) at tryModuleLoad (module.js:432:12) at Function.Module._load (module.js:424:3) at Module.runMain (module.js:590:10) at run (bootstrap_node.js:394:7) ``` And the input command works in the command line Thank you
1.0
Same command works in exec but not in execSync - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v6.7.0 * **Platform**: Windows 64-bit * **Subsystem**: <!-- Enter your issue details below this comment. --> This script works perfectly ```js const execSync = require('child_process').execSync; exec("_data\\bin\\trec_eval_Windows ./_data/QueryFile/qrels.txt ./_results/test_query.txt", (err,std) => { console.log( std ) }) ``` but this one gives an error ```js const execSync = require('child_process').execSync; var child = execSync("_data\\bin\\trec_eval_Windows ./_data/QueryFile/qrels.txt ./_results/test_query.txt") console.log(child.toString()) ``` ``` child_process.js:526 throw err; ^ Error: Command failed: _data\bin\trec_eval_Windows ./_data/QueryFile/qrels.txt ./_results/test_query.txt at checkExecSyncError (child_process.js:483:13) at execSync (child_process.js:523:13) at Object.<anonymous> (....\testexe.js:8:13) at Module._compile (module.js:556:32) at Object.Module._extensions..js (module.js:565:10) at Module.load (module.js:473:32) at tryModuleLoad (module.js:432:12) at Function.Module._load (module.js:424:3) at Module.runMain (module.js:590:10) at run (bootstrap_node.js:394:7) ``` And the input command works in the command line Thank you
process
same command works in exec but not in execsync thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform windows bit subsystem this script work perfectly js const execsync require child process execsync exec data bin trec eval windows data queryfile qrels txt results test query txt err std console log std but this one would gives an error js const execsync require child process execsync var child execsync data bin trec eval windows data queryfile qrels txt results test query txt console log child tostring child process js throw err error command failed data bin trec eval windows data queryfile qrels txt results test query txt at checkexecsyncerror child process js at execsync child process js at object testexe js at module compile module js at object module extensions js module js at module load module js at trymoduleload module js at function module load module js at module runmain module js at run bootstrap node js and the input command works in the command line thank you
1
160,863
20,120,313,490
IssuesEvent
2022-02-08 01:06:45
AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
https://api.github.com/repos/AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
opened
CVE-2022-23573 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
security vulnerability
## CVE-2022-23573 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /FinalProject/requirements.txt</p> <p>Path to vulnerable library: /teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Tensorflow is an Open Source Machine Learning Framework. The implementation of `AssignOp` can result in copying uninitialized data to a new tensor. This later results in undefined behavior. The implementation has a check that the left hand side of the assignment is initialized (to minimize number of allocations), but does not check that the right hand side is also initialized. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. 
<p>Publish Date: 2022-02-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23573>CVE-2022-23573</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-q85f-69q7-55h2">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-q85f-69q7-55h2</a></p> <p>Release Date: 2022-02-04</p> <p>Fix Resolution: tensorflow - 2.5.3,2.6.3,2.7.1,2.8.0;tensorflow-cpu - 2.5.3,2.6.3,2.7.1,2.8.0;tensorflow-gpu - 2.5.3,2.6.3,2.7.1,2.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-23573 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2022-23573 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /FinalProject/requirements.txt</p> <p>Path to vulnerable library: /teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Tensorflow is an Open Source Machine Learning Framework. The implementation of `AssignOp` can result in copying uninitialized data to a new tensor. This later results in undefined behavior. The implementation has a check that the left hand side of the assignment is initialized (to minimize number of allocations), but does not check that the right hand side is also initialized. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. 
<p>Publish Date: 2022-02-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23573>CVE-2022-23573</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-q85f-69q7-55h2">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-q85f-69q7-55h2</a></p> <p>Release Date: 2022-02-04</p> <p>Fix Resolution: tensorflow - 2.5.3,2.6.3,2.7.1,2.8.0;tensorflow-cpu - 2.5.3,2.6.3,2.7.1,2.8.0;tensorflow-gpu - 2.5.3,2.6.3,2.7.1,2.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file finalproject requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an open source machine learning framework the implementation of assignop can result in copying uninitialized data to a new tensor this later results in undefined behavior the implementation has a check that the left hand side of the assignment is initialized to minimize number of allocations but does not check that the right hand side is also initialized the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
18,932
13,169,490,543
IssuesEvent
2020-08-11 13:49:47
wireapp/wire-webapp
https://api.github.com/repos/wireapp/wire-webapp
closed
Missing PASSWORD_RESET source code. Source code for account and teams.
comp: infrastructure prio: low type: feature / request ✨
Dear Wire developers, Can you please share source code for https://account.wire.com and https://teams.wire.com which both are set in https://github.com/wireapp/wire-webapp/blob/dev/app/script/config.js#L98 and https://github.com/wireapp/wire-webapp/blob/dev/app/script/config.js#L103 In the current source code, the PASSWORD_RESET: '/forgot/' feature seems to be missing. Thank you for your efforts and for sharing your project as open source.
1.0
Missing PASSWORD_RESET source code. Source code for account and teams. - Dear Wire developers, Can you please share source code for https://account.wire.com and https://teams.wire.com which both are set in https://github.com/wireapp/wire-webapp/blob/dev/app/script/config.js#L98 and https://github.com/wireapp/wire-webapp/blob/dev/app/script/config.js#L103 In the current source code, the PASSWORD_RESET: '/forgot/' feature seems to be missing. Thank you for your efforts and for sharing your project as open source.
non_process
missing password reset source code source code for account and teams dear wire developers can you please share source code for and which both are set in and in current source codes it s seems to be missing password reset forgot feature thank you for your efforts and sharing your project to open source
0
273,180
20,775,576,876
IssuesEvent
2022-03-16 10:09:46
image-et-son/p600fw
https://api.github.com/repos/image-et-son/p600fw
closed
new manual page 55 either either.
documentation
To reach the Teensy board unscrew the top 2 screws on the wooden side panels on either either. either either? ****** To reach the Teensy board, loosen the two screws at the top of the wooden side panels on both sides.
1.0
new manual page 55 either either. - To reach the Teensy board unscrew the top 2 screws on the wooden side panels on either either. either either? ****** To reach the Teensy board, loosen the two screws at the top of the wooden side panels on both sides.
non_process
new manual page either either to reach the teensy board unscrew the top screws on the wooden side panels on either either either either? to reach the teensy board loosen the two screws at the top of the wooden side panels on both sides
0
17,963
23,973,941,737
IssuesEvent
2022-09-13 09:58:56
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Update the Note in Runbook types section
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2
Note in [Runbook types](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types) section needs update. It also needs to provide a reference to [Start a child runbook by using a cmdlet](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#start-a-child-runbook-by-using-a-cmdlet) section and explain issues associated with starting a child runbook using the cmdlet. Reference: https://docs.microsoft.com/en-us/answers/questions/918563/index.html --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 23c183d0-5012-e2e1-5562-69135b3f6509 * Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814 * Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types) * Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-child-runbooks.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SnehaSudhirG * Microsoft Alias: **sudhirsneha**
1.0
Update the Note in Runbook types section - Note in [Runbook types](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types) section needs update. It also needs to provide a reference to [Start a child runbook by using a cmdlet](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#start-a-child-runbook-by-using-a-cmdlet) section and explain issues associated with starting a child runbook using the cmdlet. Reference: https://docs.microsoft.com/en-us/answers/questions/918563/index.html --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 23c183d0-5012-e2e1-5562-69135b3f6509 * Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814 * Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types) * Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-child-runbooks.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SnehaSudhirG * Microsoft Alias: **sudhirsneha**
process
update the note in runbook types section note in section needs update it also needs to provide a reference to section and explain issues associated with starting a child runbook using the cmdlet reference document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
1
261,358
27,809,744,497
IssuesEvent
2023-03-18 01:36:37
madhans23/linux-4.1.15
https://api.github.com/repos/madhans23/linux-4.1.15
closed
CVE-2016-9919 (High) detected in linux-stable-rtv4.1.33 - autoclosed
Mend: dependency security vulnerability
## CVE-2016-9919 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv6/icmp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv6/icmp.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The icmp6_send function in net/ipv6/icmp.c in the Linux kernel through 4.8.12 omits a certain check of the dst data structure, which allows remote attackers to cause a denial of service (panic) via a fragmented IPv6 packet. 
<p>Publish Date: 2016-12-08 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9919>CVE-2016-9919</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9919">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9919</a></p> <p>Release Date: 2016-12-08</p> <p>Fix Resolution: v4.9-rc8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-9919 (High) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2016-9919 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv6/icmp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv6/icmp.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The icmp6_send function in net/ipv6/icmp.c in the Linux kernel through 4.8.12 omits a certain check of the dst data structure, which allows remote attackers to cause a denial of service (panic) via a fragmented IPv6 packet. 
<p>Publish Date: 2016-12-08 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9919>CVE-2016-9919</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9919">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9919</a></p> <p>Release Date: 2016-12-08</p> <p>Fix Resolution: v4.9-rc8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linux stable autoclosed cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net icmp c net icmp c vulnerability details the send function in net icmp c in the linux kernel through omits a certain check of the dst data structure which allows remote attackers to cause a denial of service panic via a fragmented packet publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
17,327
23,144,197,041
IssuesEvent
2022-07-28 21:53:25
medic/cht-core
https://api.github.com/repos/medic/cht-core
closed
Release 3.15.0-FR-bulk-user-upload
Type: Internal process
When development is ready to begin on a [Feature Release](https://docs.communityhealthtoolkit.org/core/releases/feature_releases/#release-names), an engineer on the appropriate Care Team or Allies should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are followed, though not necessarily doing the work themselves. # Planning - [x] Create a GH Milestone for the release. - [x] Add all the issues to be worked on to the Milestone. - [X] Have an actual named deployment and specific end user that will be testing this Feature Release. They need to test in production, on the latest version. No speculative Feature Releases. - [X] Assign an engineer as Release Engineer for this release. - @latin-panda \o/ # Development - [x] Create a new release branch in `cht-core` from the most recent release and call it `<major>.<minor>.<patch>-FR-<FEATURE-NAME>`. If latest is `3.15.0` and the feature is to "allow movies to be uploaded", call it `3.15.0-FR-movie-upload`. Done before the release so all PRs can be set to merge to this branch, and not to `master`. - [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <feature-release>`. - [x] Ensure QA is briefed and is partnering with the Trio to ensure early and often checks of the feature are on track to be of production quality from the start. # Releasing This is an iterative process and it's assumed there will be multiple numbered releases throughout development of the Feature Release. - [x] Build a beta named `<major>.<minor>.<patch>-FR-<FEATURE-NAME>-1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. If an updated Feature Release is needed, increment the last `1` by calling it `<major>.<minor>.<patch>-FR-<FEATURE-NAME>-2` etc. 
# Close-out - [x] Validate with the actual end user that this Feature Release delivers a quantifiable improvement. If yes, plan on adding the feature to the next minor release by creating a new ticket to merge the code to `master`. If no, we leave the code dead in this branch, never to be merged to `master`, but still loved all the same. - [x] Mark this issue "done" and close the Milestone.
1.0
Release 3.15.0-FR-bulk-user-upload - When development is ready to begin on a [Feature Release](https://docs.communityhealthtoolkit.org/core/releases/feature_releases/#release-names), an engineer on the appropriate Care Team or Allies should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are followed, though not necessarily doing the work themselves. # Planning - [x] Create a GH Milestone for the release. - [x] Add all the issues to be worked on to the Milestone. - [X] Have an actual named deployment and specific end user that will be testing this Feature Release. They need to test in production, on the latest version. No speculative Feature Releases. - [X] Assign an engineer as Release Engineer for this release. - @latin-panda \o/ # Development - [x] Create a new release branch in `cht-core` from the most recent release and call it `<major>.<minor>.<patch>-FR-<FEATURE-NAME>`. If latest is `3.15.0` and the feature is to "allow movies to be uploaded", call it `3.15.0-FR-movie-upload`. Done before the release so all PRs can be set to merge to this branch, and not to `master`. - [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <feature-release>`. - [x] Ensure QA is briefed and is partnering with the Trio to ensure early and often checks of the feature are on track to be of production quality from the start. # Releasing This is an iterative process and it's assumed there will be multiple numbered releases throughout development of the Feature Release. - [x] Build a beta named `<major>.<minor>.<patch>-FR-<FEATURE-NAME>-1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. If an updated Feature Release is needed, increment the last `1` by calling it `<major>.<minor>.<patch>-FR-<FEATURE-NAME>-2` etc. 
# Close-out - [x] Validate with the actual end user that this Feature Release delivers a quantifiable improvement. If yes, plan on adding the feature to the next minor release by creating a new ticket to merge the code to `master`. If no, we leave the code dead in this branch, never to be merged to `master`, but still loved all the same. - [x] Mark this issue "done" and close the Milestone.
process
release fr bulk user upload when development is ready to begin on a an engineer on the appropriate care team or allies should be nominated as a release engineer they will be responsible for making sure the following tasks are followed though not necessarily doing the work themselves planning create a gh milestone for the release add all the issues to be worked on to the milestone have an actual named deployment and specific end user that will be testing this feature release they need to test in production on the latest version no speculative feature releases assign an engineer as release engineer for this release latin panda o development create a new release branch in cht core from the most recent release and call it fr if latest is and the feature is to allow movies to be uploaded call it fr movie upload done before the release so all prs can be set to merge to this branch and not to master set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version ensure qa is briefed and is partnering with the trio to ensure early and often checks of the feature are on track to be of production quality from the start releasing this is an iterative process and it s assumed there will be multiple numbered releases throughout development of the feature release build a beta named fr by pushing a git tag and when ci completes successfully notify the qa team that it s ready for release testing if an updated feature release is needed increment the last by calling it fr etc close out validate with the actual end user that this feature release delivers a quantifiable improvement if yes plan on adding the feature to the next minor release by creating a new ticket to merge the code to master if no we leave the code dead in this branch never to be merged to master but still loved all the same mark this issue done and close the milestone
1
139,805
12,879,938,878
IssuesEvent
2020-07-12 01:59:16
rdwoodring/seven-degrees-of-staind
https://api.github.com/repos/rdwoodring/seven-degrees-of-staind
closed
Update the installation documentation to reflect the switch to yarn and the new package structure
documentation
see #128
1.0
Update the installation documentation to reflect the switch to yarn and the new package structure - see #128
non_process
update the installation documentation to reflect the switch to yarn and the new package structure see
0
11,623
14,484,661,507
IssuesEvent
2020-12-10 16:37:13
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Pipeline Resources triggering based on commit
Pri2 devops-cicd-process/tech devops/prod product-question
Nowhere in the documentation explains the ability for the pipeline resource to be linked to the same commit as the commit that is being triggered on the root pipeline. Is this ability possible and if not will it be possible in the future? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2 * Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee * Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=example) * Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Pipeline Resources triggering based on commit - Nowhere in the documentation explains the ability for the pipeline resource to be linked to the same commit as the commit that is being triggered on the root pipeline. Is this ability possible and if not will it be possible in the future? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2 * Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee * Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=example) * Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
pipeline resources triggering based on commit nowhere in the documentation explains the ability for the pipeline resource to be linked to the same commit as the commit that is being triggered on the root pipeline is this ability possible and if not will it be possible in the future document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
409,832
11,968,396,628
IssuesEvent
2020-04-06 08:35:10
osmontrouge/caresteouvert
https://api.github.com/repos/osmontrouge/caresteouvert
closed
NoteReview alors que correction effectuée
priority: medium question
Hello, Je suis passé sur une note, mais le compte générique de caresteouvert a crée une modif en base de sorte qu'il n'y avait rien à faire. J'ai eu ce truc plusieurs fois. Un genre de note faux-positif. Je laisse la note en place ![image](https://user-images.githubusercontent.com/10323517/77835553-29268700-714e-11ea-81b8-35e717374efa.png) ![image](https://user-images.githubusercontent.com/10323517/77835560-3e031a80-714e-11ea-8713-c00ef78f3887.png) Changeset: https://www.openstreetmap.org/changeset/82763009
1.0
NoteReview alors que correction effectuée - Hello, Je suis passé sur une note, mais le compte générique de caresteouvert a crée une modif en base de sorte qu'il n'y avait rien à faire. J'ai eu ce truc plusieurs fois. Un genre de note faux-positif. Je laisse la note en place ![image](https://user-images.githubusercontent.com/10323517/77835553-29268700-714e-11ea-81b8-35e717374efa.png) ![image](https://user-images.githubusercontent.com/10323517/77835560-3e031a80-714e-11ea-8713-c00ef78f3887.png) Changeset: https://www.openstreetmap.org/changeset/82763009
non_process
notereview alors que correction effectuée hello je suis passé sur une note mais le compte générique de caresteouvert a crée une modif en base de sorte qu il n y avait rien à faire j ai eu ce truc plusieurs fois un genre de note faux positif je laisse la note en place changeset
0
48,011
12,134,379,938
IssuesEvent
2020-04-23 10:36:59
kwk/test-llvm-bz-import-5
https://api.github.com/repos/kwk/test-llvm-bz-import-5
closed
3 VC8 Build Scripts (Projects) are out of date, patches included
BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED Build scripts/Makefiles dummy import from bugzilla
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=2148.
1.0
3 VC8 Build Scripts (Projects) are out of date, patches included - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=2148.
non_process
build scripts projects are out of date patches included this issue was imported from bugzilla
0
21,256
28,377,040,215
IssuesEvent
2023-04-12 21:50:22
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] [Bug] Join `:fields` not converted correctly
Type:Bug .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
A legacy MBQL query with `:fields` in a join does not validate against the pMBQL schema ```clj legacy query= {:database 2356, :type :query, :query {:joins [{:source-table 32386, :alias "Cat", :condition [:= [:field 133035 nil] [:field 133035 nil]], :fields [[:field 133049 {:join-alias "Cat"}]]}], :limit 1, :source-table 32383}} pMBQL= {:lib/type :mbql/query, :type :pipeline, :stages [{:lib/type :mbql.stage/mbql, :lib/options #:lib{:uuid "95d9eb3d-f6d2-45a4-a78d-f638871dd388"}, :joins [{:alias "Cat", :condition [:= #:lib{:uuid "cf80f8fc-9041-4aa2-a762-ae9efdd141df"} [:field #:lib{:uuid "4244eb5b-2c65-4b27-b8aa-0f10404ea44b"} 133035] [:field #:lib{:uuid "b97e1b92-791d-4913-bef4-f34257377430"} 133035]], :fields [[:field 133049 {:join-alias "Cat"}]], :lib/type :mbql/join, :stages [{:lib/type :mbql.stage/mbql, :lib/options #:lib{:uuid "e5973146-73d1-4e6c-a9c4-c0b44fd0625e"}, :source-table 32386}], :lib/options #:lib{:uuid "d84e85d6-c88a-4ff4-bd53-3a39723a95ef"}}], :limit 1, :source-table 32383}], :database 2356} ``` It looks like the `:fields` inside `:joins` aren't getting recursively transformed correctly
1.0
[MLv2] [Bug] Join `:fields` not converted correctly - A legacy MBQL query with `:fields` in a join does not validate against the pMBQL schema ```clj legacy query= {:database 2356, :type :query, :query {:joins [{:source-table 32386, :alias "Cat", :condition [:= [:field 133035 nil] [:field 133035 nil]], :fields [[:field 133049 {:join-alias "Cat"}]]}], :limit 1, :source-table 32383}} pMBQL= {:lib/type :mbql/query, :type :pipeline, :stages [{:lib/type :mbql.stage/mbql, :lib/options #:lib{:uuid "95d9eb3d-f6d2-45a4-a78d-f638871dd388"}, :joins [{:alias "Cat", :condition [:= #:lib{:uuid "cf80f8fc-9041-4aa2-a762-ae9efdd141df"} [:field #:lib{:uuid "4244eb5b-2c65-4b27-b8aa-0f10404ea44b"} 133035] [:field #:lib{:uuid "b97e1b92-791d-4913-bef4-f34257377430"} 133035]], :fields [[:field 133049 {:join-alias "Cat"}]], :lib/type :mbql/join, :stages [{:lib/type :mbql.stage/mbql, :lib/options #:lib{:uuid "e5973146-73d1-4e6c-a9c4-c0b44fd0625e"}, :source-table 32386}], :lib/options #:lib{:uuid "d84e85d6-c88a-4ff4-bd53-3a39723a95ef"}}], :limit 1, :source-table 32383}], :database 2356} ``` It looks like the `:fields` inside `:joins` aren't getting recursively transformed correctly
process
join fields not converted correctly a legacy mbql query with fields in a join does not validate against the pmbql schema clj legacy query database type query query joins source table alias cat condition fields limit source table pmbql lib type mbql query type pipeline stages lib type mbql stage mbql lib options lib uuid joins alias cat condition lib uuid fields lib type mbql join stages lib type mbql stage mbql lib options lib uuid source table lib options lib uuid limit source table database it looks like the fields inside joins aren t getting recursively transformed correctly
1
5,629
8,482,109,831
IssuesEvent
2018-10-25 17:36:13
aspnet/IISIntegration
https://api.github.com/repos/aspnet/IISIntegration
closed
question: What will work in/with `in-process` model
in-process
1. [x] asp.net-core on dotnet core with framework-dependent-publish 2. [ ] one application pool multiple application 3. [x] self-dependent-publish app 4. [ ] asp.net core on .net framework
1.0
question: What will work in/with `in-process` model - 1. [x] asp.net-core on dotnet core with framework-dependent-publish 2. [ ] one application pool multiple application 3. [x] self-dependent-publish app 4. [ ] asp.net core on .net framework
process
question what will work in with in process model asp net core on dotnet core with framework dependent publish one application pool multiple application self dependent publish app asp net core on net framework
1
3,958
6,893,594,723
IssuesEvent
2017-11-23 05:15:37
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode failed in CI
area-System.Diagnostics.Process test-run-core
Failed test: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode Configuration: osx10.12_debug Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/osx10.12_debug/3296/testReport/System.Diagnostics.Tests/ProcessWaitingTests/SingleProcess_EnableRaisingEvents_CorrectExitCode_exitCode__0_/ MESSAGE: ~~~ Assert.Equal() Failure\nExpected: 0\nActual: 131 ~~~ STACK TRACE: ~~~ at System.Diagnostics.Tests.ProcessWaitingTests.<SingleProcess_EnableRaisingEvents_CorrectExitCode>d__6.MoveNext() in /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx10.12_debug/src/System.Diagnostics.Process/tests/ProcessWaitingTests.cs:line 123 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/ExceptionServices/ExceptionServicesCommon.cs:line 130 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/CompilerServices/TaskAwaiter.cs:line 152 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/ExceptionServices/ExceptionServicesCommon.cs:line 130 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/CompilerServices/TaskAwaiter.cs:line 152 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/ExceptionServices/ExceptionServicesCommon.cs:line 130 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) in 
/Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/CompilerServices/TaskAwaiter.cs:line 152 ~~~
1.0
System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode failed in CI - Failed test: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode Configuration: osx10.12_debug Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/osx10.12_debug/3296/testReport/System.Diagnostics.Tests/ProcessWaitingTests/SingleProcess_EnableRaisingEvents_CorrectExitCode_exitCode__0_/ MESSAGE: ~~~ Assert.Equal() Failure\nExpected: 0\nActual: 131 ~~~ STACK TRACE: ~~~ at System.Diagnostics.Tests.ProcessWaitingTests.<SingleProcess_EnableRaisingEvents_CorrectExitCode>d__6.MoveNext() in /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx10.12_debug/src/System.Diagnostics.Process/tests/ProcessWaitingTests.cs:line 123 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/ExceptionServices/ExceptionServicesCommon.cs:line 130 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/CompilerServices/TaskAwaiter.cs:line 152 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/ExceptionServices/ExceptionServicesCommon.cs:line 130 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/CompilerServices/TaskAwaiter.cs:line 152 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() in 
/Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/ExceptionServices/ExceptionServicesCommon.cs:line 130 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) in /Users/buildagent/agent/_work/153/s/src/mscorlib/src/System/Runtime/CompilerServices/TaskAwaiter.cs:line 152 ~~~
process
system diagnostics tests processwaitingtests singleprocess enableraisingevents correctexitcode failed in ci failed test system diagnostics tests processwaitingtests singleprocess enableraisingevents correctexitcode configuration debug detail message assert equal failure nexpected nactual stack trace at system diagnostics tests processwaitingtests d movenext in users dotnet bot j workspace dotnet corefx master debug src system diagnostics process tests processwaitingtests cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw in users buildagent agent work s src mscorlib src system runtime exceptionservices exceptionservicescommon cs line at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task in users buildagent agent work s src mscorlib src system runtime compilerservices taskawaiter cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw in users buildagent agent work s src mscorlib src system runtime exceptionservices exceptionservicescommon cs line at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task in users buildagent agent work s src mscorlib src system runtime compilerservices taskawaiter cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw in users buildagent agent work s src mscorlib src system runtime exceptionservices exceptionservicescommon cs line at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task in users buildagent agent work s src mscorlib src system runtime compilerservices taskawaiter cs line
1
55,350
13,617,310,463
IssuesEvent
2020-09-23 16:49:00
virtualsatellite/VirtualSatellite4-CEF
https://api.github.com/repos/virtualsatellite/VirtualSatellite4-CEF
closed
Release 4.12.0
build
Virtual Satellite Release Version 4.12.0 This ticket captures all release related work of the VirtualSatellite4-CEF release 1. Perform version update: - [x] Checkout/Update the Development branch - [x] Remove current integration branch (Make sure no one else is integrating at the moment) - [x] Create new integration branch from development branch - [x] Run ant script to update version numbers - [x] Regenerate concept.xmi of all relevant projects - [x] Launch application from product configuration and resolve all dependency issues - [x] Merge integration branch into development branch (Pull Request named "Integration 4.12.0 - Remerge Versions") 2. Perform integration on integration branch: - [x] Apply all needed fixes - [x] Update the release notes 3. Update master/release branch: - [x] Merge integration branch into master branch (Pull Request named "Release 4.12.0") - [x] Create Release Tag 4. Merge back integration branch: - [x] Merge integration branch into development branch (Pull Request named "Integration 4.12.0 - Remerge Fixes") Well Done!! You should have a new Virtual Satellite CEF Release :rocket:
1.0
Release 4.12.0 - Virtual Satellite Release Version 4.12.0 This ticket captures all release related work of the VirtualSatellite4-CEF release 1. Perform version update: - [x] Checkout/Update the Development branch - [x] Remove current integration branch (Make sure no one else is integrating at the moment) - [x] Create new integration branch from development branch - [x] Run ant script to update version numbers - [x] Regenerate concept.xmi of all relevant projects - [x] Launch application from product configuration and resolve all dependency issues - [x] Merge integration branch into development branch (Pull Request named "Integration 4.12.0 - Remerge Versions") 2. Perform integration on integration branch: - [x] Apply all needed fixes - [x] Update the release notes 3. Update master/release branch: - [x] Merge integration branch into master branch (Pull Request named "Release 4.12.0") - [x] Create Release Tag 4. Merge back integration branch: - [x] Merge integration branch into development branch (Pull Request named "Integration 4.12.0 - Remerge Fixes") Well Done!! You should have a new Virtual Satellite CEF Release :rocket:
non_process
release virtual satellite release version this ticket captures all release related work of the cef release perform version update checkout update the development branch remove current integration branch make sure no one else is integrating at the moment create new integration branch from development branch run ant script to update version numbers regenerate concept xmi of all relevant projects launch application from product configuration and resolve all dependency issues merge integration branch into development branch pull request named integration remerge versions perform integration on integration branch apply all needed fixes update the release notes update master release branch merge integration branch into master branch pull request named release create release tag merge back integration branch merge integration branch into development branch pull request named integration remerge fixes well done you should have a new virtual satellite cef release rocket
0
11,840
9,460,887,092
IssuesEvent
2019-04-17 12:12:49
pytest-dev/pytest
https://api.github.com/repos/pytest-dev/pytest
closed
Flaky test: test_pytest_exit_returncode fails due to ResourceWarning
type: bug type: infrastructure
``` _________________________ test_pytest_exit_returncode __________________________ [gw0] linux -- Python 3.5.3 /home/travis/build/pytest-dev/pytest/.tox/pypy3-xdist/bin/python testdir = <Testdir local('/tmp/pytest-of-travis/pytest-0/popen-gw0/test_pytest_exit_returncode0')> def test_pytest_exit_returncode(testdir): testdir.makepyfile( """ import pytest def test_foo(): pytest.exit("some exit msg", 99) """ ) result = testdir.runpytest() result.stdout.fnmatch_lines(["*! *Exit: some exit msg !*"]) > assert result.stderr.lines == [""] E assert ['Exception i...name=16>', ''] == [''] E At index 0 diff: "Exception ignored in: <_io.FileIO name=16 mode='wb' closefd=True>" != '' E Left contains 2 more items, first extra item: 'ResourceWarning: unclosed file <_io.BufferedWriter name=16>' E Use -v to get the full diff /home/travis/build/pytest-dev/pytest/testing/test_runner.py:584: AssertionError ----------------------------- Captured stdout call ----------------------------- ============================= test session starts ============================== platform linux -- Python 3.5.3[pypy-6.0.0-final], pytest-4.4.1.dev66+g482a0ba, py-1.8.0, pluggy-0.9.0 hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/travis/build/pytest-dev/pytest/.hypothesis/examples') rootdir: /tmp/pytest-of-travis/pytest-0/popen-gw0/test_pytest_exit_returncode0 plugins: xdist-1.28.0, forked-1.0.2, hypothesis-4.15.0 collected 1 item test_pytest_exit_returncode.py ========================= no tests ran in 0.61 seconds ========================= !!!!!!!!!!!!!!!!!!!!! _pytest.outcomes.Exit: some exit msg !!!!!!!!!!!!!!!!!!!!! 
----------------------------- Captured stderr call ----------------------------- Exception ignored in: <_io.FileIO name=16 mode='wb' closefd=True> ResourceWarning: unclosed file <_io.BufferedWriter name=16> =============================== warnings summary =============================== testing/test_warnings.py:701 /home/travis/build/pytest-dev/pytest/testing/test_warnings.py:701: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time testdir.copy_example("warnings/test_group_warnings_by_message.py") -- Docs: https://docs.pytest.org/en/latest/warnings.html ``` https://travis-ci.org/pytest-dev/pytest/jobs/518502601#L5025 Python: pypy3.5-6.0 TOXENV=pypy3-xdist
1.0
Flaky test: test_pytest_exit_returncode fails due to ResourceWarning - ``` _________________________ test_pytest_exit_returncode __________________________ [gw0] linux -- Python 3.5.3 /home/travis/build/pytest-dev/pytest/.tox/pypy3-xdist/bin/python testdir = <Testdir local('/tmp/pytest-of-travis/pytest-0/popen-gw0/test_pytest_exit_returncode0')> def test_pytest_exit_returncode(testdir): testdir.makepyfile( """ import pytest def test_foo(): pytest.exit("some exit msg", 99) """ ) result = testdir.runpytest() result.stdout.fnmatch_lines(["*! *Exit: some exit msg !*"]) > assert result.stderr.lines == [""] E assert ['Exception i...name=16>', ''] == [''] E At index 0 diff: "Exception ignored in: <_io.FileIO name=16 mode='wb' closefd=True>" != '' E Left contains 2 more items, first extra item: 'ResourceWarning: unclosed file <_io.BufferedWriter name=16>' E Use -v to get the full diff /home/travis/build/pytest-dev/pytest/testing/test_runner.py:584: AssertionError ----------------------------- Captured stdout call ----------------------------- ============================= test session starts ============================== platform linux -- Python 3.5.3[pypy-6.0.0-final], pytest-4.4.1.dev66+g482a0ba, py-1.8.0, pluggy-0.9.0 hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/travis/build/pytest-dev/pytest/.hypothesis/examples') rootdir: /tmp/pytest-of-travis/pytest-0/popen-gw0/test_pytest_exit_returncode0 plugins: xdist-1.28.0, forked-1.0.2, hypothesis-4.15.0 collected 1 item test_pytest_exit_returncode.py ========================= no tests ran in 0.61 seconds ========================= !!!!!!!!!!!!!!!!!!!!! _pytest.outcomes.Exit: some exit msg !!!!!!!!!!!!!!!!!!!!! 
----------------------------- Captured stderr call ----------------------------- Exception ignored in: <_io.FileIO name=16 mode='wb' closefd=True> ResourceWarning: unclosed file <_io.BufferedWriter name=16> =============================== warnings summary =============================== testing/test_warnings.py:701 /home/travis/build/pytest-dev/pytest/testing/test_warnings.py:701: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time testdir.copy_example("warnings/test_group_warnings_by_message.py") -- Docs: https://docs.pytest.org/en/latest/warnings.html ``` https://travis-ci.org/pytest-dev/pytest/jobs/518502601#L5025 Python: pypy3.5-6.0 TOXENV=pypy3-xdist
non_process
flaky test test pytest exit returncode fails due to resourcewarning test pytest exit returncode linux python home travis build pytest dev pytest tox xdist bin python testdir def test pytest exit returncode testdir testdir makepyfile import pytest def test foo pytest exit some exit msg result testdir runpytest result stdout fnmatch lines assert result stderr lines e assert e at index diff exception ignored in e left contains more items first extra item resourcewarning unclosed file e use v to get the full diff home travis build pytest dev pytest testing test runner py assertionerror captured stdout call test session starts platform linux python pytest py pluggy hypothesis profile default database directorybasedexampledatabase home travis build pytest dev pytest hypothesis examples rootdir tmp pytest of travis pytest popen test pytest exit plugins xdist forked hypothesis collected item test pytest exit returncode py no tests ran in seconds pytest outcomes exit some exit msg captured stderr call exception ignored in resourcewarning unclosed file warnings summary testing test warnings py home travis build pytest dev pytest testing test warnings py pytestexperimentalapiwarning testdir copy example is an experimental api that may change over time testdir copy example warnings test group warnings by message py docs python toxenv xdist
0
381,121
11,273,857,986
IssuesEvent
2020-01-14 17:20:21
myceworld/myce
https://api.github.com/repos/myceworld/myce
closed
[third party] update coingecko
Priority: Low Type: Bug
**Describe the solution** https://www.coingecko.com/en/coins/myce shows no `circulating supply` and affects several sites taking the info from there
1.0
[third party] update coingecko - **Describe the solution** https://www.coingecko.com/en/coins/myce shows no `circulating supply` and affects several sites taking the info from there
non_process
update coingecko describe the solution shows no circulating supply and affects several sites taking the info from there
0
16,409
21,191,413,463
IssuesEvent
2022-04-08 17:52:33
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
internal: flaky unit test - PEM certs
process: flaky test stage: icebox
### Current behavior When running our Cypress unit tests, this has been failing often lately. <img width="1404" alt="Screen Shot 2021-07-19 at 8 52 09 AM" src="https://user-images.githubusercontent.com/1271364/126170867-6e4a546e-53da-4086-82fc-6e993e479d07.png"> ### Desired behavior _No response_ ### Test code to reproduce Failing Circl: https://app.circleci.com/pipelines/github/cypress-io/cypress/22260/workflows/043dddbc-1095-4a98-bd32-3606e59e65ce/jobs/816365 ### Cypress Version develop ### Other _No response_
1.0
internal: flaky unit test - PEM certs - ### Current behavior When running our Cypress unit tests, this has been failing often lately. <img width="1404" alt="Screen Shot 2021-07-19 at 8 52 09 AM" src="https://user-images.githubusercontent.com/1271364/126170867-6e4a546e-53da-4086-82fc-6e993e479d07.png"> ### Desired behavior _No response_ ### Test code to reproduce Failing Circl: https://app.circleci.com/pipelines/github/cypress-io/cypress/22260/workflows/043dddbc-1095-4a98-bd32-3606e59e65ce/jobs/816365 ### Cypress Version develop ### Other _No response_
process
internal flaky unit test pem certs current behavior when running our cypress unit tests this has been failing often lately img width alt screen shot at am src desired behavior no response test code to reproduce failing circl cypress version develop other no response
1
320,233
23,804,819,912
IssuesEvent
2022-09-03 21:54:21
strigiforme/portfolio
https://api.github.com/repos/strigiforme/portfolio
closed
Document SSL Setup
documentation
I used the instructions at https://certbot.eff.org/instructions?ws=nginx&os=ubuntufocal Create some light docs to describe how this was done.
1.0
Document SSL Setup - I used the instructions at https://certbot.eff.org/instructions?ws=nginx&os=ubuntufocal Create some light docs to describe how this was done.
non_process
document ssl setup i used the instructions at create some light docs to describe how this was done
0
200,402
15,105,362,015
IssuesEvent
2021-02-08 12:57:09
NOAA-EMC/UFS_UTILS
https://api.github.com/repos/NOAA-EMC/UFS_UTILS
opened
Modify existing workflow to use caches for ESMF, NCEPLIBS, etc.
test
Currently we are rebuilding the same versions of ESMF, Jasper, and NCEPLIBS. Seems like we could cache these, since they are the same every time. This would make the CI faster.
1.0
Modify existing workflow to use caches for ESMF, NCEPLIBS, etc. - Currently we are rebuilding the same versions of ESMF, Jasper, and NCEPLIBS. Seems like we could cache these, since they are the same every time. This would make the CI faster.
non_process
modify existing workflow to use caches for esmf nceplibs etc currently we are rebuilding the same versions of esmf jasper and nceplibs seems like we could cache these since they are the same every time this would make the ci faster
0
20,734
27,432,591,840
IssuesEvent
2023-03-02 03:23:27
alibaba/MNN
https://api.github.com/repos/alibaba/MNN
closed
怎么把输出的特征图保存成图片(转换成opencv然后保存也可)
question cv/ImageProcess
类似demo/segment.cpp,获得的outputTensor怎么保存成图片呢? ``` net->runSession(session); auto outputTensor = net->getSessionOutput(session, nullptr); ```
1.0
怎么把输出的特征图保存成图片(转换成opencv然后保存也可) - 类似demo/segment.cpp,获得的outputTensor怎么保存成图片呢? ``` net->runSession(session); auto outputTensor = net->getSessionOutput(session, nullptr); ```
process
怎么把输出的特征图保存成图片(转换成opencv然后保存也可) 类似demo segment cpp,获得的outputtensor怎么保存成图片呢? net runsession session auto outputtensor net getsessionoutput session nullptr
1
9,503
12,490,670,393
IssuesEvent
2020-06-01 01:10:19
pingcap/tidb
https://api.github.com/repos/pingcap/tidb
closed
TiDB should pushdown collation insensitive expr to TiFlash even if new collation is enabled
component/charset component/coprocessor type/feature-request
## Feature Request **Is your feature request related to a problem? Please describe:** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> TiDB support collation if new collation feature is enable, while TiFlash does not support collation yet, so TiDB disable all the expr pushdown to TiFlash if new collation is enabled. Which mean expression like `int_column > 10` can not be pushed down to TiFlash. **Describe the feature you'd like:** <!-- A clear and concise description of what you want to happen. --> TiDB should pushdown collation insensitive expr to TiFlash even if new collation is enabled **Describe alternatives you've considered:** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Teachability, Documentation, Adoption, Migration Strategy:** <!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. -->
1.0
TiDB should pushdown collation insensitive expr to TiFlash even if new collation is enabled - ## Feature Request **Is your feature request related to a problem? Please describe:** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> TiDB support collation if new collation feature is enable, while TiFlash does not support collation yet, so TiDB disable all the expr pushdown to TiFlash if new collation is enabled. Which mean expression like `int_column > 10` can not be pushed down to TiFlash. **Describe the feature you'd like:** <!-- A clear and concise description of what you want to happen. --> TiDB should pushdown collation insensitive expr to TiFlash even if new collation is enabled **Describe alternatives you've considered:** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Teachability, Documentation, Adoption, Migration Strategy:** <!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. -->
process
tidb should pushdown collation insensitive expr to tiflash even if new collation is enabled feature request is your feature request related to a problem please describe tidb support collation if new collation feature is enable while tiflash does not support collation yet so tidb disable all the expr pushdown to tiflash if new collation is enabled which mean expression like int column can not be pushed down to tiflash describe the feature you d like tidb should pushdown collation insensitive expr to tiflash even if new collation is enabled describe alternatives you ve considered teachability documentation adoption migration strategy
1
7,524
10,599,433,742
IssuesEvent
2019-10-10 07:56:49
linnovate/root
https://api.github.com/repos/linnovate/root
closed
filters in sub projects dont work
Process bug Projects
create a project and a sub project try to filter by status or custom,status, title, etc... the list isnt updated
1.0
filters in sub projects dont work - create a project and a sub project try to filter by status or custom,status, title, etc... the list isnt updated
process
filters in sub projects dont work create a project and a sub project try to filter by status or custom status title etc the list isnt updated
1
22,965
15,705,354,936
IssuesEvent
2021-03-26 16:03:09
emory-libraries/blacklight-catalog
https://api.github.com/repos/emory-libraries/blacklight-catalog
closed
STORY: Automate Infrastructure Configuration - ARCH
Epic Infrastructure Operational Story
In order to facilitate environment setup, disaster recovery, and performance testing, we want infrastructure configuration to be automated. - [x] Get new environment terraformed - [x] Update tfm to use stock amzn2 AMI - [x] ansible all required packages - [x] verify ansible shibboleth config w/ apache - [x] ensure new environment works with capistrano - [x] create new solr core in solr-cloud installation - [x] configure f5 rules to hit ALB - [x] configure dns to refer arch to new environment - [x] Configure Shibb for arch - [ ] rinse and repeat for test - [ ] rinse and repeat for prod
1.0
STORY: Automate Infrastructure Configuration - ARCH - In order to facilitate environment setup, disaster recovery, and performance testing, we want infrastructure configuration to be automated. - [x] Get new environment terraformed - [x] Update tfm to use stock amzn2 AMI - [x] ansible all required packages - [x] verify ansible shibboleth config w/ apache - [x] ensure new environment works with capistrano - [x] create new solr core in solr-cloud installation - [x] configure f5 rules to hit ALB - [x] configure dns to refer arch to new environment - [x] Configure Shibb for arch - [ ] rinse and repeat for test - [ ] rinse and repeat for prod
non_process
story automate infrastructure configuration arch in order to facilitate environment setup disaster recovery and performance testing we want infrastructure configuration to be automated get new environment terraformed update tfm to use stock ami ansible all required packages verify ansible shibboleth config w apache ensure new environment works with capistrano create new solr core in solr cloud installation configure rules to hit alb configure dns to refer arch to new environment configure shibb for arch rinse and repeat for test rinse and repeat for prod
0
334,866
10,146,629,637
IssuesEvent
2019-08-05 08:38:23
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.qwant.com - site is not usable
browser-fenix engine-gecko priority-important
<!-- @browser: Firefox preview build #11901818 --> <!-- @ua_header: Mozilla/5.0 (Android 5.1; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.qwant.com/maps **Browser / Version**: Firefox preview build #11901818 **Operating System**: Android 5.1 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: map is not rendered **Steps to Reproduce**: While with Firefox the map is showed correctly, with Firefox preview this doesn't work <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.qwant.com - site is not usable - <!-- @browser: Firefox preview build #11901818 --> <!-- @ua_header: Mozilla/5.0 (Android 5.1; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.qwant.com/maps **Browser / Version**: Firefox preview build #11901818 **Operating System**: Android 5.1 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: map is not rendered **Steps to Reproduce**: While with Firefox the map is showed correctly, with Firefox preview this doesn't work <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox preview build operating system android tested another browser yes problem type site is not usable description map is not rendered steps to reproduce while with firefox the map is showed correctly with firefox preview this doesn t work browser configuration none from with ❤️
0
22,014
30,519,477,794
IssuesEvent
2023-07-19 07:01:00
fkdl0048/BookReview
https://api.github.com/repos/fkdl0048/BookReview
closed
The Object-Oriented Thought Process
2023 The Object-Oriented Thought Process
2차 오프라인 멘토링 증정 책 - [x] #39 - [x] #53 - [x] #62 - [x] #82 - [x] #83 - [x] #84 - [x] #117 - [x] #119 - [x] #120 - [x] #121 - [x] #122 - [x] #123
1.0
The Object-Oriented Thought Process - 2차 오프라인 멘토링 증정 책 - [x] #39 - [x] #53 - [x] #62 - [x] #82 - [x] #83 - [x] #84 - [x] #117 - [x] #119 - [x] #120 - [x] #121 - [x] #122 - [x] #123
process
the object oriented thought process 오프라인 멘토링 증정 책
1
2,073
4,889,611,615
IssuesEvent
2016-11-18 10:49:01
openvstorage/framework
https://api.github.com/repos/openvstorage/framework
closed
Disable role assignment button when a role is being assigned
priority_minor process_wontfix type_feature
In continuation of #694 it would be interesting to disable the role assignment button when a role is already being assigned to a partition.
1.0
Disable role assignment button when a role is being assigned - In continuation of #694 it would be interesting to disable the role assignment button when a role is already being assigned to a partition.
process
disable role assignment button when a role is being assigned in continuation of it would be interesting to disable the role assignment button when a role is already being assigned to a partition
1
16,442
21,317,070,219
IssuesEvent
2022-04-16 13:16:28
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
No warning for broken in-topic link / unresolved link text.
bug priority/medium preprocess stale
## Expected Behavior The attached topic has three invalid `@href` attributes that refer to content within the same file. When building HTML5 output, the result is a broken link and link text like `#jira680__twwo` and `#badtopic__twwo` The build should warn about these unresolved links during `topicpull`. ## Actual Behavior No message, clean build, bad output. ## Steps to Reproduce Build the following topic to HTML5: ``` <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"> <topic id="jira680" xml:lang="en-us"> <title>Jira680</title> <shortdesc>Why does this link not report an error</shortdesc> <body> <ol> <li>First item! look at the second! <xref href="#./tvo"/> and <xref href="#jira680/twwo"/> and <xref href="#badtopic/twwo"/></li> <li id="two">whoops I broke my ID when I linked here</li> </ol> </body> </topic> ``` ## Environment * DITA-OT version: 3.3.4 * Operating system and version: Windows * How did you run DITA-OT? `dita` command * Transformation type: HTML5
1.0
No warning for broken in-topic link / unresolved link text. - ## Expected Behavior The attached topic has three invalid `@href` attributes that refer to content within the same file. When building HTML5 output, the result is a broken link and link text like `#jira680__twwo` and `#badtopic__twwo` The build should warn about these unresolved links during `topicpull`. ## Actual Behavior No message, clean build, bad output. ## Steps to Reproduce Build the following topic to HTML5: ``` <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"> <topic id="jira680" xml:lang="en-us"> <title>Jira680</title> <shortdesc>Why does this link not report an error</shortdesc> <body> <ol> <li>First item! look at the second! <xref href="#./tvo"/> and <xref href="#jira680/twwo"/> and <xref href="#badtopic/twwo"/></li> <li id="two">whoops I broke my ID when I linked here</li> </ol> </body> </topic> ``` ## Environment * DITA-OT version: 3.3.4 * Operating system and version: Windows * How did you run DITA-OT? `dita` command * Transformation type: HTML5
process
no warning for broken in topic link unresolved link text expected behavior the attached topic has three invalid href attributes that refer to content within the same file when building output the result is a broken link and link text like twwo and badtopic twwo the build should warn about these unresolved links during topicpull actual behavior no message clean build bad output steps to reproduce build the following topic to why does this link not report an error first item look at the second and and xref href badtopic twwo whoops i broke my id when i linked here environment dita ot version operating system and version windows how did you run dita ot dita command transformation type
1
259,297
27,621,798,018
IssuesEvent
2023-03-10 01:12:24
nidhi7598/linux-3.0.35
https://api.github.com/repos/nidhi7598/linux-3.0.35
closed
CVE-2016-9604 (Medium) detected in linuxlinux-3.0.40 - autoclosed
Mend: dependency security vulnerability
## CVE-2016-9604 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/keys/keyctl.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/keys/keyctl.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> It was discovered in the Linux kernel before 4.11-rc8 that root can gain direct access to an internal keyring, such as '.dns_resolver' in RHEL-7 or '.builtin_trusted_keys' upstream, by joining it as its session keyring. This allows root to bypass module signature verification by adding a new public key of its own devising to the keyring. 
<p>Publish Date: 2018-07-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9604>CVE-2016-9604</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-9604">https://nvd.nist.gov/vuln/detail/CVE-2016-9604</a></p> <p>Release Date: 2018-07-11</p> <p>Fix Resolution: 4.11-rc8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-9604 (Medium) detected in linuxlinux-3.0.40 - autoclosed - ## CVE-2016-9604 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/keys/keyctl.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/keys/keyctl.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> It was discovered in the Linux kernel before 4.11-rc8 that root can gain direct access to an internal keyring, such as '.dns_resolver' in RHEL-7 or '.builtin_trusted_keys' upstream, by joining it as its session keyring. This allows root to bypass module signature verification by adding a new public key of its own devising to the keyring. 
<p>Publish Date: 2018-07-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9604>CVE-2016-9604</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-9604">https://nvd.nist.gov/vuln/detail/CVE-2016-9604</a></p> <p>Release Date: 2018-07-11</p> <p>Fix Resolution: 4.11-rc8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files security keys keyctl c security keys keyctl c vulnerability details it was discovered in the linux kernel before that root can gain direct access to an internal keyring such as dns resolver in rhel or builtin trusted keys upstream by joining it as its session keyring this allows root to bypass module signature verification by adding a new public key of its own devising to the keyring publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
16,691
21,791,228,517
IssuesEvent
2022-05-14 23:38:26
googlefonts/noto-fonts
https://api.github.com/repos/googlefonts/noto-fonts
closed
Filenames causing problem with GitHub Desktop in Windows
Platform-Windows Noto-Process-Issue
The names of certain recently added files are invalid in Windows, which creates problems when pulling with GitHub Desktop. The files are in the main/images/issue_data/ directory, a total of 8 files with names starting with “s:” (which is not valid in Windows filenames), such as `s: myanmar-TOP01-googlefonts.noto-fonts.weekly-data.csv`
1.0
Filenames causing problem with GitHub Desktop in Windows - The names of certain recently added files are invalid in Windows, which creates problems when pulling with GitHub Desktop. The files are in the main/images/issue_data/ directory, a total of 8 files with names starting with “s:” (which is not valid in Windows filenames), such as `s: myanmar-TOP01-googlefonts.noto-fonts.weekly-data.csv`
process
filenames causing problem with github desktop in windows the names of certain recently added files are invalid in windows which creates problems when pulling with github desktop the files are in the main images issue data directory a total of files with names starting with “s ” which is not valid in windows filenames such as s myanmar googlefonts noto fonts weekly data csv
1
431,052
30,215,125,961
IssuesEvent
2023-07-05 15:05:55
cdisc-org/cdisc-rules-engine
https://api.github.com/repos/cdisc-org/cdisc-rules-engine
closed
Operator refactoring
documentation schema
**Description** Currently operator parameters are independent of operators **Acceptance Criteria** - Each operator in the json-schema has required and optional parameters specified individually
1.0
Operator refactoring - **Description** Currently operator parameters are independent of operators **Acceptance Criteria** - Each operator in the json-schema has required and optional parameters specified individually
non_process
operator refactoring description currently operator parameters are independent of operators acceptance criteria each operator in the json schema has required and optional parameters specified individually
0
84,939
7,948,729,041
IssuesEvent
2018-07-11 09:05:00
apache/incubator-mxnet
https://api.github.com/repos/apache/incubator-mxnet
closed
Flaky keyserver keyserver.ubuntu.com
CI Flaky Test
ci/docker/install/ubuntu_scala.sh often fails due to gpg keyserver.
1.0
Flaky keyserver keyserver.ubuntu.com - ci/docker/install/ubuntu_scala.sh often fails due to gpg keyserver.
non_process
flaky keyserver keyserver ubuntu com ci docker install ubuntu scala sh often fails due to gpg keyserver
0
19,957
26,432,825,340
IssuesEvent
2023-01-15 02:00:07
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 13 Jan 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Event-Based Frame Interpolation with Ad-hoc Deblurring - **Authors:** Lei Sun, Christos Sakaridis, Jingyun Liang, Peng Sun, Jiezhang Cao, Kai Zhang, Qi Jiang, Kaiwei Wang, Luc Van Gool - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.05191 - **Pdf link:** https://arxiv.org/pdf/2301.05191 - **Abstract** The performance of video frame interpolation is inherently correlated with the ability to handle motion in the input scene. Even though previous works recognize the utility of asynchronous event information for this task, they ignore the fact that motion may or may not result in blur in the input video to be interpolated, depending on the length of the exposure time of the frames and the speed of the motion, and assume either that the input video is sharp, restricting themselves to frame interpolation, or that it is blurry, including an explicit, separate deblurring stage before interpolation in their pipeline. We instead propose a general method for event-based frame interpolation that performs deblurring ad-hoc and thus works both on sharp and blurry input videos. Our model consists in a bidirectional recurrent network that naturally incorporates the temporal dimension of interpolation and fuses information from the input frames and the events adaptively based on their temporal proximity. In addition, we introduce a novel real-world high-resolution dataset with events and color videos named HighREV, which provides a challenging evaluation setting for the examined task. Extensive experiments on the standard GoPro benchmark and on our dataset show that our network consistently outperforms previous state-of-the-art methods on frame interpolation, single image deblurring and the joint task of interpolation and deblurring. Our code and dataset will be made publicly available. 
## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Real-time FPGA implementation of the Semi-Global Matching stereo vision algorithm for a 4K/UHD video stream - **Authors:** Mariusz Grabowski, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2301.04847 - **Pdf link:** https://arxiv.org/pdf/2301.04847 - **Abstract** In this paper, we propose a real-time FPGA implementation of the Semi-Global Matching (SGM) stereo vision algorithm. The designed module supports a 4K/Ultra HD (3840 x 2160 pixels @ 30 frames per second) video stream in a 4 pixel per clock (ppc) format and a 64-pixel disparity range. The baseline SGM implementation had to be modified to process pixels in the 4ppc format and meet the timing constrains, however, our version provides results comparable to the original design. The solution has been positively evaluated on the Xilinx VC707 development board with a Virtex-7 FPGA device. ### Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms - **Authors:** Sophie Noiret, Siddharth Ravi, Martin Kampel, Francisco Florez-Revuelta - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2301.05012 - **Pdf link:** https://arxiv.org/pdf/2301.05012 - **Abstract** As the privacy risks posed by camera surveillance and facial recognition have grown, so has the research into privacy preservation algorithms. Among these, visual privacy preservation algorithms attempt to impart bodily privacy to subjects in visuals by obfuscating privacy-sensitive areas. 
While disparate performances of facial recognition systems across phenotypes are the subject of much study, its counterpart, privacy preservation, is not commonly analysed from a fairness perspective. In this paper, the fairness of commonly used visual privacy preservation algorithms is investigated through the performances of facial recognition models on obfuscated images. Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups. ### Scene-Aware 3D Multi-Human Motion Capture from a Single Camera - **Authors:** Diogo Luvizon, Marc Habermann, Vladislav Golyanik, Adam Kortylewski, Christian Theobalt - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.05175 - **Pdf link:** https://arxiv.org/pdf/2301.05175 - **Abstract** In this work, we consider the problem of estimating the 3D position of multiple humans in a scene as well as their body shape and articulation from a single RGB video recorded with a static camera. In contrast to expensive marker-based or multi-view systems, our lightweight setup is ideal for private users as it enables an affordable 3D motion capture that is easy to install and does not require expert knowledge. To deal with this challenging setting, we leverage recent advances in computer vision using large-scale pre-trained models for a variety of modalities, including 2D body joints, joint angles, normalized disparity maps, and human segmentation masks. Thus, we introduce the first non-linear optimization-based approach that jointly solves for the absolute 3D position of each human, their articulated pose, their individual shapes as well as the scale of the scene. In particular, we estimate the scene depth and person unique scale from normalized disparity predictions using the 2D body joints and joint angles. Given the per-frame scene depth, we reconstruct a point-cloud of the static scene in 3D space. 
Finally, given the per-frame 3D estimates of the humans and scene point-cloud, we perform a space-time coherent optimization over the video to ensure temporal, spatial and physical plausibility. We evaluate our method on established multi-person 3D human pose benchmarks where we consistently outperform previous methods and we qualitatively demonstrate that our method is robust to in-the-wild conditions including challenging scenes with people of different sizes. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Edge Preserving Implicit Surface Representation of Point Clouds - **Authors:** Xiaogang Wang, Yuhang Cheng, Liang Wang, Jiangbo Lu, Kai Xu, Guoqiang Xiao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2301.04860 - **Pdf link:** https://arxiv.org/pdf/2301.04860 - **Abstract** Learning implicit surface directly from raw data recently has become a very attractive representation method for 3D reconstruction tasks due to its excellent performance. However, as the raw data quality deteriorates, the implicit functions often lead to unsatisfactory reconstruction results. To this end, we propose a novel edge-preserving implicit surface reconstruction method, which mainly consists of a differentiable Laplacian regularizer and a dynamic edge sampling strategy. Among them, the differentiable Laplacian regularizer can effectively alleviate the implicit surface unsmoothness caused by deteriorating point cloud quality; meanwhile, in order to reduce the excessive smoothing at the edge regions of the implicit surface, we propose a dynamic edge extraction strategy for sampling near the sharp edges of the point cloud, which can effectively prevent the Laplacian regularizer from smoothing all regions.
Finally, we combine them with a simple regularization term for robust implicit surface reconstruction. Compared with the state-of-the-art methods, experimental results show that our method significantly improves the quality of 3D reconstruction results. Moreover, we demonstrate through several experiments that our method can be conveniently and effectively applied to some point cloud analysis tasks, including point cloud edge feature extraction, normal estimation, etc. ## Keyword: raw image There is no result
2.0
New submissions for Fri, 13 Jan 23 - ## Keyword: events ### Event-Based Frame Interpolation with Ad-hoc Deblurring - **Authors:** Lei Sun, Christos Sakaridis, Jingyun Liang, Peng Sun, Jiezhang Cao, Kai Zhang, Qi Jiang, Kaiwei Wang, Luc Van Gool - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.05191 - **Pdf link:** https://arxiv.org/pdf/2301.05191 - **Abstract** The performance of video frame interpolation is inherently correlated with the ability to handle motion in the input scene. Even though previous works recognize the utility of asynchronous event information for this task, they ignore the fact that motion may or may not result in blur in the input video to be interpolated, depending on the length of the exposure time of the frames and the speed of the motion, and assume either that the input video is sharp, restricting themselves to frame interpolation, or that it is blurry, including an explicit, separate deblurring stage before interpolation in their pipeline. We instead propose a general method for event-based frame interpolation that performs deblurring ad-hoc and thus works both on sharp and blurry input videos. Our model consists in a bidirectional recurrent network that naturally incorporates the temporal dimension of interpolation and fuses information from the input frames and the events adaptively based on their temporal proximity. In addition, we introduce a novel real-world high-resolution dataset with events and color videos named HighREV, which provides a challenging evaluation setting for the examined task. Extensive experiments on the standard GoPro benchmark and on our dataset show that our network consistently outperforms previous state-of-the-art methods on frame interpolation, single image deblurring and the joint task of interpolation and deblurring. Our code and dataset will be made publicly available. 
## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Real-time FPGA implementation of the Semi-Global Matching stereo vision algorithm for a 4K/UHD video stream - **Authors:** Mariusz Grabowski, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2301.04847 - **Pdf link:** https://arxiv.org/pdf/2301.04847 - **Abstract** In this paper, we propose a real-time FPGA implementation of the Semi-Global Matching (SGM) stereo vision algorithm. The designed module supports a 4K/Ultra HD (3840 x 2160 pixels @ 30 frames per second) video stream in a 4 pixel per clock (ppc) format and a 64-pixel disparity range. The baseline SGM implementation had to be modified to process pixels in the 4ppc format and meet the timing constraints; however, our version provides results comparable to the original design. The solution has been positively evaluated on the Xilinx VC707 development board with a Virtex-7 FPGA device. ### Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms - **Authors:** Sophie Noiret, Siddharth Ravi, Martin Kampel, Francisco Florez-Revuelta - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2301.05012 - **Pdf link:** https://arxiv.org/pdf/2301.05012 - **Abstract** As the privacy risks posed by camera surveillance and facial recognition have grown, so has the research into privacy preservation algorithms. Among these, visual privacy preservation algorithms attempt to impart bodily privacy to subjects in visuals by obfuscating privacy-sensitive areas.
While disparate performances of facial recognition systems across phenotypes are the subject of much study, its counterpart, privacy preservation, is not commonly analysed from a fairness perspective. In this paper, the fairness of commonly used visual privacy preservation algorithms is investigated through the performances of facial recognition models on obfuscated images. Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups. ### Scene-Aware 3D Multi-Human Motion Capture from a Single Camera - **Authors:** Diogo Luvizon, Marc Habermann, Vladislav Golyanik, Adam Kortylewski, Christian Theobalt - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.05175 - **Pdf link:** https://arxiv.org/pdf/2301.05175 - **Abstract** In this work, we consider the problem of estimating the 3D position of multiple humans in a scene as well as their body shape and articulation from a single RGB video recorded with a static camera. In contrast to expensive marker-based or multi-view systems, our lightweight setup is ideal for private users as it enables an affordable 3D motion capture that is easy to install and does not require expert knowledge. To deal with this challenging setting, we leverage recent advances in computer vision using large-scale pre-trained models for a variety of modalities, including 2D body joints, joint angles, normalized disparity maps, and human segmentation masks. Thus, we introduce the first non-linear optimization-based approach that jointly solves for the absolute 3D position of each human, their articulated pose, their individual shapes as well as the scale of the scene. In particular, we estimate the scene depth and person unique scale from normalized disparity predictions using the 2D body joints and joint angles. Given the per-frame scene depth, we reconstruct a point-cloud of the static scene in 3D space. 
Finally, given the per-frame 3D estimates of the humans and scene point-cloud, we perform a space-time coherent optimization over the video to ensure temporal, spatial and physical plausibility. We evaluate our method on established multi-person 3D human pose benchmarks where we consistently outperform previous methods and we qualitatively demonstrate that our method is robust to in-the-wild conditions including challenging scenes with people of different sizes. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Edge Preserving Implicit Surface Representation of Point Clouds - **Authors:** Xiaogang Wang, Yuhang Cheng, Liang Wang, Jiangbo Lu, Kai Xu, Guoqiang Xiao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2301.04860 - **Pdf link:** https://arxiv.org/pdf/2301.04860 - **Abstract** Learning implicit surface directly from raw data recently has become a very attractive representation method for 3D reconstruction tasks due to its excellent performance. However, as the raw data quality deteriorates, the implicit functions often lead to unsatisfactory reconstruction results. To this end, we propose a novel edge-preserving implicit surface reconstruction method, which mainly consists of a differentiable Laplacian regularizer and a dynamic edge sampling strategy. Among them, the differentiable Laplacian regularizer can effectively alleviate the implicit surface unsmoothness caused by deteriorating point cloud quality; meanwhile, in order to reduce the excessive smoothing at the edge regions of the implicit surface, we propose a dynamic edge extraction strategy for sampling near the sharp edges of the point cloud, which can effectively prevent the Laplacian regularizer from smoothing all regions.
Finally, we combine them with a simple regularization term for robust implicit surface reconstruction. Compared with the state-of-the-art methods, experimental results show that our method significantly improves the quality of 3D reconstruction results. Moreover, we demonstrate through several experiments that our method can be conveniently and effectively applied to some point cloud analysis tasks, including point cloud edge feature extraction, normal estimation, etc. ## Keyword: raw image There is no result
process
new submissions for fri jan keyword events event based frame interpolation with ad hoc deblurring authors lei sun christos sakaridis jingyun liang peng sun jiezhang cao kai zhang qi jiang kaiwei wang luc van gool subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the performance of video frame interpolation is inherently correlated with the ability to handle motion in the input scene even though previous works recognize the utility of asynchronous event information for this task they ignore the fact that motion may or may not result in blur in the input video to be interpolated depending on the length of the exposure time of the frames and the speed of the motion and assume either that the input video is sharp restricting themselves to frame interpolation or that it is blurry including an explicit separate deblurring stage before interpolation in their pipeline we instead propose a general method for event based frame interpolation that performs deblurring ad hoc and thus works both on sharp and blurry input videos our model consists in a bidirectional recurrent network that naturally incorporates the temporal dimension of interpolation and fuses information from the input frames and the events adaptively based on their temporal proximity in addition we introduce a novel real world high resolution dataset with events and color videos named highrev which provides a challenging evaluation setting for the examined task extensive experiments on the standard gopro benchmark and on our dataset show that our network consistently outperforms previous state of the art methods on frame interpolation single image deblurring and the joint task of interpolation and deblurring our code and dataset will be made publicly available keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp real time fpga 
implementation of the semi global matching stereo vision algorithm for a uhd video stream authors mariusz grabowski tomasz kryjak subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract in this paper we propose a real time fpga implementation of the semi global matching sgm stereo vision algorithm the designed module supports a ultra hd x pixels frames per second video stream in a pixel per clock ppc format and a pixel disparity range the baseline sgm implementation had to be modified to process pixels in the format and meet the timing constrains however our version provides results comparable to the original design the solution has been positively evaluated on the xilinx development board with a virtex fpga device fairly private investigating the fairness of visual privacy preservation algorithms authors sophie noiret siddharth ravi martin kampel francisco florez revuelta subjects computer vision and pattern recognition cs cv cryptography and security cs cr machine learning cs lg arxiv link pdf link abstract as the privacy risks posed by camera surveillance and facial recognition have grown so has the research into privacy preservation algorithms among these visual privacy preservation algorithms attempt to impart bodily privacy to subjects in visuals by obfuscating privacy sensitive areas while disparate performances of facial recognition systems across phenotypes are the subject of much study its counterpart privacy preservation is not commonly analysed from a fairness perspective in this paper the fairness of commonly used visual privacy preservation algorithms is investigated through the performances of facial recognition models on obfuscated images experiments on the pubfig dataset clearly show that the privacy protection provided is unequal across groups scene aware multi human motion capture from a single camera authors diogo luvizon marc habermann vladislav golyanik adam kortylewski christian 
theobalt subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this work we consider the problem of estimating the position of multiple humans in a scene as well as their body shape and articulation from a single rgb video recorded with a static camera in contrast to expensive marker based or multi view systems our lightweight setup is ideal for private users as it enables an affordable motion capture that is easy to install and does not require expert knowledge to deal with this challenging setting we leverage recent advances in computer vision using large scale pre trained models for a variety of modalities including body joints joint angles normalized disparity maps and human segmentation masks thus we introduce the first non linear optimization based approach that jointly solves for the absolute position of each human their articulated pose their individual shapes as well as the scale of the scene in particular we estimate the scene depth and person unique scale from normalized disparity predictions using the body joints and joint angles given the per frame scene depth we reconstruct a point cloud of the static scene in space finally given the per frame estimates of the humans and scene point cloud we perform a space time coherent optimization over the video to ensure temporal spatial and physical plausibility we evaluate our method on established multi person human pose benchmarks where we consistently outperform previous methods and we qualitatively demonstrate that our method is robust to in the wild conditions including challenging scenes with people of different sizes keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw edge preserving implicit surface representation of point clouds authors xiaogang wang yuhang cheng liang wang jiangbo lu kai xu guoqiang xiao subjects computer vision and pattern recognition cs cv graphics cs gr 
arxiv link pdf link abstract learning implicit surface directly from raw data recently has become a very attractive representation method for reconstruction tasks due to its excellent performance however as the raw data quality deteriorates the implicit functions often lead to unsatisfactory reconstruction results to this end we propose a novel edge preserving implicit surface reconstruction method which mainly consists of a differentiable laplican regularizer and a dynamic edge sampling strategy among them the differential laplican regularizer can effectively alleviate the implicit surface unsmoothness caused by the point cloud quality deteriorates meanwhile in order to reduce the excessive smoothing at the edge regions of implicit suface we proposed a dynamic edge extract strategy for sampling near the sharp edge of point cloud which can effectively avoid the laplacian regularizer from smoothing all regions finally we combine them with a simple regularization term for robust implicit surface reconstruction compared with the state of the art methods experimental results show that our method significantly improves the quality of reconstruction results moreover we demonstrate through several experiments that our method can be conveniently and effectively applied to some point cloud analysis tasks including point cloud edge feature extraction normal estimation etc keyword raw image there is no result
1
171,047
27,053,129,506
IssuesEvent
2023-02-13 14:32:03
CMPUT301W23T47/Canary
https://api.github.com/repos/CMPUT301W23T47/Canary
closed
UI: Lo-FI Mockup for viewing Player's ranking by highest scored QR Code
design
Design a Lo-FI Mockup for viewing Player's ranking by highest-scored QR Code
1.0
UI: Lo-FI Mockup for viewing Player's ranking by highest scored QR Code - Design a Lo-FI Mockup for viewing Player's ranking by highest-scored QR Code
non_process
ui lo fi mockup for viewing player s ranking by highest scored qr code design a lo fi mockup for viewing player s ranking by highest scored qr code
0
270,546
23,517,195,372
IssuesEvent
2022-08-18 23:08:06
LMastro99/cricsheet-data
https://api.github.com/repos/LMastro99/cricsheet-data
opened
Add test for 'clean_innings_regulation_2_absent_hurt_data' function in Preprocess_Data class
testing
[Add 'clean_innings_regulation_2_absent_hurt_data'](https://github.com/LMastro99/cricsheet-data/commit/31789b45ab1e5340e16037720a7d54b6cd1ceeb8)
1.0
Add test for 'clean_innings_regulation_2_absent_hurt_data' function in Preprocess_Data class - [Add 'clean_innings_regulation_2_absent_hurt_data'](https://github.com/LMastro99/cricsheet-data/commit/31789b45ab1e5340e16037720a7d54b6cd1ceeb8)
non_process
add test for clean innings regulation absent hurt data function in preprocess data class
0
9,982
13,024,757,863
IssuesEvent
2020-07-27 12:26:45
spring-projects/spring-hateoas
https://api.github.com/repos/spring-projects/spring-hateoas
closed
VndErrors logref is not just Integer
in: mediatypes process: waiting for review specification type: bug
Looks like constructor is trying to parse `Integer` out from `String`: https://github.com/spring-projects/spring-hateoas/blob/5d1136ee121fa099ca0ae8fe08572a2b5f10b165/src/main/java/org/springframework/hateoas/mediatype/vnderrors/VndErrors.java#L95 This then for example in a dataflow errors out when trying to upgrade to boot 2.3.x: ``` java.lang.NumberFormatException: For input string: "NoSuchAppRegistrationException" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:580) at java.lang.Integer.parseInt(Integer.java:615) at org.springframework.hateoas.mediatype.vnderrors.VndErrors.<init>(VndErrors.java:95) at org.springframework.cloud.dataflow.server.controller.RestControllerAdvice.onNotFoundException(RestControllerAdvice.java:163) ``` At least https://github.com/blongden/vnd.error mentions `logref` being `numeric/alpha/alphanumeric`
1.0
VndErrors logref is not just Integer - Looks like constructor is trying to parse `Integer` out from `String`: https://github.com/spring-projects/spring-hateoas/blob/5d1136ee121fa099ca0ae8fe08572a2b5f10b165/src/main/java/org/springframework/hateoas/mediatype/vnderrors/VndErrors.java#L95 This then for example in a dataflow errors out when trying to upgrade to boot 2.3.x: ``` java.lang.NumberFormatException: For input string: "NoSuchAppRegistrationException" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:580) at java.lang.Integer.parseInt(Integer.java:615) at org.springframework.hateoas.mediatype.vnderrors.VndErrors.<init>(VndErrors.java:95) at org.springframework.cloud.dataflow.server.controller.RestControllerAdvice.onNotFoundException(RestControllerAdvice.java:163) ``` At least https://github.com/blongden/vnd.error mentions `logref` being `numeric/alpha/alphanumeric`
process
vnderrors logref is not just integer looks like constructor is trying to parse integer out from string this then for example in a dataflow errors out when trying to upgrade to boot x java lang numberformatexception for input string nosuchappregistrationexception at java lang numberformatexception forinputstring numberformatexception java at java lang integer parseint integer java at java lang integer parseint integer java at org springframework hateoas mediatype vnderrors vnderrors vnderrors java at org springframework cloud dataflow server controller restcontrolleradvice onnotfoundexception restcontrolleradvice java at least mentions logref being numeric alpha alphanumeric
1
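The record above traces to `VndErrors` parsing `logref` with `Integer.parseInt`, even though the vnd.error draft it cites allows numeric, alpha, or alphanumeric values. A minimal Python sketch of the same failure mode and a tolerant fallback (the function name is hypothetical, not Spring HATEOAS API):

```python
def parse_logref(logref):
    """Accept numeric, alpha, or alphanumeric logref values.

    A strict integer parse (the equivalent of Java's Integer.parseInt)
    raises on values such as "NoSuchAppRegistrationException"; falling
    back to the raw string keeps those values usable.
    """
    try:
        return int(logref)
    except ValueError:
        return logref

print(parse_logref("42"))                              # → 42
print(parse_logref("NoSuchAppRegistrationException"))  # → the string, unchanged
```

The same try-parse-then-fallback shape is one common way to widen a field type without breaking callers that still pass numbers.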
2,731
5,619,694,076
IssuesEvent
2017-04-04 02:57:27
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Can I have the opportunity to filter already parsed html report with dates.
add duplicate enhancement log-processing
Hello, it's me again. I'd like to ask if we have the opportunity to filter real time html report with `start_date` and `end_date` arguments. Since real time html report got more and more information day by day, it would be useful if we can select specific time period and look deep into it.
1.0
Can I have the opportunity to filter already parsed html report with dates. - Hello, it's me again. I'd like to ask if we have the opportunity to filter real time html report with `start_date` and `end_date` arguments. Since real time html report got more and more information day by day, it would be useful if we can select specific time period and look deep into it.
process
can i have the opportunity to filter already parsed html report with dates hello it s me again i d like to ask if we have the opportunity to filter real time html report with start date and end date arguments since real time html report got more and more information day by day it would be useful if we can select specific time period and look deep into it
1
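Date filtering is not available on an already-generated report, but a common workaround is to pre-filter the raw access log by date before piping it to goaccess. A rough Python sketch for common/combined log format timestamps (the field layout is an assumption about the log format, not goaccess code):

```python
from datetime import datetime

def in_range(line, start_date, end_date):
    """Keep a common/combined log format line whose timestamp, e.g.
    [10/Oct/2000:13:55:36 -0700], falls within [start_date, end_date]."""
    try:
        # Timestamp sits between the first pair of square brackets.
        stamp = line.split("[", 1)[1].split("]", 1)[0]
        when = datetime.strptime(stamp.split()[0], "%d/%b/%Y:%H:%M:%S").date()
    except (IndexError, ValueError):
        return False  # malformed line: drop it
    return start_date <= when <= end_date

sample = '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.0" 200 2326'
print(in_range(sample, datetime(2000, 10, 1).date(), datetime(2000, 10, 31).date()))  # → True
```

Lines passing such a filter can then be fed to goaccess to produce a report restricted to that period.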
25,230
7,657,114,765
IssuesEvent
2018-05-10 18:33:05
runconduit/conduit
https://api.github.com/repos/runconduit/conduit
opened
Strip all Go executables
area/build
Right now we strip debug information on the `conduit` CLI executable, using `-ldflags -s -w`, due to PR #367. I didn't do the same for the other executables because I didn't know if it would be useful to have debug information for them when running in the cluster. The goal with stripping the `conduit` executable was to reduce its download size. Stripping the other (Go) executables would also reduce their download size though I'm not sure how much we care now since the base Docker image is probably much bigger. However, @olix0r found in PR #921 that stripping also seems to speed up the build. That might be a good justification to strip the rest of them.
1.0
Strip all Go executables - Right now we strip debug information on the `conduit` CLI executable, using `-ldflags -s -w`, due to PR #367. I didn't do the same for the other executables because I didn't know if it would be useful to have debug information for them when running in the cluster. The goal with stripping the `conduit` executable was to reduce its download size. Stripping the other (Go) executables would also reduce their download size though I'm not sure how much we care now since the base Docker image is probably much bigger. However, @olix0r found in PR #921 that stripping also seems to speed up the build. That might be a good justification to strip the rest of them.
non_process
strip all go executables right now we strip debug information on the conduit cli executable using ldflags s w due to pr i didn t do the same for the other executables because i didn t know if it would be useful to have debug information for them when running in the cluster the goal with stripping the conduit executable was to reduce its download size stripping the other go executables would also reduce their download size though i m not sure how much we care now since the base docker image is probably much bigger however found in pr that stripping also seems to speed up the build that might be a good justification to strip the rest of them
0
66,473
16,618,930,155
IssuesEvent
2021-06-02 20:47:08
Macaulay2/M2
https://api.github.com/repos/Macaulay2/M2
closed
Minimum autoconf version?
build system
Currently, it's set at 2.71: https://github.com/Macaulay2/M2/blob/104c35429dbc9d0a9defe0443f82caeaaf440679/M2/Makefile#L19-L24 However, autoconf 2.71 is still relatively new (Jan 2021), and isn't available in any Debian-based distributions yet (except Debian experimental -- https://tracker.debian.org/pkg/autoconf). I have no problems building Macaulay2 with autoconf 2.69 (which is what's available on my system -- Ubuntu 21.04) after editing the Makefile to allow it. Is there a reason the minimum version is set to what it's at? I almost never build any of the included libraries or programs, so maybe one of them needs it?
1.0
Minimum autoconf version? - Currently, it's set at 2.71: https://github.com/Macaulay2/M2/blob/104c35429dbc9d0a9defe0443f82caeaaf440679/M2/Makefile#L19-L24 However, autoconf 2.71 is still relatively new (Jan 2021), and isn't available in any Debian-based distributions yet (except Debian experimental -- https://tracker.debian.org/pkg/autoconf). I have no problems building Macaulay2 with autoconf 2.69 (which is what's available on my system -- Ubuntu 21.04) after editing the Makefile to allow it. Is there a reason the minimum version is set to what it's at? I almost never build any of the included libraries or programs, so maybe one of them needs it?
non_process
minimum autoconf version currently it s set at however autoconf is still relatively new jan and isn t available in any debian based distributions yet except debian experimental i have no problems building with autoconf which is what s available on my system ubuntu after editing the makefile to allow it is there a reason the minimum version is set to what it s at i almost never build any of the included libraries or programs so maybe one of them needs it
0
486,718
14,012,975,219
IssuesEvent
2020-10-29 09:47:47
jetstack/cert-manager
https://api.github.com/repos/jetstack/cert-manager
closed
Incorrect zone/domain when looking up CAA
area/acme/http01 priority/awaiting-more-evidence triage/needs-information
> Bugs should be filed for issues encountered whilst operating cert-manager. > You should first attempt to resolve your issues through the community support > channels, e.g. Slack, in order to rule out individual configuration errors. > Please provide as much detail as possible. **Describe the bug**: Using cert-manager to issue ACME certificates for a subdomain using HTTP01 challenge results in the error `Accepting challenge authorization failed: acme: authorization error for whoami.subdomain.example.com: 400 urn:ietf:params:acme:error:dns: DNS problem: SERVFAIL looking up CAA for example.com - the domain's nameservers may be malfunctioning` This happens because cert-manager is looking for CAA records for the root domain example.com instead of the provided zone subdomain.example.com. If a zone for the root domain does not exist, certificate issuance will fail. **Expected behaviour**: Certificate should be issued without an error. **Steps to reproduce the bug**: * Bring up a k3s cluster. * Create a DNS Zone sub.dom.ain and point it to your cluster/loadbalancer * Install Cert-Manager and create a LetsEncrypt staging Issuer * Deploy a certificate using the above issuer for any ingress resource * Certificate Issuance will fail **Anything else we need to know?**: This only happens when CAA records for the root domain do not exist. **Environment details:**: - Kubernetes version (e.g. v1.10.2): 1.17.5 - Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): k3s - cert-manager version (e.g. v0.4.0): 0.15.1 - Install method (static manifests): manifests /kind bug
1.0
Incorrect zone/domain when looking up CAA - > Bugs should be filed for issues encountered whilst operating cert-manager. > You should first attempt to resolve your issues through the community support > channels, e.g. Slack, in order to rule out individual configuration errors. > Please provide as much detail as possible. **Describe the bug**: Using cert-manager to issue ACME certificates for a subdomain using HTTP01 challenge results in the error `Accepting challenge authorization failed: acme: authorization error for whoami.subdomain.example.com: 400 urn:ietf:params:acme:error:dns: DNS problem: SERVFAIL looking up CAA for example.com - the domain's nameservers may be malfunctioning` This happens because cert-manager is looking for CAA records for the root domain example.com instead of the provided zone subdomain.example.com. If a zone for the root domain does not exist, certificate issuance will fail. **Expected behaviour**: Certificate should be issued without an error. **Steps to reproduce the bug**: * Bring up a k3s cluster. * Create a DNS Zone sub.dom.ain and point it to your cluster/loadbalancer * Install Cert-Manager and create a LetsEncrypt staging Issuer * Deploy a certificate using the above issuer for any ingress resource * Certificate Issuance will fail **Anything else we need to know?**: This only happens when CAA records for the root domain do not exist. **Environment details:**: - Kubernetes version (e.g. v1.10.2): 1.17.5 - Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): k3s - cert-manager version (e.g. v0.4.0): 0.15.1 - Install method (static manifests): manifests /kind bug
label: non_process
text:
incorrect zone domain when looking up caa bugs should be filed for issues encountered whilst operating cert manager you should first attempt to resolve your issues through the community support channels e g slack in order to rule out individual configuration errors please provide as much detail as possible describe the bug a clear and concise description of what the bug is using cert manger to issue acme certificates for a subdomain using challenge results in the error accepting challenge authorization failed acme authorization error for whoami subdomain example com urn ietf params acme error dns dns problem servfail looking up caa for example com the domain s nameservers may be malfunctioning this happens because cert manager is looking for caa records for the root domain example com instead of the provided zone subdomain example com if a zone for the root domain does not exist certificate issuance will fail expected behaviour certificate should be issued without an error steps to reproduce the bug bring up a cluster create a dns zone sub dom ain and point it to your cluster loadbalancer install cert manager and create a letsencrypt staging issuer deploy a certifcate using the above issuer for any ingress resource certificate issuance will fail anything else we need to know this only happens when caa records for the root domain do not exist environment details kubernetes version e g cloud provider provisioner e g gke kops aws etc cert manager version e g install method static manifests manifests kind bug
binary_label: 0

Unnamed: 0: 237,771
id: 19,674,251,720
type: IssuesEvent
created_at: 2022-01-11 10:37:38
repo: parallaxsecond/parsec
repo_url: https://api.github.com/repos/parallaxsecond/parsec
action: opened
title: Disable test from old E2E suite
labels: bug small testing
body:
The following test should be disabled/ignored in the CI script, when running old E2E tests in the per-provider workflows: per_provider::normal_tests::asym_sign_verify::fail_verify_hash . The test takes a valid signature and adds 1 to one of its bytes, attempting to make it invalid. However, every now and then the initial byte value turns out to be 0xFF, which leads to the addition overflowing and thus the whole test and workflow failing.
index: 1.0
text_combine:
Disable test from old E2E suite - The following test should be disabled/ignored in the CI script, when running old E2E tests in the per-provider workflows: per_provider::normal_tests::asym_sign_verify::fail_verify_hash . The test takes a valid signature and adds 1 to one of its bytes, attempting to make it invalid. However, every now and then the initial byte value turns out to be 0xFF, which leads to the addition overflowing and thus the whole test and workflow failing.
label: non_process
text:
disable test from old suite the following test should be disabled ignored in the ci script when running old tests in the per provider workflows per provider normal tests asym sign verify fail verify hash the test takes a valid signature and adds to one of its bytes attempting to make it invalid however every now and then the initial byte value turns out to be which leads to the addition overflowing and thus the whole test and workflow failing
binary_label: 0

Unnamed: 0: 14,736
id: 18,006,275,432
type: IssuesEvent
created_at: 2021-09-16 00:17:14
repo: dtcenter/MET
repo_url: https://api.github.com/repos/dtcenter/MET
action: closed
title: pb2nc: missing some values on processing all variable with BURF input
labels: type: enhancement priority: medium alert: NEED MORE DEFINITION alert: NEED ACCOUNT KEY component: CI/CD alert: NEED PROJECT ASSIGNMENT requestor: METplus Team MET: PreProcessing Tools (Point)
body:
To process all available variables for BUFR input, pb2nc checks the valid data at the first level only. I found a case the valid data for HOCT variable is at the third level (the 56959-th message from /d1/projects/MET/MET_test_data/unit_test/obs_data/prepbufr/ndas/nam.20120409.t12z.prepbufr.tm00.nr, the unit test "pb2nc_NDAS_var_all"). The unittest worked OK because the HOCT data was filtered out by station id ("14008", the unit test accepts only "72364", "72265", "72274", "72426", "72489"). ## Describe the Enhancement ## Checking the valid value up to 10 levels. The best solution will be checking all vertical levels. This will cause more execution time to find out which variables have the valid value. ### Time Estimate ### 4 hours ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required - [x] Select **scientist(s)** or **no scientist** required: None ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. 
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
index: 1.0
text_combine:
pb2nc: missing some values on processing all variable with BURF input - To process all available variables for BUFR input, pb2nc checks the valid data at the first level only. I found a case the valid data for HOCT variable is at the third level (the 56959-th message from /d1/projects/MET/MET_test_data/unit_test/obs_data/prepbufr/ndas/nam.20120409.t12z.prepbufr.tm00.nr, the unit test "pb2nc_NDAS_var_all"). The unittest worked OK because the HOCT data was filtered out by station id ("14008", the unit test accepts only "72364", "72265", "72274", "72426", "72489"). ## Describe the Enhancement ## Checking the valid value up to 10 levels. The best solution will be checking all vertical levels. This will cause more execution time to find out which variables have the valid value. ### Time Estimate ### 4 hours ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required - [x] Select **scientist(s)** or **no scientist** required: None ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. 
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
label: process
text:
missing some values on processing all variable with burf input to process all available variables for bufr input checks the valid data at the first level only i found a case the valid data for hoct variable is at the third level the th message from projects met met test data unit test obs data prepbufr ndas nam prepbufr nr the unit test ndas var all the unittest worked ok because the hoct data was filtered out by station id the unit test accepts only describe the enhancement checking the valid value up to levels the best solution will be checking all vertical levels this will cause more execution time to find out which variables have the valid value time estimate hours sub issues consider breaking the enhancement down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required none labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the 
next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
binary_label: 1

Unnamed: 0: 17,433
id: 23,251,367,836
type: IssuesEvent
created_at: 2022-08-04 04:19:58
repo: streamnative/flink
repo_url: https://api.github.com/repos/streamnative/flink
action: opened
title: [Pulsar Connector] Backlog has too many noack messages.
labels: compute/data-processing
body:
According to community user, they run into some backlog related issues in 1.15.0.1. The two issues are: First: no ack messages are around 50 ~ 100, which is not desired. Need to add more context ![image.png](https://images.zenhubusercontent.com/61564d60ef4304714f6337a4/dc52a08d-3172-45c7-b13e-6053efa628f9) Second: the mark-delete-position seems points to a ledger from an old subscription instead of the new subscription. ![WechatIMG106.png](https://images.zenhubusercontent.com/61564d60ef4304714f6337a4/facc8f34-9e39-471a-b54a-7a29f6a4e346) ![WechatIMG107.png](https://images.zenhubusercontent.com/61564d60ef4304714f6337a4/67688e8a-4c64-46b8-b6a3-62964f110c36 )
index: 1.0
text_combine:
[Pulsar Connector] Backlog has too many noack messages. - According to community user, they run into some backlog related issues in 1.15.0.1. The two issues are: First: no ack messages are around 50 ~ 100, which is not desired. Need to add more context ![image.png](https://images.zenhubusercontent.com/61564d60ef4304714f6337a4/dc52a08d-3172-45c7-b13e-6053efa628f9) Second: the mark-delete-position seems points to a ledger from an old subscription instead of the new subscription. ![WechatIMG106.png](https://images.zenhubusercontent.com/61564d60ef4304714f6337a4/facc8f34-9e39-471a-b54a-7a29f6a4e346) ![WechatIMG107.png](https://images.zenhubusercontent.com/61564d60ef4304714f6337a4/67688e8a-4c64-46b8-b6a3-62964f110c36 )
label: process
text:
backlog has too many noack messages according to community user they run into some backlog related issues in the two issues are first no ack messages are around which is not desired need to add more context second the mark delete position seems points to a ledger from an old subscription instead of the new subscription
binary_label: 1

Unnamed: 0: 12,954
id: 15,339,374,239
type: IssuesEvent
created_at: 2021-02-27 01:44:39
repo: Ghost-chu/QuickShop-Reremake
repo_url: https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
action: closed
title: [PERFORMANCE] ProtectionListenerBase.getShopRedstone() is bad on performance
labels: Help Wanted In Process Performance Issue Priority:Major Waiting For Reply
body:
**Describe the issue** ProtectionListenerBase.getShopRedstone() used in various places such as BlockListener#onInventoryMove() and ShopProtectionListener#onInventoryMove() is causing bad performance, using up to 10% of each tick on some sparks! This is extremely poor performance. I can see that you have tried to cache the shop result, it seems the cache is not working well. **To Reproduce** N/A **Expected behavior** Good performance. **Screenshots** N/A **Paste link:** Execute command /qs paste, you will get a link contains your server information, paste it under this text. You must create a paste, except plugin completely won't work. If you create failed, you should find a paste file under the plugin/QuickShop folder. - /qs paste does not output anything to file or console. **Spark result link: (Require)** - https://spark.lucko.me/#VXEICulqdf **Additional context** N/A
index: 1.0
text_combine:
[PERFORMANCE] ProtectionListenerBase.getShopRedstone() is bad on performance - **Describe the issue** ProtectionListenerBase.getShopRedstone() used in various places such as BlockListener#onInventoryMove() and ShopProtectionListener#onInventoryMove() is causing bad performance, using up to 10% of each tick on some sparks! This is extremely poor performance. I can see that you have tried to cache the shop result, it seems the cache is not working well. **To Reproduce** N/A **Expected behavior** Good performance. **Screenshots** N/A **Paste link:** Execute command /qs paste, you will get a link contains your server information, paste it under this text. You must create a paste, except plugin completely won't work. If you create failed, you should find a paste file under the plugin/QuickShop folder. - /qs paste does not output anything to file or console. **Spark result link: (Require)** - https://spark.lucko.me/#VXEICulqdf **Additional context** N/A
label: process
text:
protectionlistenerbase getshopredstone is bad on performance describe the issue protectionlistenerbase getshopredstone used in various places such as blocklistener oninventorymove and shopprotectionlistener oninventorymove is causing bad performance using up to of each tick on some sparks this is extremely poor performance i can see that you have tried to cache the shop result it seems the cache is not working well to reproduce n a expected behavior good performance screenshots n a paste link execute command qs paste you will get a link contains your server information paste it under this text you must create a paste except plugin completely won t work if you create failed you should find a paste file under the plugin quickshop folder qs paste does not output anything to file or console spark result link require additional context n a
binary_label: 1

Unnamed: 0: 15,544
id: 19,703,501,609
type: IssuesEvent
created_at: 2022-01-12 19:07:53
repo: googleapis/java-managed-identities
repo_url: https://api.github.com/repos/googleapis/java-managed-identities
action: opened
title: Your .repo-metadata.json file has a problem 🤒
labels: type: process repo-metadata: lint
body:
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'managed-identities' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
index: 1.0
text_combine:
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'managed-identities' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
label: process
text:
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname managed identities invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
binary_label: 1

Unnamed: 0: 19,446
id: 25,722,349,927
type: IssuesEvent
created_at: 2022-12-07 14:31:16
repo: open-telemetry/opentelemetry-collector-contrib
repo_url: https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
action: closed
title: [servicegraphprocessor] Edge loss attributes from client-side or server-side
labels: bug priority:p2 processor/servicegraph
body:
### Describe the issue you're reporting Edge attributes are set by `p.upsertDimensions()` whatever `span.kind` is client or server: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/304196f51b12804846e24d060d0f9b9e2b6df059/processor/servicegraphprocessor/processor.go#L198-L231 it does't handle a sense that the two spans have a attribute with same name but different value, like client span have attribute of `cluster: A` but server span have attribute of `cluster: B` we can add prefix to distinguish them: client_cluster: A server_cluster: B
index: 1.0
text_combine:
[servicegraphprocessor] Edge loss attributes from client-side or server-side - ### Describe the issue you're reporting Edge attributes are set by `p.upsertDimensions()` whatever `span.kind` is client or server: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/304196f51b12804846e24d060d0f9b9e2b6df059/processor/servicegraphprocessor/processor.go#L198-L231 it does't handle a sense that the two spans have a attribute with same name but different value, like client span have attribute of `cluster: A` but server span have attribute of `cluster: B` we can add prefix to distinguish them: client_cluster: A server_cluster: B
label: process
text:
edge loss attributes from client side or server side describe the issue you re reporting edge attributes are set by p upsertdimensions whatever span kind is client or server it does t handle a sense that the two spans have a attribute with same name but different value like client span have attribute of cluster a but server span have attribute of cluster b we can add prefix to distinguish them client cluster a server cluster b
binary_label: 1

Unnamed: 0: 127,367
id: 5,029,268,262
type: IssuesEvent
created_at: 2016-12-15 20:40:32
repo: kubernetes/kubernetes
repo_url: https://api.github.com/repos/kubernetes/kubernetes
action: opened
title: kubectl in-cluster config trumps command line arguments
labels: priority/important-soon
body:
Found in openshift, but I'm pretty sure the codepaths are the same. ``` sh-4.2$ oc --token=my-users-token get user/~ NAME UID FULL NAME IDENTITIES system:serviceaccount:deads:default ``` That means the token is getting ignored and in-cluster config is being used in preference to flags. @kubernetes/sig-cli @smarterclayton
index: 1.0
text_combine:
kubectl in-cluster config trumps command line arguments - Found in openshift, but I'm pretty sure the codepaths are the same. ``` sh-4.2$ oc --token=my-users-token get user/~ NAME UID FULL NAME IDENTITIES system:serviceaccount:deads:default ``` That means the token is getting ignored and in-cluster config is being used in preference to flags. @kubernetes/sig-cli @smarterclayton
label: non_process
text:
kubectl in cluster config trumps command line arguments found in openshift but i m pretty sure the codepaths are the same sh oc token my users token get user name uid full name identities system serviceaccount deads default that means the token is getting ignored and in cluster config is being used in preference to flags kubernetes sig cli smarterclayton
binary_label: 0

Unnamed: 0: 17,158
id: 22,716,878,530
type: IssuesEvent
created_at: 2022-07-06 03:31:58
repo: camunda/feel-scala
repo_url: https://api.github.com/repos/camunda/feel-scala
action: closed
title: Release 1.15.0 doesn't have a compile dependency to fastparse
labels: type: bug team/process-automation
body:
**Describe the bug** The version `1.15.0` has a dependency issue. In a dependent project, the new version results in the following runtime error: ``` java.lang.NoClassDefFoundError: fastparse/Parsed ``` In the [pom.xml](https://mvnrepository.com/artifact/org.camunda.feel/feel-engine/1.15.0) of this version, the dependency to `fastparse` has the scope `provided` instead of `compile`. But this doesn't match with the actual definition in the [pom.xml](https://github.com/camunda/feel-scala/blob/1.15.0/pom.xml#L78-L82). It seems to be related to the new version `3.3.0` of the `maven-shade-plugin` (merged [PR](https://github.com/camunda/feel-scala/pull/412)). The `maven-shade plugin` creates a `dependency-reduced-pom.xml` that overrides the original `pom.xml` when deploying the release. Build output with version `3.3.0`: ``` [INFO] --- maven-shade-plugin:3.3.0:shade (default) @ feel-engine --- [INFO] Including org.scala-lang:scala-library:jar:2.13.8 in the shaded jar. [INFO] Including com.lihaoyi:fastparse_2.13:jar:2.3.3 in the shaded jar. [INFO] Including com.lihaoyi:sourcecode_2.13:jar:0.2.3 in the shaded jar. [INFO] Including com.lihaoyi:geny_2.13:jar:0.6.10 in the shaded jar. [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar. [WARNING] Could not get sources for org.camunda.feel:feel-engine:jar:1.16.0-SNAPSHOT [INFO] Dependency-reduced POM written at: /home/philipp/IdeaProjects/feel-scala/dependency-reduced-pom.xml [INFO] Attaching shaded artifact. 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ feel-engine --- [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/dependency-reduced-pom.xml to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.pom [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/dependencies.txt to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-third-party-bom.txt [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-javadoc.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-javadoc.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-complete.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-complete.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar ``` Build output with version `3.2.4`: ``` [INFO] --- maven-shade-plugin:3.2.4:shade (default) @ feel-engine --- [INFO] Including org.scala-lang:scala-library:jar:2.13.8 in the shaded jar. [INFO] Including com.lihaoyi:fastparse_2.13:jar:2.3.3 in the shaded jar. [INFO] Including com.lihaoyi:sourcecode_2.13:jar:0.2.3 in the shaded jar. 
[INFO] Including com.lihaoyi:geny_2.13:jar:0.6.10 in the shaded jar. [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar. [WARNING] Could not get sources for org.camunda.feel:feel-engine:jar:1.16.0-SNAPSHOT [INFO] Attaching shaded artifact. [INFO] --- maven-install-plugin:2.4:install (default-install) @ feel-engine --- [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/pom.xml to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.pom [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/dependencies.txt to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-third-party-bom.txt [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-javadoc.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-javadoc.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-complete.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-complete.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar ``` Related issues: * https://issues.apache.org/jira/browse/MSHADE-321 * https://issues.apache.org/jira/browse/MSHADE-419 **To Reproduce** See 
https://github.com/camunda/zeebe/pull/9664 And https://github.com/camunda-community-hub/eze/pull/171 **Expected behavior** The version `1.15.x` includes `fastpase` as `compile` dependency. It can be used from a dependent project as the previous versions, without including transitive dependencies. **Environment** * FEEL engine version: `1.15.0` * Affects: * Camunda Automation Platform 7: [7.x] <!-- link the issue: https://jira.camunda.com/browse/CAM- --> * Zeebe broker: [0.x] <!-- link the issue: https://github.com/zeebe-io/zeebe/issues# -->
index: 1.0
text_combine:
Release 1.15.0 doesn't have a compile dependency to fastparse - **Describe the bug** The version `1.15.0` has a dependency issue. In a dependent project, the new version results in the following runtime error: ``` java.lang.NoClassDefFoundError: fastparse/Parsed ``` In the [pom.xml](https://mvnrepository.com/artifact/org.camunda.feel/feel-engine/1.15.0) of this version, the dependency to `fastparse` has the scope `provided` instead of `compile`. But this doesn't match with the actual definition in the [pom.xml](https://github.com/camunda/feel-scala/blob/1.15.0/pom.xml#L78-L82). It seems to be related to the new version `3.3.0` of the `maven-shade-plugin` (merged [PR](https://github.com/camunda/feel-scala/pull/412)). The `maven-shade plugin` creates a `dependency-reduced-pom.xml` that overrides the original `pom.xml` when deploying the release. Build output with version `3.3.0`: ``` [INFO] --- maven-shade-plugin:3.3.0:shade (default) @ feel-engine --- [INFO] Including org.scala-lang:scala-library:jar:2.13.8 in the shaded jar. [INFO] Including com.lihaoyi:fastparse_2.13:jar:2.3.3 in the shaded jar. [INFO] Including com.lihaoyi:sourcecode_2.13:jar:0.2.3 in the shaded jar. [INFO] Including com.lihaoyi:geny_2.13:jar:0.6.10 in the shaded jar. [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar. [WARNING] Could not get sources for org.camunda.feel:feel-engine:jar:1.16.0-SNAPSHOT [INFO] Dependency-reduced POM written at: /home/philipp/IdeaProjects/feel-scala/dependency-reduced-pom.xml [INFO] Attaching shaded artifact. 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ feel-engine --- [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/dependency-reduced-pom.xml to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.pom [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/dependencies.txt to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-third-party-bom.txt [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-javadoc.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-javadoc.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-complete.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-complete.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar ``` Build output with version `3.2.4`: ``` [INFO] --- maven-shade-plugin:3.2.4:shade (default) @ feel-engine --- [INFO] Including org.scala-lang:scala-library:jar:2.13.8 in the shaded jar. [INFO] Including com.lihaoyi:fastparse_2.13:jar:2.3.3 in the shaded jar. [INFO] Including com.lihaoyi:sourcecode_2.13:jar:0.2.3 in the shaded jar. 
[INFO] Including com.lihaoyi:geny_2.13:jar:0.6.10 in the shaded jar. [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar. [WARNING] Could not get sources for org.camunda.feel:feel-engine:jar:1.16.0-SNAPSHOT [INFO] Attaching shaded artifact. [INFO] --- maven-install-plugin:2.4:install (default-install) @ feel-engine --- [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/pom.xml to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT.pom [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/dependencies.txt to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-third-party-bom.txt [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-javadoc.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-javadoc.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-complete.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-complete.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded.jar [INFO] Installing /home/philipp/IdeaProjects/feel-scala/target/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar to /home/philipp/.m2/repository/org/camunda/feel/feel-engine/1.16.0-SNAPSHOT/feel-engine-1.16.0-SNAPSHOT-scala-shaded-sources.jar ``` Related issues: * https://issues.apache.org/jira/browse/MSHADE-321 * https://issues.apache.org/jira/browse/MSHADE-419 **To Reproduce** See 
https://github.com/camunda/zeebe/pull/9664 And https://github.com/camunda-community-hub/eze/pull/171 **Expected behavior** The version `1.15.x` includes `fastparse` as a `compile` dependency. It can be used from a dependent project as the previous versions, without including transitive dependencies. **Environment** * FEEL engine version: `1.15.0` * Affects: * Camunda Automation Platform 7: [7.x] <!-- link the issue: https://jira.camunda.com/browse/CAM- --> * Zeebe broker: [0.x] <!-- link the issue: https://github.com/zeebe-io/zeebe/issues# -->
process
release doesn t have a compile dependency to fastparse describe the bug the version has a dependency issue in a dependent project the new version results in the following runtime error java lang noclassdeffounderror fastparse parsed in the of this version the dependency to fastparse has the scope provided instead of compile but this doesn t match with the actual definition in the it seems to be related to the new version of the maven shade plugin merged the maven shade plugin creates a dependency reduced pom xml that overrides the original pom xml when deploying the release build output with version maven shade plugin shade default feel engine including org scala lang scala library jar in the shaded jar including com lihaoyi fastparse jar in the shaded jar including com lihaoyi sourcecode jar in the shaded jar including com lihaoyi geny jar in the shaded jar excluding org api jar from the shaded jar could not get sources for org camunda feel feel engine jar snapshot dependency reduced pom written at home philipp ideaprojects feel scala dependency reduced pom xml attaching shaded artifact maven install plugin install default install feel engine installing home philipp ideaprojects feel scala target feel engine snapshot jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot jar installing home philipp ideaprojects feel scala dependency reduced pom xml to home philipp repository org camunda feel feel engine snapshot feel engine snapshot pom installing home philipp ideaprojects feel scala target dependencies txt to home philipp repository org camunda feel feel engine snapshot feel engine snapshot third party bom txt installing home philipp ideaprojects feel scala target feel engine snapshot javadoc jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot javadoc jar installing home philipp ideaprojects feel scala target feel engine snapshot complete jar to home philipp repository org camunda feel feel 
engine snapshot feel engine snapshot complete jar installing home philipp ideaprojects feel scala target feel engine snapshot scala shaded jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot scala shaded jar installing home philipp ideaprojects feel scala target feel engine snapshot scala shaded sources jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot scala shaded sources jar build output with version maven shade plugin shade default feel engine including org scala lang scala library jar in the shaded jar including com lihaoyi fastparse jar in the shaded jar including com lihaoyi sourcecode jar in the shaded jar including com lihaoyi geny jar in the shaded jar excluding org api jar from the shaded jar could not get sources for org camunda feel feel engine jar snapshot attaching shaded artifact maven install plugin install default install feel engine installing home philipp ideaprojects feel scala target feel engine snapshot jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot jar installing home philipp ideaprojects feel scala pom xml to home philipp repository org camunda feel feel engine snapshot feel engine snapshot pom installing home philipp ideaprojects feel scala target dependencies txt to home philipp repository org camunda feel feel engine snapshot feel engine snapshot third party bom txt installing home philipp ideaprojects feel scala target feel engine snapshot javadoc jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot javadoc jar installing home philipp ideaprojects feel scala target feel engine snapshot complete jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot complete jar installing home philipp ideaprojects feel scala target feel engine snapshot scala shaded jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot scala 
shaded jar installing home philipp ideaprojects feel scala target feel engine snapshot scala shaded sources jar to home philipp repository org camunda feel feel engine snapshot feel engine snapshot scala shaded sources jar related issues to reproduce see and expected behavior the version x includes fastparse as compile dependency it can be used from a dependent project as the previous versions without including transitive dependencies environment feel engine version affects camunda automation platform zeebe broker
1
9,993
13,040,023,562
IssuesEvent
2020-07-28 17:45:49
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
[feature] Add support for vsphere vcenter/host
good first issue post-processor/vsphere
Trying to deploy an image to a vCenter on a specific host and it seems that the `ovftool --help` examples support it ``` ovftool /ovfs/my_vapp.ovf \ vi://username:pass@my_vc_server/?ip=10.20.30.40 (.ovf file to vCenter server using managed ESX host ip address) ovftool vi://username:pass@my_vc_server/my_datacenter?ds=\ [Storage1] foo/foo.vmx c:\ovfs\ (VM on ESX/vCenter server to OVF using datastore location query) ovftool /ovfs/my_vapp.ovf \ vi://username:pass@my_vc_server/my_datacenter/host/my_host (.ovf file to vCenter server using vCenter inventory path) ``` According to the parameters https://www.packer.io/docs/post-processors/vsphere.html the only option is to pass the `host` value but not the `<vcenter ip>`
1.0
[feature] Add support for vsphere vcenter/host - Trying to deploy an image to a vCenter on a specific host and it seems that the `ovftool --help examples` supports it ``` ovftool /ovfs/my_vapp.ovf \ vi://username:pass@my_vc_server/?ip=10.20.30.40 (.ovf file to vCenter server using managed ESX host ip address) ovftool vi://username:pass@my_vc_server/my_datacenter?ds=\ [Storage1] foo/foo.vmx c:\ovfs\ (VM on ESX/vCenter server to OVF using datastore location query) ovftool /ovfs/my_vapp.ovf \ vi://username:pass@my_vc_server/my_datacenter/host/my_host (.ovf file to vCenter server using vCenter inventory path) ``` According to the parameters https://www.packer.io/docs/post-processors/vsphere.html the only option is to pass the `host` value but not the `<vcenter ip>`
process
add support for vsphere vcenter host trying to deploy an image to a vcenter on a specific host and it seems that the ovftool help examples supports it ovftool ovfs my vapp ovf vi username pass my vc server ip ovf file to vcenter server using managed esx host ip address ovftool vi username pass my vc server my datacenter ds foo foo vmx c ovfs vm on esx vcenter server to ovf using datastore location query ovftool ovfs my vapp ovf vi username pass my vc server my datacenter host my host ovf file to vcenter server using vcenter inventory path according to the parameters the only option is to pass the host value but not the
1
19,717
26,073,661,719
IssuesEvent
2022-12-24 06:27:50
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Dependency loop with Ammo Loader+ mod.
mod:pypostprocessing postprocess-fail compatibility
### Mod source Factorio Mod Portal ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [X] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [x] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? The game refuses to load when I enable Ammo Loader+ (https://mods.factorio.com/mod/ammo-loader/discussion) with the following error: ![pyerror](https://user-images.githubusercontent.com/7191983/195996617-b870cebb-df15-4988-a938-a55b0e914a58.jpg) ### Steps to reproduce 1. Install the full pyanodon suite, 2. Install Ammo Loader+ (https://mods.factorio.com/mod/ammo-loader/discussion) 3. Fail to start with a dependency loop. ### Additional context _No response_ ### Log file [factorio-current.log](https://github.com/pyanodon/pybugreports/files/9792565/factorio-current.log)
2.0
Dependency loop with Ammo Loader+ mod. - ### Mod source Factorio Mod Portal ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [X] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [x] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? The game refuses to load when I enable Ammo Loader+ (https://mods.factorio.com/mod/ammo-loader/discussion) with the following error: ![pyerror](https://user-images.githubusercontent.com/7191983/195996617-b870cebb-df15-4988-a938-a55b0e914a58.jpg) ### Steps to reproduce 1. Install the full pyanodon suite, 2. Install Ammo Loader+ (https://mods.factorio.com/mod/ammo-loader/discussion) 3. Fail to start with a dependency loop. ### Additional context _No response_ ### Log file [factorio-current.log](https://github.com/pyanodon/pybugreports/files/9792565/factorio-current.log)
process
dependency loop with ammo loader mod mod source factorio mod portal which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem the game refuses to load when i enable ammo loader with the following error steps to reproduce install the full pyanodon suite install ammo loader fail to start with a dependency loop additional context no response log file
1
410,539
11,993,264,718
IssuesEvent
2020-04-08 11:41:15
hotosm/tasking-manager
https://api.github.com/repos/hotosm/tasking-manager
opened
Validate random tasks
Component: Frontend Priority: Low Status: Needs implementation Type: Enhancement
User feedback: > it seems the only way to Validate a task is to manually select it from the grid of the map. Wasn’t able to find a way to randomly validate tasks like we can do in TM3, which I liked.
1.0
Validate random tasks - User feedback: > it seems the only way to Validate a task is to manually select it from the grid of the map. Wasn’t able to find a way to randomly validate tasks like we can do in TM3, which I liked.
non_process
validate random tasks user feedback it seems the only way to validate a task is to manually select it from the grid of the map wasn’t able to find a way to randomly validate tasks like we can do in which i liked
0