| Column | Dtype | Stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
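Each record below lists its fields top to bottom in the schema order above: row index, id, type, created_at, repo, repo_url, action, title, labels, body, index, text_combine, label, text, binary_label. A minimal loading sketch follows, assuming the dump is a CSV export of this DataFrame; the file name `issues.csv` is hypothetical, and the label encoding (process -> 1, non_process -> 0) is inferred from the sample rows rather than documented anywhere in the preview:

```python
# Minimal sketch: load the dump and sanity-check the label encoding.
# Assumptions: the data lives in a CSV named "issues.csv" (hypothetical),
# and binary_label is derived from label as process -> 1, non_process -> 0,
# which is what every complete sample row below shows.
import pandas as pd

df = pd.read_csv("issues.csv")

assert (df.loc[df["label"] == "process", "binary_label"] == 1).all()
assert (df.loc[df["label"] == "non_process", "binary_label"] == 0).all()

print(df["label"].value_counts())   # class balance of the two labels
print(df["action"].unique())        # the 3 action classes in the schema
```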
249,729
18,858,231,264
IssuesEvent
2021-11-12 09:31:53
fans2619/pe
https://api.github.com/repos/fans2619/pe
opened
DG: missing important user stories
severity.Low type.DocumentationBug
The app can filter tasks by multiple criteria, but the user story only mentions module. ![image.png](https://raw.githubusercontent.com/fans2619/pe/main/files/51318f4e-b0ff-48ee-bb9e-6b5a9785b7b0.png) <!--session: 1636704000181-d38cfce8-c355-4989-aedc-b31601a51075--> <!--Version: Web v3.4.1-->
1.0
DG: missing important user stories - The app can filter tasks by multiple criteria, but the user story only mentions module. ![image.png](https://raw.githubusercontent.com/fans2619/pe/main/files/51318f4e-b0ff-48ee-bb9e-6b5a9785b7b0.png) <!--session: 1636704000181-d38cfce8-c355-4989-aedc-b31601a51075--> <!--Version: Web v3.4.1-->
non_process
dg missing important user stories the app can filter tasks by multiple criteria but the user story only mentions module
0
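Two of the columns are derived: in every complete record, `text_combine` is literally the title joined to the body with " - ", and `text` reads like a lowercased copy with HTML, markdown links, URLs, ASCII punctuation, and digit-bearing tokens removed. Below is a best-effort sketch of that cleaning, inferred from the sample records rather than taken from the dataset's actual preprocessing code:

```python
# Best-effort reconstruction of the cleaning the samples imply; every rule
# here is an inference from the rows in this preview, not the authors' script.
import re
import string

def combine(title: str, body: str) -> str:
    # In every complete record, text_combine is literally "<title> - <body>".
    return f"{title} - {body}"

def clean(raw: str) -> str:
    text = raw.lower()
    text = re.sub(r"<!--.*?-->", " ", text, flags=re.S)  # HTML comments
    text = re.sub(r"<[^>]+>", " ", text)                 # HTML tags
    text = re.sub(r"!?\[[^\]]*\]\([^)]*\)", " ", text)   # markdown links and images
    text = re.sub(r"\[[^\]]*\]", " ", text)              # bare [tags] and [x] checkboxes
    text = re.sub(r"https?://\S+", " ", text)            # bare URLs
    # Replace ASCII punctuation with spaces; accented letters and curly
    # quotes survive in the sample rows, so only ASCII symbols go.
    text = text.translate(str.maketrans(string.punctuation,
                                        " " * len(string.punctuation)))
    # Tokens that contain a digit disappear entirely in the sample rows.
    tokens = [t for t in text.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)
```

Run on the first record above, `clean(combine(title, body))` reproduces its `text` field exactly. Note that the samples keep accented letters and curly quotes (see the antaDraft and Packer records below), which is why only ASCII punctuation is stripped here.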
244,700
18,765,810,898
IssuesEvent
2021-11-05 23:54:16
dmyersturnbull/tyrannosaurus
https://api.github.com/repos/dmyersturnbull/tyrannosaurus
closed
README comparison table is broken
status: fixed ✓ kind: documentation kind: bug
See the screenshot; the README.md table has an error in it: ![Screenshot from 2021-10-24 10-38-14](https://user-images.githubusercontent.com/31610422/138606015-6c9c6f89-f82c-4984-a9d4-1fe3d783c142.png)
1.0
README comparison table is broken - See the screenshot; the README.md table has an error in it: ![Screenshot from 2021-10-24 10-38-14](https://user-images.githubusercontent.com/31610422/138606015-6c9c6f89-f82c-4984-a9d4-1fe3d783c142.png)
non_process
readme comparison table is broken see the screenshot the readme md table has an error in it
0
55,217
6,893,014,592
IssuesEvent
2017-11-23 00:16:01
Automattic/wp-calypso
https://api.github.com/repos/Automattic/wp-calypso
opened
Store Shipping Labels: Address modification notice has no icon
Design Store
The address modification notice might warrant a revisit since the notice component styling changes: <img width="847" alt="screen shot 2017-11-22 at 4 04 29 pm" src="https://user-images.githubusercontent.com/63922/33154826-72037f04-cfa8-11e7-97ed-e5aecaba53ce.png">
1.0
Store Shipping Labels: Address modification notice has no icon - The address modification notice might warrant a revisit since the notice component styling changes: <img width="847" alt="screen shot 2017-11-22 at 4 04 29 pm" src="https://user-images.githubusercontent.com/63922/33154826-72037f04-cfa8-11e7-97ed-e5aecaba53ce.png">
non_process
store shipping labels address modification notice has no icon the address modification notice might warrant a revisit since the notice component styling changes img width alt screen shot at pm src
0
12,650
15,023,214,267
IssuesEvent
2021-02-01 17:56:21
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Integrate rosetta module into maven cycle
P3 enhancement process rosetta
**Problem** The rosetta module currently uses standalone go commands to build and test. Given the rest of the modules can all be managed under maven for build and test, we should incorporate rosetta as well. **Solution** Integrate rosetta module into maven cycle. [mvn-golang](https://github.com/raydac/mvn-golang) seems like a good starting option. **Alternatives** **Additional Context**
1.0
Integrate rosetta module into maven cycle - **Problem** The rosetta module currently uses standalone go commands to build and test. Given the rest of the modules can all be managed under maven for build and test, we should incorporate rosetta as well. **Solution** Integrate rosetta module into maven cycle. [mvn-golang](https://github.com/raydac/mvn-golang) seems like a good starting option. **Alternatives** **Additional Context**
process
integrate rosetta module into maven cycle problem the rosetta module currently uses standalone go commands to build and test given the rest of the modules can all be managed under maven for build and test we should incorporate rosetta as well solution integrate rosetta module into maven cycle seems like a good starting option alternatives additional context
1
732,999
25,284,246,193
IssuesEvent
2022-11-16 17:56:43
googleapis/nodejs-compute
https://api.github.com/repos/googleapis/nodejs-compute
closed
VMDisk Snapshot
priority: p3 type: feature request api: compute
Thanks for stopping by to let us know something could be better! **PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response. **Is your feature request related to a problem? Please describe.** I need to take manual backups on GCP and am looking for ways to automate it. **Describe the solution you'd like** Create VM Disk backups using nodejs. **Describe alternatives you've considered** **Additional context** https://github.com/andrikosrikos/Google-Cloud-Functions/blob/master/Node.js/backupVMDiskSnapshot.js
1.0
VMDisk Snapshot - Thanks for stopping by to let us know something could be better! **PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response. **Is your feature request related to a problem? Please describe.** I need to take manual backups on GCP and am looking for ways to automate it. **Describe the solution you'd like** Create VM Disk backups using nodejs. **Describe alternatives you've considered** **Additional context** https://github.com/andrikosrikos/Google-Cloud-Functions/blob/master/Node.js/backupVMDiskSnapshot.js
non_process
vmdisk snapshot thanks for stopping by to let us know something could be better please read if you have a support contract with google please create an issue in the instead of filing on github this will ensure a timely response is your feature request related to a problem please describe i need to take manual backups on gcp and am looking for ways to automate it describe the solution you d like create vm disk backups using nodejs describe alternatives you ve considered additional context
0
357,072
25,176,319,810
IssuesEvent
2022-11-11 09:34:43
ichigh0st/pe
https://api.github.com/repos/ichigh0st/pe
opened
Proposed export list feature in DG has been implemented
type.DocumentationBug severity.VeryLow
No details provided by bug reporter. <!--session: 1668153167875-9825e2e9-0581-4c44-b169-8f750bbdb01e--> <!--Version: Web v3.4.4-->
1.0
Proposed export list feature in DG has been implemented - No details provided by bug reporter. <!--session: 1668153167875-9825e2e9-0581-4c44-b169-8f750bbdb01e--> <!--Version: Web v3.4.4-->
non_process
proposed export list feature in dg has been implemented no details provided by bug reporter
0
206,272
7,111,102,899
IssuesEvent
2018-01-17 13:09:15
rte-antares-rpackage/antaDraft
https://api.github.com/repos/rte-antares-rpackage/antaDraft
closed
[prod] NA instead of 0 for several production types on SPAIN
Priority 1
What I observe on the FTP ![capa_ftp](https://user-images.githubusercontent.com/22026724/35043761-175d703e-fb8e-11e7-9c0e-513a63c1a28e.png) What I observe in RStudio ![capa_r](https://user-images.githubusercontent.com/22026724/35043770-1e7e49b0-fb8e-11e7-9099-62532f069557.png) The code to reproduce the anomaly: instead of an NA we should have a 0. ```R perso_data_channel<-"D:\\Users\\jalazawa\\Documents\\2_ANTARES\\Dev\\packages\\transparency\\data_20180104\\B-PRODUCTION\\B01-Production_réalisée_par_filière\\2016" perso_data_channel_capa2016<-"D:\\Users\\jalazawa\\Documents\\2_ANTARES\\Dev\\packages\\transparency\\data_20180104\\B-PRODUCTION\\B06-Capacité_installée_par_filière\\2016" res2016<-anta_prod_channel( production_dir =perso_data_channel, capacity_dir = perso_data_channel_capa2016 ) res2016Valid<-augment_validation(res2016) res2016Valid[ res2016Valid$DateTime=="2016-01-01 02:00:00" & res2016Valid$AreaTypeCode=="BZN" & res2016Valid$country=="SPAIN" & (res2016Valid$production_type=="Fossil Coal-derived gas" | res2016Valid$production_type=="Fossil Gas") ,] %>% View("es_bzn_ca") ```
1.0
[prod] NA instead of 0 for several production types on SPAIN - What I observe on the FTP ![capa_ftp](https://user-images.githubusercontent.com/22026724/35043761-175d703e-fb8e-11e7-9c0e-513a63c1a28e.png) What I observe in RStudio ![capa_r](https://user-images.githubusercontent.com/22026724/35043770-1e7e49b0-fb8e-11e7-9099-62532f069557.png) The code to reproduce the anomaly: instead of an NA we should have a 0. ```R perso_data_channel<-"D:\\Users\\jalazawa\\Documents\\2_ANTARES\\Dev\\packages\\transparency\\data_20180104\\B-PRODUCTION\\B01-Production_réalisée_par_filière\\2016" perso_data_channel_capa2016<-"D:\\Users\\jalazawa\\Documents\\2_ANTARES\\Dev\\packages\\transparency\\data_20180104\\B-PRODUCTION\\B06-Capacité_installée_par_filière\\2016" res2016<-anta_prod_channel( production_dir =perso_data_channel, capacity_dir = perso_data_channel_capa2016 ) res2016Valid<-augment_validation(res2016) res2016Valid[ res2016Valid$DateTime=="2016-01-01 02:00:00" & res2016Valid$AreaTypeCode=="BZN" & res2016Valid$country=="SPAIN" & (res2016Valid$production_type=="Fossil Coal-derived gas" | res2016Valid$production_type=="Fossil Gas") ,] %>% View("es_bzn_ca") ```
non_process
na instead of for several production types on spain what i observe on the ftp what i observe in rstudio the code to reproduce the anomaly instead of an na we should have a r perso data channel d users jalazawa documents antares dev packages transparency data b production production réalisée par filière perso data channel d users jalazawa documents antares dev packages transparency data b production capacité installée par filière anta prod channel production dir perso data channel capacity dir perso data channel augment validation view es bzn ca
0
2,048
4,858,003,633
IssuesEvent
2016-11-12 22:20:15
CredentialTransparencyInitiative/vocabularies
https://api.github.com/repos/CredentialTransparencyInitiative/vocabularies
closed
Review ProcessProfile
Process Profile
Many properties within ProcessProfile probably belong with other entities - we need to review how ProcessProfile is used and determine which of its properties should be moved elsewhere. Likely suspects so far: - staffEvaluationMethod (move to Organization) - staffSelectionCriteria (move to Organization)
1.0
Review ProcessProfile - Many properties within ProcessProfile probably belong with other entities - we need to review how ProcessProfile is used and determine which of its properties should be moved elsewhere. Likely suspects so far: - staffEvaluationMethod (move to Organization) - staffSelectionCriteria (move to Organization)
process
review processprofile many properties within processprofile probably belong with other entities we need to review how processprofile is used and determine which of its properties should be moved elsewhere likely suspects so far staffevaluationmethod move to organization staffselectioncriteria move to organization
1
34,580
12,293,476,381
IssuesEvent
2020-05-10 19:10:54
heholek/better-onetab
https://api.github.com/repos/heholek/better-onetab
opened
CVE-2018-11693 (High) detected in opennms-opennms-source-22.0.1-1
security vulnerability
## CVE-2018-11693 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-22.0.1-1</b></p></summary> <p> <p>A Java based fault and performance management system</p> <p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p> <p>Found in HEAD commit: <a href="https://github.com/heholek/better-onetab/commit/34ef2ec6547275b25ebbcfdab80f0894bbeac266">34ef2ec6547275b25ebbcfdab80f0894bbeac266</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (64)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /better-onetab/node_modules/console-browserify/test/static/test-adapter.js - /better-onetab/node_modules/nan/nan_callbacks_pre_12_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/expand.hpp - /better-onetab/node_modules/node-sass/src/sass_types/factory.cpp - /better-onetab/node_modules/js-base64/.attic/test-moment/./yoshinoya.js - /better-onetab/node_modules/node-sass/src/sass_types/boolean.cpp - /better-onetab/node_modules/node-sass/src/sass_types/value.h - /better-onetab/node_modules/node-sass/src/libsass/src/emitter.hpp - /better-onetab/node_modules/nan/nan_converters_pre_43_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/file.cpp - /better-onetab/node_modules/nan/nan_persistent_12_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/operation.hpp - /better-onetab/node_modules/nan/nan_persistent_pre_12_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/operators.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/constants.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/error_handling.hpp - /better-onetab/node_modules/nan/nan_implementation_pre_12_inl.h - /better-onetab/node_modules/js-base64/test/./dankogai.js - /better-onetab/node_modules/node-sass/src/libsass/src/constants.cpp - /better-onetab/node_modules/node-sass/src/sass_types/list.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/functions.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/util.cpp - /better-onetab/node_modules/node-sass/src/custom_function_bridge.cpp - /better-onetab/node_modules/node-sass/src/custom_importer_bridge.h - /better-onetab/node_modules/node-sass/src/libsass/src/bind.cpp - /better-onetab/node_modules/nan/nan_json.h - /better-onetab/node_modules/node-sass/src/libsass/src/eval.hpp - /better-onetab/node_modules/nan/nan_converters.h - /better-onetab/node_modules/node-sass/src/libsass/src/backtrace.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/extend.cpp - /better-onetab/node_modules/node-sass/src/sass_types/sass_value_wrapper.h - /better-onetab/node_modules/node-sass/src/libsass/src/error_handling.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/emitter.cpp - /better-onetab/node_modules/node-sass/src/sass_types/number.cpp - /better-onetab/node_modules/node-sass/src/sass_types/color.h - /better-onetab/node_modules/nan/nan_new.h - /better-onetab/node_modules/node-sass/src/libsass/src/sass_values.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/output.cpp - 
/better-onetab/node_modules/node-sass/src/libsass/src/check_nesting.cpp - /better-onetab/node_modules/node-sass/src/sass_types/null.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/cssize.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/to_c.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/to_value.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp - /better-onetab/node_modules/nan/nan_callbacks.h - /better-onetab/node_modules/node-sass/src/libsass/src/inspect.hpp - /better-onetab/node_modules/node-sass/src/sass_types/color.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/values.cpp - /better-onetab/node_modules/node-sass/src/sass_types/list.h - /better-onetab/node_modules/node-sass/src/libsass/src/check_nesting.hpp - /better-onetab/node_modules/nan/nan_define_own_property_helper.h - /better-onetab/node_modules/js-base64/test/./es5.js - /better-onetab/node_modules/node-sass/src/sass_types/map.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/to_value.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/context.cpp - /better-onetab/node_modules/node-sass/src/sass_types/string.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/sass_context.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/prelexer.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/context.hpp - /better-onetab/node_modules/node-sass/src/sass_types/boolean.h - /better-onetab/node_modules/nan/nan_private.h </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::skip_over_scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11693>CVE-2018-11693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: LibSass - 3.5.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-11693 (High) detected in opennms-opennms-source-22.0.1-1 - ## CVE-2018-11693 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-22.0.1-1</b></p></summary> <p> <p>A Java based fault and performance management system</p> <p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p> <p>Found in HEAD commit: <a href="https://github.com/heholek/better-onetab/commit/34ef2ec6547275b25ebbcfdab80f0894bbeac266">34ef2ec6547275b25ebbcfdab80f0894bbeac266</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (64)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /better-onetab/node_modules/console-browserify/test/static/test-adapter.js - /better-onetab/node_modules/nan/nan_callbacks_pre_12_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/expand.hpp - /better-onetab/node_modules/node-sass/src/sass_types/factory.cpp - /better-onetab/node_modules/js-base64/.attic/test-moment/./yoshinoya.js - /better-onetab/node_modules/node-sass/src/sass_types/boolean.cpp - /better-onetab/node_modules/node-sass/src/sass_types/value.h - /better-onetab/node_modules/node-sass/src/libsass/src/emitter.hpp - /better-onetab/node_modules/nan/nan_converters_pre_43_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/file.cpp - /better-onetab/node_modules/nan/nan_persistent_12_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/operation.hpp - /better-onetab/node_modules/nan/nan_persistent_pre_12_inl.h - /better-onetab/node_modules/node-sass/src/libsass/src/operators.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/constants.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/error_handling.hpp - /better-onetab/node_modules/nan/nan_implementation_pre_12_inl.h - /better-onetab/node_modules/js-base64/test/./dankogai.js - /better-onetab/node_modules/node-sass/src/libsass/src/constants.cpp - /better-onetab/node_modules/node-sass/src/sass_types/list.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/functions.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/util.cpp - /better-onetab/node_modules/node-sass/src/custom_function_bridge.cpp - /better-onetab/node_modules/node-sass/src/custom_importer_bridge.h - /better-onetab/node_modules/node-sass/src/libsass/src/bind.cpp - /better-onetab/node_modules/nan/nan_json.h - /better-onetab/node_modules/node-sass/src/libsass/src/eval.hpp - /better-onetab/node_modules/nan/nan_converters.h - /better-onetab/node_modules/node-sass/src/libsass/src/backtrace.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/extend.cpp - /better-onetab/node_modules/node-sass/src/sass_types/sass_value_wrapper.h - /better-onetab/node_modules/node-sass/src/libsass/src/error_handling.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/emitter.cpp - /better-onetab/node_modules/node-sass/src/sass_types/number.cpp - /better-onetab/node_modules/node-sass/src/sass_types/color.h - /better-onetab/node_modules/nan/nan_new.h - /better-onetab/node_modules/node-sass/src/libsass/src/sass_values.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast.hpp - 
/better-onetab/node_modules/node-sass/src/libsass/src/output.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/check_nesting.cpp - /better-onetab/node_modules/node-sass/src/sass_types/null.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/cssize.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/to_c.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/to_value.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp - /better-onetab/node_modules/nan/nan_callbacks.h - /better-onetab/node_modules/node-sass/src/libsass/src/inspect.hpp - /better-onetab/node_modules/node-sass/src/sass_types/color.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/values.cpp - /better-onetab/node_modules/node-sass/src/sass_types/list.h - /better-onetab/node_modules/node-sass/src/libsass/src/check_nesting.hpp - /better-onetab/node_modules/nan/nan_define_own_property_helper.h - /better-onetab/node_modules/js-base64/test/./es5.js - /better-onetab/node_modules/node-sass/src/sass_types/map.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/to_value.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/context.cpp - /better-onetab/node_modules/node-sass/src/sass_types/string.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/sass_context.cpp - /better-onetab/node_modules/node-sass/src/libsass/src/prelexer.hpp - /better-onetab/node_modules/node-sass/src/libsass/src/context.hpp - /better-onetab/node_modules/node-sass/src/sass_types/boolean.h - /better-onetab/node_modules/nan/nan_private.h </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::skip_over_scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11693>CVE-2018-11693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: LibSass - 3.5.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in opennms opennms source cve high severity vulnerability vulnerable library opennmsopennms source a java based fault and performance management system library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries better onetab node modules console browserify test static test adapter js better onetab node modules nan nan callbacks pre inl h better onetab node modules node sass src libsass src expand hpp better onetab node modules node sass src sass types factory cpp better onetab node modules js attic test moment yoshinoya js better onetab node modules node sass src sass types boolean cpp better onetab node modules node sass src sass types value h better onetab node modules node sass src libsass src emitter hpp better onetab node modules nan nan converters pre inl h better onetab node modules node sass src libsass src file cpp better onetab node modules nan nan persistent inl h better onetab node modules node sass src libsass src operation hpp better onetab node modules nan nan persistent pre inl h better onetab node modules node sass src libsass src operators hpp better onetab node modules node sass src libsass src constants hpp better onetab node modules node sass src libsass src error handling hpp better onetab node modules nan nan implementation pre inl h better onetab node modules js test dankogai js better onetab node modules node sass src libsass src constants cpp better onetab node modules node sass src sass types list cpp better onetab node modules node sass src libsass src functions hpp better onetab node modules node sass src libsass src util cpp better onetab node modules node sass src custom function bridge cpp better onetab node modules node sass src custom importer bridge h better onetab node modules node sass src libsass src bind cpp better onetab node modules nan nan json h better onetab node modules node sass src libsass src eval hpp better onetab node modules nan nan converters h better onetab node modules node sass src libsass src backtrace cpp better onetab node modules node sass src libsass src extend cpp better onetab node modules node sass src sass types sass value wrapper h better onetab node modules node sass src libsass src error handling cpp better onetab node modules node sass src libsass src emitter cpp better onetab node modules node sass src sass types number cpp better onetab node modules node sass src sass types color h better onetab node modules nan nan new h better onetab node modules node sass src libsass src sass values cpp better onetab node modules node sass src libsass src ast hpp better onetab node modules node sass src libsass src output cpp better onetab node modules node sass src libsass src check nesting cpp better onetab node modules node sass src sass types null cpp better onetab node modules node sass src libsass src ast def macros hpp better onetab node modules node sass src libsass src cssize hpp better onetab node modules node sass src libsass src ast cpp better onetab node modules node sass src libsass src to c cpp better onetab node modules node sass src libsass src to value hpp better onetab node modules node sass src libsass src ast fwd decl hpp better onetab node modules nan nan callbacks h better onetab node modules node sass src libsass src inspect hpp better onetab node modules node sass src sass types color cpp better onetab node modules node sass 
src libsass src values cpp better onetab node modules node sass src sass types list h better onetab node modules node sass src libsass src check nesting hpp better onetab node modules nan nan define own property helper h better onetab node modules js test js better onetab node modules node sass src sass types map cpp better onetab node modules node sass src libsass src to value cpp better onetab node modules node sass src libsass src context cpp better onetab node modules node sass src sass types string cpp better onetab node modules node sass src libsass src sass context cpp better onetab node modules node sass src libsass src prelexer hpp better onetab node modules node sass src libsass src context hpp better onetab node modules node sass src sass types boolean h better onetab node modules nan nan private h vulnerability details an issue was discovered in libsass through an out of bounds read of a memory region was found in the function sass prelexer skip over scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
0
5,413
8,248,134,309
IssuesEvent
2018-09-11 17:31:35
emacs-ess/ESS
https://api.github.com/repos/emacs-ess/ESS
closed
Error with tracebug
bug:severe process:debug
Tracebug recently stopped working for me. I don't have time to investigate but I see these errors: ``` error in process filter: ess--dbg-find-buffer: No catch for tag: --cl-block-nil--, nil error in process filter: No catch for tag: --cl-block-nil--, nil ``` Can anyone confirm?
1.0
Error with tracebug - Tracebug recently stopped working for me. I don't have time to investigate but I see these errors: ``` error in process filter: ess--dbg-find-buffer: No catch for tag: --cl-block-nil--, nil error in process filter: No catch for tag: --cl-block-nil--, nil ``` Can anyone confirm?
process
error with tracebug tracebug recently stopped working for me i don t have time to investigate but i see these errors error in process filter ess dbg find buffer no catch for tag cl block nil nil error in process filter no catch for tag cl block nil nil can anyone confirm
1
247,730
20,987,903,712
IssuesEvent
2022-03-29 06:22:02
LimeChain/hashport-validator
https://api.github.com/repos/LimeChain/hashport-validator
closed
Interface and mock for gorm.DB
unit tests
Currently, we can't mock gorm.DB as it is a concrete struct. In order to do it, we need to do the following: - Create an interface for gorm.DB - Create a mock which implements the new interface
1.0
Interface and mock for gorm.DB - Currently, we can't mock gorm.DB as it is a concrete struct. In order to do it, we need to do the following: - Create an interface for gorm.DB - Create a mock which implements the new interface
non_process
interface and mock for gorm db currently we can t mock gorm db as it is a concrete struct in order to do it we need to do the following create an interface for gorm db create a mock which implements the new interface
0
175,103
21,300,770,930
IssuesEvent
2022-04-15 02:35:06
NakRex/virtual-library
https://api.github.com/repos/NakRex/virtual-library
opened
CVE-2021-43138 (High) detected in async-0.9.2.tgz, async-2.6.3.tgz
security vulnerability
## CVE-2021-43138 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-0.9.2.tgz</b>, <b>async-2.6.3.tgz</b></p></summary> <p> <details><summary><b>async-0.9.2.tgz</b></p></summary> <p>Higher-order functions and common patterns for asynchronous code</p> <p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.9.2.tgz">https://registry.npmjs.org/async/-/async-0.9.2.tgz</a></p> <p>Path to dependency file: /frontend/package.json</p> <p>Path to vulnerable library: /frontend/node_modules/jake/node_modules/async/package.json,/Backend/node_modules/async/package.json</p> <p> Dependency Hierarchy: - ejs-3.1.6.tgz (Root Library) - jake-10.8.2.tgz - :x: **async-0.9.2.tgz** (Vulnerable Library) </details> <details><summary><b>async-2.6.3.tgz</b></p></summary> <p>Higher-order functions and common patterns for asynchronous code</p> <p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p> <p>Path to dependency file: /frontend/package.json</p> <p>Path to vulnerable library: /frontend/node_modules/async/package.json</p> <p> Dependency Hierarchy: - react-scripts-5.0.0.tgz (Root Library) - webpack-dev-server-4.7.1.tgz - portfinder-1.0.28.tgz - :x: **async-2.6.3.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method. <p>Publish Date: 2022-04-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p> <p>Release Date: 2022-04-06</p> <p>Fix Resolution: async - v3.2.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-43138 (High) detected in async-0.9.2.tgz, async-2.6.3.tgz - ## CVE-2021-43138 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-0.9.2.tgz</b>, <b>async-2.6.3.tgz</b></p></summary> <p> <details><summary><b>async-0.9.2.tgz</b></p></summary> <p>Higher-order functions and common patterns for asynchronous code</p> <p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.9.2.tgz">https://registry.npmjs.org/async/-/async-0.9.2.tgz</a></p> <p>Path to dependency file: /frontend/package.json</p> <p>Path to vulnerable library: /frontend/node_modules/jake/node_modules/async/package.json,/Backend/node_modules/async/package.json</p> <p> Dependency Hierarchy: - ejs-3.1.6.tgz (Root Library) - jake-10.8.2.tgz - :x: **async-0.9.2.tgz** (Vulnerable Library) </details> <details><summary><b>async-2.6.3.tgz</b></p></summary> <p>Higher-order functions and common patterns for asynchronous code</p> <p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p> <p>Path to dependency file: /frontend/package.json</p> <p>Path to vulnerable library: /frontend/node_modules/async/package.json</p> <p> Dependency Hierarchy: - react-scripts-5.0.0.tgz (Root Library) - webpack-dev-server-4.7.1.tgz - portfinder-1.0.28.tgz - :x: **async-2.6.3.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method. <p>Publish Date: 2022-04-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p> <p>Release Date: 2022-04-06</p> <p>Fix Resolution: async - v3.2.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in async tgz async tgz cve high severity vulnerability vulnerable libraries async tgz async tgz async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules jake node modules async package json backend node modules async package json dependency hierarchy ejs tgz root library jake tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules async package json dependency hierarchy react scripts tgz root library webpack dev server tgz portfinder tgz x async tgz vulnerable library found in base branch main vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async step up your open source security game with whitesource
0
308,163
9,430,987,546
IssuesEvent
2019-04-12 10:22:40
CS2103-AY1819S2-W14-2/main
https://api.github.com/repos/CS2103-AY1819S2-W14-2/main
closed
[QOL] Unused/outdated assets and temp folders/methods to be removed
priority.Medium severity.Medium status.Ongoing
- [x] Remove unused folders from the application directory. - [ ] Remove outdated methods
1.0
[QOL] Unused/outdated assets and temp folders/methods to be removed - - [x] Remove unused folders from the application directory. - [ ] Remove outdated methods
non_process
unused outdated assets and temp folders methods to be removed remove unused folders from the application directory remove outdated methods
0
6,480
9,552,538,721
IssuesEvent
2019-05-02 16:53:45
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
post-processor vsphere-template add parameter for wait for ‘VMware vSphere to start’ bug/feature-request
bug post-processor/vsphere-template waiting-reply
Packer version: v1.2.1 Host platform: Packer running on Windows 10, ESXi/vCenter 6.5 and ESXi 6.0/vCenter 6.5. This is a bug report that could be worked around with a simple new feature. The problem appeared in Terraform, trying to clone a VM from a template that had been successfully created by Packer using the builders "vmware-iso" on a [Remote vSphere Hypervisor ](https://www.packer.io/docs/builders/vmware-iso.html#building-on-a-remote-vsphere-hypervisor) and the post-processors "vsphere-template". See [sample Packer template ](https://gist.github.com/GMZwinge/8743b1be26a931d846c1b43aabb880fd). Terraform was giving this error: * vsphere_virtual_machine.vm: Resource 'data.vsphere_virtual_machine.template' not found for variable 'data.vsphere_virtual_machine.template.id' This problem always happened on a test ESXi/vCenter 6.5 running on slower hardware, while it did not happen all the time on faster hardware running ESXi 6.0/vCenter 6.5. After some investigation, the root cause was identified as two entries, uuid.bios and vc.uuid, that were missing from the .vmtx file. Backtracking to the Packer script revealed that sometime around the end of the builder “vmware-iso” and the start of the post processor “vsphere-template”, those entries disappear from the .vmx file, and reappear with different values some seconds later (about 10 seconds on the faster hardware and about 15 seconds on the slower hardware). Looking at the vsphere-template post-processor code, I noticed a 10-second delay when starting the post processor “vsphere-template”. See https://github.com/hashicorp/packer/blob/ace5fb7622ed46b63831d43ecd6d05b58544cf25/post-processor/vsphere-template/post-processor.go#L105 The comments explaining the reason for this delay have slightly changed over time, but have always been vague about the reason for the delay: Before Jul 10 2017 , https://github.com/hashicorp/packer/commit/3cc9f204acc289e9adbf70c3be087b5c2dd25b8a#diff-2d1af112f5b55ed31686536a6d1b4ac1 //We give a vSphere-ESXI 10s to sync Jul 18 2017 , https://github.com/hashicorp/packer/commit/fa10616f57f1801713a70793cb2596967b6bbb32#diff-2d1af112f5b55ed31686536a6d1b4ac1: // In some occasions when the VM is mark as template it loses its configuration if it's done immediately // after the ESXi creates it. If vSphere is given a few seconds this behavior doesn't reappear. Aug 14 2017 , https://github.com/hashicorp/packer/commit/81272d1427b5ce0c30fb79d55a1f7618921a8ad4#diff-2d1af112f5b55ed31686536a6d1b4ac1: // In some occasions the VM state is powered on and if we immediately try to mark as template // (after the ESXi creates it) it will fail. If vSphere is given a few seconds this behavior doesn't reappear. I still don’t know what triggers the removal and addition of those uuids, but it seems clear that the reason for the delay is not fully understood. Turning the delay into a parameter could give a workaround for this issue and possibly future issues due to ESXi/vCenter doing things outside of Packer's knowledge. Thanks, Georges
1.0
post-processor vsphere-template add parameter for wait for ‘VMware vSphere to start’ bug/feature-request - Packer version: v1.2.1 Host platform: Packer running on Windows 10, ESXi/vCenter 6.5 and ESXi 6.0/vCenter 6.5. This is a bug report that could be worked around with a simple new feature. The problem appeared in Terraform, trying to clone a VM from a template that had been successfully created by Packer using the builders "vmware-iso" on a [Remote vSphere Hypervisor ](https://www.packer.io/docs/builders/vmware-iso.html#building-on-a-remote-vsphere-hypervisor) and the post-processors "vsphere-template". See [sample Packer template ](https://gist.github.com/GMZwinge/8743b1be26a931d846c1b43aabb880fd). Terraform was giving this error: * vsphere_virtual_machine.vm: Resource 'data.vsphere_virtual_machine.template' not found for variable 'data.vsphere_virtual_machine.template.id' This problem always happened on a test ESXi/vCenter 6.5 running on slower hardware, while it did not happen all the time on faster hardware running ESXi 6.0/vCenter 6.5. After some investigation, the root cause was identified as two entries, uuid.bios and vc.uuid, that were missing from the .vmtx file. Backtracking to the Packer script revealed that sometime around the end of the builder “vmware-iso” and the start of the post processor “vsphere-template”, those entries disappear from the .vmx file, and reappear with different values some seconds later (about 10 seconds on the faster hardware and about 15 seconds on the slower hardware). Looking at the vsphere-template post-processor code, I noticed a 10-second delay when starting the post processor “vsphere-template”. See https://github.com/hashicorp/packer/blob/ace5fb7622ed46b63831d43ecd6d05b58544cf25/post-processor/vsphere-template/post-processor.go#L105 The comments explaining the reason for this delay have slightly changed over time, but have always been vague about the reason for the delay: Before Jul 10 2017 , https://github.com/hashicorp/packer/commit/3cc9f204acc289e9adbf70c3be087b5c2dd25b8a#diff-2d1af112f5b55ed31686536a6d1b4ac1 //We give a vSphere-ESXI 10s to sync Jul 18 2017 , https://github.com/hashicorp/packer/commit/fa10616f57f1801713a70793cb2596967b6bbb32#diff-2d1af112f5b55ed31686536a6d1b4ac1: // In some occasions when the VM is mark as template it loses its configuration if it's done immediately // after the ESXi creates it. If vSphere is given a few seconds this behavior doesn't reappear. Aug 14 2017 , https://github.com/hashicorp/packer/commit/81272d1427b5ce0c30fb79d55a1f7618921a8ad4#diff-2d1af112f5b55ed31686536a6d1b4ac1: // In some occasions the VM state is powered on and if we immediately try to mark as template // (after the ESXi creates it) it will fail. If vSphere is given a few seconds this behavior doesn't reappear. I still don’t know what triggers the removal and addition of those uuids, but it seems clear that the reason for the delay is not fully understood. Turning the delay into a parameter could give a workaround for this issue and possibly future issues due to ESXi/vCenter doing things outside of Packer's knowledge. Thanks, Georges
process
post processor vsphere template add parameter for wait for ‘vmware vsphere to start’ bug feature request packer version host platform packer running on windows esxi vcenter and esxi vcenter this is a bug report that could be worked around with a simple new feature the problem appeared in terraform trying to clone a vm from a template that had been successfully created by packer using the builders vmware iso on a and the post processors vsphere template see terraform was giving this error vsphere virtual machine vm resource data vsphere virtual machine template not found for variable data vsphere virtual machine template id this problem always happened on a test esxi vcenter running on slower hardware while it did not happen all the time on faster hardware running esxi vcenter after some investigation the root cause was identified as two entries uuid bios and vc uuid that were missing from the vmtx file backtracking to the packer script revealed that sometime around the end of the builder “vmware iso” and the start of the post processor “vsphere template” those entries disappear from the vmx file and reappear with different values some seconds later about seconds on the faster hardware and about seconds on the slower hardware looking at the vsphere template post processor code i noticed a second delay when starting the post processor “vsphere template” see the comments explaining the reason for this delay have slightly changed over time but have always been vague about the reason for the delay before jul we give a vsphere esxi to sync jul in some occasions when the vm is mark as template it loses its configuration if it s done immediately after the esxi creates it if vsphere is given a few seconds this behavior doesn t reappear aug in some occasions the vm state is powered on and if we immediately try to mark as template after the esxi creates it it will fail if vsphere is given a few seconds this behavior doesn t reappear i still don’t know what triggers the removal and addition of those uuids but it seems clear that the reason for the delay is not fully understood turning the delay into a parameter could give a workaround for this issue and possibly future issues due to esxi vcenter doing things outside of packer s knowledge thanks georges
1
176,590
21,411,784,899
IssuesEvent
2022-04-22 06:58:28
AlexRogalskiy/java-patterns
https://api.github.com/repos/AlexRogalskiy/java-patterns
opened
CVE-2018-3721 (Medium) detected in multiple libraries
security vulnerability
## CVE-2018-3721 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.17.4.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary> <p> <details><summary><b>lodash-4.17.4.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/gitbook-cli/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-cli-2.3.2.tgz (Root Library) - :x: **lodash-4.17.4.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.10.1.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jsdoctypeparser/node_modules/lodash/package.json,/node_modules/jscs/node_modules/lodash/package.json,/node_modules/xmlbuilder/node_modules/lodash/package.json,/node_modules/roadmarks/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - jscs-3.0.7.tgz (Root Library) - :x: **lodash-3.10.1.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-2.4.2.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, & extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/dockerfile_lint/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - dockerfile_lint-0.3.4.tgz (Root Library) - :x: **lodash-2.4.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6">0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects. <p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution: 4.17.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-3721 (Medium) detected in multiple libraries - ## CVE-2018-3721 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.17.4.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary> <p> <details><summary><b>lodash-4.17.4.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/gitbook-cli/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-cli-2.3.2.tgz (Root Library) - :x: **lodash-4.17.4.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.10.1.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jsdoctypeparser/node_modules/lodash/package.json,/node_modules/jscs/node_modules/lodash/package.json,/node_modules/xmlbuilder/node_modules/lodash/package.json,/node_modules/roadmarks/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - jscs-3.0.7.tgz (Root Library) - :x: **lodash-3.10.1.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-2.4.2.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, & extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/dockerfile_lint/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - dockerfile_lint-0.3.4.tgz (Root Library) - :x: **lodash-2.4.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6">0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects. <p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution: 4.17.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules gitbook cli node modules lodash package json dependency hierarchy gitbook cli tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules jsdoctypeparser node modules lodash package json node modules jscs node modules lodash package json node modules xmlbuilder node modules lodash package json node modules roadmarks node modules lodash package json dependency hierarchy jscs tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file package json path to vulnerable library node modules dockerfile lint node modules lodash package json dependency hierarchy dockerfile lint tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property that will exist on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
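The record above describes CVE-2018-3721, where lodash's defaultsDeep/merge/mergeWith let attacker-supplied keys mutate supposedly immutable shared objects. Since lodash is JavaScript, the sketch below is only a Python analogue of the same "modification of assumed-immutable data" pattern; `naive_deep_merge` is a hypothetical function, not lodash's API.

```python
# Hypothetical naive deep merge, analogous to lodash's vulnerable
# defaultsDeep/merge: attacker-controlled keys flow into shared state.

def naive_deep_merge(dst, src):
    """Recursively copy src into dst without validating keys."""
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dst.get(key), dict):
            naive_deep_merge(dst[key], value)
        else:
            dst[key] = value
    return dst

# Shared defaults that callers assume are immutable.
DEFAULTS = {"render": {"escape_html": True}}

# Malicious payload (in JavaScript this would target __proto__).
payload = {"render": {"escape_html": False}}

config = naive_deep_merge(DEFAULTS, payload)
print(DEFAULTS)  # the shared defaults were silently mutated
```

In the JavaScript case the damage is worse because polluting `__proto__` changes every object in the process, which is why the suggested fix is simply upgrading to lodash 4.17.5.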
82,805
23,880,319,525
IssuesEvent
2022-09-08 00:19:59
cloudfoundry/korifi
https://api.github.com/repos/cloudfoundry/korifi
opened
[Feature]: When repushing an existing App, specifying any buildpack other than default errors
Build and Run Workloads
### Blockers/Dependencies #1054 ### Background **As a** cf CLI user **I want** to receive an error message when I select a buildpack other than default **So that** I understand that I cannot select a specific buildpack (right now we silently ignore the buildpack request) ### Acceptance Criteria **GIVEN** a korifi cluster **WHEN** I run `cf push` **AND** I wait for the push to succeed **AND** I run `cf push -b paketo-buildpacks/go` for the same app **THEN I** should see that the build fails with an error explaining that only the default buildpack is currently supported --- **WHEN I** run `cf push`, `cf push -b default`, or `cf push -b null` for an existing App **THEN I** should see that the build succeeds **AND** that `cf app <AppName>` shows no buildpacks (i.e. default) ### Dev Notes This is a follow-up to #1171 to handle the update case. The original story only handled this problem when the App is first created.
1.0
[Feature]: When repushing an existing App, specifying any buildpack other than default errors - ### Blockers/Dependencies #1054 ### Background **As a** cf CLI user **I want** to receive an error message when I select a buildpack other than default **So that** I understand that I cannot select a specific buildpack (right now we silently ignore the buildpack request) ### Acceptance Criteria **GIVEN** a korifi cluster **WHEN** I run `cf push` **AND** I wait for the push to succeed **AND** I run `cf push -b paketo-buildpacks/go` for the same app **THEN I** should see that the build fails with an error explaining that only the default buildpack is currently supported --- **WHEN I** run `cf push`, `cf push -b default`, or `cf push -b null` for an existing App **THEN I** should see that the build succeeds **AND** that `cf app <AppName>` shows no buildpacks (i.e. default) ### Dev Notes This is a follow-up to #1171 to handle the update case. The original story only handled this problem when the App is first created.
non_process
when repushing an existing app specifying any buildpack other than default errors blockers dependencies background as a cf cli user i want to receive an error message when i select a buildpack other than default so that i understand that i cannot select a specific buildpack right now we silently ignore the buildpack request acceptance criteria given a korifi cluster when i run cf push and i wait for the push to succeed and i run cf push b paketo buildpacks go for the same app then i should see that the build fails with an error explaining that only the default buildpack is currently supported when i run cf push cf push b default or cf push b null for an existing app then i should see that the build succeeds and that cf app shows no buildpacks i e default dev notes this is a follow up to to handle the update case the original story only handled this problem when the app is first created
0
162,482
25,545,016,158
IssuesEvent
2022-11-29 18:02:41
flutter/flutter
https://api.github.com/repos/flutter/flutter
closed
Update Snackbar to support Material 3
framework f: material design
As part of #91605, we need to migrate the `Snackbar` widget to [Material 3](https://m3.material.io/components/snackbar/overview): <img width="357" alt="Screen Shot 2022-10-26 at 11 52 11 AM" src="https://user-images.githubusercontent.com/19588/198112489-048946c3-66a6-46e5-bef4-537782b4b2b6.png">
1.0
Update Snackbar to support Material 3 - As part of #91605, we need to migrate the `Snackbar` widget to [Material 3](https://m3.material.io/components/snackbar/overview): <img width="357" alt="Screen Shot 2022-10-26 at 11 52 11 AM" src="https://user-images.githubusercontent.com/19588/198112489-048946c3-66a6-46e5-bef4-537782b4b2b6.png">
non_process
update snackbar to support material as part of we need to migrate the snackbar widget to img width alt screen shot at am src
0
218,941
7,332,821,003
IssuesEvent
2018-03-05 17:24:22
NCEAS/metacat
https://api.github.com/repos/NCEAS/metacat
closed
Problem activating a newly generated LDAP account
Component: Bugzilla-Id Priority: Normal Status: Closed Tracker: Bug
--- Author Name: **Jing Tao** (Jing Tao) Original Redmine Issue: 6473, https://projects.ecoinformatics.org/ecoinfo/issues/6473 Original Date: 2014-03-19 Original Assignee: Jing Tao --- Zach Nelson reported there was an issue activating his account: hash string Kv9aZuLOdu$tO3 doesn't match our record. But the account was activated: river:conf tao$ ldapsearch -x -h ldap.ecoinformatics.org -b o=unaffiliated,dc=ecoinformatics,dc=org uid=zach-nelson 1. extended LDIF # 1. LDAPv3 1. base <o=unaffiliated,dc=ecoinformatics,dc=org> with scope subtree 1. filter: uid=zach-nelson 1. requesting: ALL # 1. zach-nelson, unaffiliated, ecoinformatics.org dn: uid=zach-nelson,o=unaffiliated,dc=ecoinformatics,dc=org cn: Zachary nelson sn: nelson givenName: Zachary mail: z.j.nelson2010@gmail.com employeeNumber: Kv9aZuLOdu$tO3^X uidNumber: 30056 gidNumber: 30056 loginShell: /sbin/nologin homeDirectory: /dev/null objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount o: unaffiliated gecos: Zachary nelson,,, uid: zach-nelson 1. search result search: 2 result: 0 Success 1. numResponses: 2 1. numEntries: 1 We need to figure out why the hash string didn't match and why the account was activated even though the hash string didn't match.
1.0
Problem activating a newly generated LDAP account - --- Author Name: **Jing Tao** (Jing Tao) Original Redmine Issue: 6473, https://projects.ecoinformatics.org/ecoinfo/issues/6473 Original Date: 2014-03-19 Original Assignee: Jing Tao --- Zach Nelson reported there was an issue activating his account: hash string Kv9aZuLOdu$tO3 doesn't match our record. But the account was activated: river:conf tao$ ldapsearch -x -h ldap.ecoinformatics.org -b o=unaffiliated,dc=ecoinformatics,dc=org uid=zach-nelson 1. extended LDIF # 1. LDAPv3 1. base <o=unaffiliated,dc=ecoinformatics,dc=org> with scope subtree 1. filter: uid=zach-nelson 1. requesting: ALL # 1. zach-nelson, unaffiliated, ecoinformatics.org dn: uid=zach-nelson,o=unaffiliated,dc=ecoinformatics,dc=org cn: Zachary nelson sn: nelson givenName: Zachary mail: z.j.nelson2010@gmail.com employeeNumber: Kv9aZuLOdu$tO3^X uidNumber: 30056 gidNumber: 30056 loginShell: /sbin/nologin homeDirectory: /dev/null objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount o: unaffiliated gecos: Zachary nelson,,, uid: zach-nelson 1. search result search: 2 result: 0 Success 1. numResponses: 2 1. numEntries: 1 We need to figure out why the hash string didn't match and why the account was activated even though the hash string didn't match.
non_process
problem activating a newly generated ldap account author name jing tao jing tao original redmine issue original date original assignee jing tao zach nelson reported there was an issue activating his account hash string doesn t match our record but the account was activated river conf tao ldapsearch x h ldap ecoinformatics org b o unaffiliated dc ecoinformatics dc org uid zach nelson extended ldif base with scope subtree filter uid zach nelson requesting all zach nelson unaffiliated ecoinformatics org dn uid zach nelson o unaffiliated dc ecoinformatics dc org cn zachary nelson sn nelson givenname zachary mail z j gmail com employeenumber x uidnumber gidnumber loginshell sbin nologin homedirectory dev null objectclass top objectclass person objectclass organizationalperson objectclass inetorgperson objectclass posixaccount objectclass shadowaccount o unaffiliated gecos zachary nelson uid zach nelson search result search result success numresponses numentries we need to figure out why the hash string didn t match and why the account was activated even though the hash string didn t match
0
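Worth noting from the LDIF in the record above: the stored employeeNumber is `Kv9aZuLOdu$tO3^X` while the submitted string is `Kv9aZuLOdu$tO3`, so the stored value appears to carry a trailing `^X` (Ctrl-X, 0x18) control character, which would explain the mismatch. A minimal sketch of a defensive comparison; the function and variable names are hypothetical, not Metacat's actual code.

```python
# Hypothetical defensive check: strip non-printable characters from the
# stored hash before comparing, instead of failing on a stray Ctrl-X.

def hashes_match(submitted, stored):
    cleaned = "".join(ch for ch in stored if ch.isprintable()).strip()
    return submitted == cleaned

stored_employee_number = "Kv9aZuLOdu$tO3\x18"  # trailing Ctrl-X, as in the LDIF
submitted = "Kv9aZuLOdu$tO3"

print(submitted == stored_employee_number)            # False: the reported error
print(hashes_match(submitted, stored_employee_number))  # True once cleaned
```

The other half of the bug, why activation succeeded despite the mismatch, would still need to be traced in the activation code path itself.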
3,197
6,261,735,509
IssuesEvent
2017-07-15 02:46:35
P0cL4bs/WiFi-Pumpkin
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
closed
IOError: [Errno socket error] [Errno -2] Name or service not known
in process priority solved
LOG: After the IO error I don't get any more traffic. When I visit an HTTPS page on the victim's device, it redirects to HTTP and then: Traceback (most recent call last): File "/home/hargerao/Documents/Tools/WiFi-Pumpkin-master/core/servers/proxy/tcp/intercept.py", line 43, in run self.main() File "/home/hargerao/Documents/Tools/WiFi-Pumpkin-master/core/servers/proxy/tcp/intercept.py", line 94, in main self.plugins[Active].filterPackets(pkt) File "/home/hargerao/Documents/Tools/WiFi-Pumpkin-master/plugins/analyzers/image.py", line 40, in filterPackets urlretrieve('http://{}{}'.format(http_layer.fields['Host'], http_layer.fields['Path']),file_name) File "/usr/lib/python2.7/urllib.py", line 98, in urlretrieve return opener.retrieve(url, filename, reporthook, data) File "/usr/lib/python2.7/urllib.py", line 245, in retrieve fp = self.open(url, data) File "/usr/lib/python2.7/urllib.py", line 213, in open return getattr(self, name)(url) File "/usr/lib/python2.7/urllib.py", line 350, in open_http h.endheaders(data) File "/usr/lib/python2.7/httplib.py", line 1038, in endheaders self._send_output(message_body) File "/usr/lib/python2.7/httplib.py", line 882, in _send_output self.send(msg) File "/usr/lib/python2.7/httplib.py", line 844, in send self.connect() File "/usr/lib/python2.7/httplib.py", line 821, in connect self.timeout, self.source_address) File "/usr/lib/python2.7/socket.py", line 557, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): IOError: [Errno socket error] [Errno -2] Name or service not known I then get this error. WiFi-Pumpkin is running fine with Sergioproxy or Pumpkinproxy. This only happens with SSLSTRIP+/Dns2Proxy. Thank you
1.0
IOError: [Errno socket error] [Errno -2] Name or service not known - LOG: After the IO error I don't get any more traffic. When I visit an HTTPS page on the victim's device, it redirects to HTTP and then: Traceback (most recent call last): File "/home/hargerao/Documents/Tools/WiFi-Pumpkin-master/core/servers/proxy/tcp/intercept.py", line 43, in run self.main() File "/home/hargerao/Documents/Tools/WiFi-Pumpkin-master/core/servers/proxy/tcp/intercept.py", line 94, in main self.plugins[Active].filterPackets(pkt) File "/home/hargerao/Documents/Tools/WiFi-Pumpkin-master/plugins/analyzers/image.py", line 40, in filterPackets urlretrieve('http://{}{}'.format(http_layer.fields['Host'], http_layer.fields['Path']),file_name) File "/usr/lib/python2.7/urllib.py", line 98, in urlretrieve return opener.retrieve(url, filename, reporthook, data) File "/usr/lib/python2.7/urllib.py", line 245, in retrieve fp = self.open(url, data) File "/usr/lib/python2.7/urllib.py", line 213, in open return getattr(self, name)(url) File "/usr/lib/python2.7/urllib.py", line 350, in open_http h.endheaders(data) File "/usr/lib/python2.7/httplib.py", line 1038, in endheaders self._send_output(message_body) File "/usr/lib/python2.7/httplib.py", line 882, in _send_output self.send(msg) File "/usr/lib/python2.7/httplib.py", line 844, in send self.connect() File "/usr/lib/python2.7/httplib.py", line 821, in connect self.timeout, self.source_address) File "/usr/lib/python2.7/socket.py", line 557, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): IOError: [Errno socket error] [Errno -2] Name or service not known I then get this error. WiFi-Pumpkin is running fine with Sergioproxy or Pumpkinproxy. This only happens with SSLSTRIP+/Dns2Proxy. Thank you
process
ioerror name or service not known log after the io error i don t get any more traffic when i visit an https page on the victim s device it redirects to http and then traceback most recent call last file home hargerao documents tools wifi pumpkin master core servers proxy tcp intercept py line in run self main file home hargerao documents tools wifi pumpkin master core servers proxy tcp intercept py line in main self plugins filterpackets pkt file home hargerao documents tools wifi pumpkin master plugins analyzers image py line in filterpackets urlretrieve http layer fields file name file usr lib urllib py line in urlretrieve return opener retrieve url filename reporthook data file usr lib urllib py line in retrieve fp self open url data file usr lib urllib py line in open return getattr self name url file usr lib urllib py line in open http h endheaders data file usr lib httplib py line in endheaders self send output message body file usr lib httplib py line in send output self send msg file usr lib httplib py line in send self connect file usr lib httplib py line in connect self timeout self source address file usr lib socket py line in create connection for res in getaddrinfo host port sock stream ioerror name or service not known i then get this error wifi pumpkin is running fine with sergioproxy or pumpkinproxy this only happens with sslstrip thank you
1
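The traceback in the record above shows the image analyzer plugin crashing on a DNS failure inside `urlretrieve`, which kills the intercept loop and explains why traffic stops afterwards. A minimal sketch of one possible hardening: catch the socket error so a single unresolvable Host header can't take the proxy down. The original code is Python 2 (`urllib.urlretrieve`); this sketch uses the Python 3 equivalent, and the surrounding names are illustrative, not WiFi-Pumpkin's actual plugin API.

```python
# Sketch: wrap the download so DNS/socket failures are logged and skipped
# instead of crashing the analyzer thread (names here are illustrative).
import socket
from urllib.error import URLError
from urllib.request import urlretrieve  # urllib.urlretrieve on Python 2

def fetch_packet_image(host, path, file_name):
    url = "http://{}{}".format(host, path)
    try:
        urlretrieve(url, file_name)
    except (URLError, socket.error) as err:
        # e.g. [Errno -2] Name or service not known
        print("skipping {}: {}".format(url, err))
        return None
    return file_name
```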
235,930
25,962,080,750
IssuesEvent
2022-12-19 01:04:23
mgh3326/making_page
https://api.github.com/repos/mgh3326/making_page
opened
CVE-2022-23517 (High) detected in rails-html-sanitizer-1.3.0.gem
security vulnerability
## CVE-2022-23517 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rails-html-sanitizer-1.3.0.gem</b></p></summary> <p>HTML sanitization for Rails applications</p> <p>Library home page: <a href="https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem">https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem</a></p> <p>Path to dependency file: /Gemfile.lock</p> <p>Path to vulnerable library: /var/lib/gems/2.3.0/cache/rails-html-sanitizer-1.3.0.gem</p> <p> Dependency Hierarchy: - web-console-4.0.1.gem (Root Library) - actionview-6.0.1.gem - :x: **rails-html-sanitizer-1.3.0.gem** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> rails-html-sanitizer is responsible for sanitizing HTML fragments in Rails applications. Certain configurations of rails-html-sanitizer < 1.4.4 use an inefficient regular expression that is susceptible to excessive backtracking when attempting to sanitize certain SVG attributes. This may lead to a denial of service through CPU resource consumption. This issue has been patched in version 1.4.4. <p>Publish Date: 2022-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23517>CVE-2022-23517</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-5x79-w82f-gw8w">https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-5x79-w82f-gw8w</a></p> <p>Release Date: 2022-12-14</p> <p>Fix Resolution: rails-html-sanitizer - 1.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-23517 (High) detected in rails-html-sanitizer-1.3.0.gem - ## CVE-2022-23517 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rails-html-sanitizer-1.3.0.gem</b></p></summary> <p>HTML sanitization for Rails applications</p> <p>Library home page: <a href="https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem">https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem</a></p> <p>Path to dependency file: /Gemfile.lock</p> <p>Path to vulnerable library: /var/lib/gems/2.3.0/cache/rails-html-sanitizer-1.3.0.gem</p> <p> Dependency Hierarchy: - web-console-4.0.1.gem (Root Library) - actionview-6.0.1.gem - :x: **rails-html-sanitizer-1.3.0.gem** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> rails-html-sanitizer is responsible for sanitizing HTML fragments in Rails applications. Certain configurations of rails-html-sanitizer < 1.4.4 use an inefficient regular expression that is susceptible to excessive backtracking when attempting to sanitize certain SVG attributes. This may lead to a denial of service through CPU resource consumption. This issue has been patched in version 1.4.4. <p>Publish Date: 2022-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23517>CVE-2022-23517</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-5x79-w82f-gw8w">https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-5x79-w82f-gw8w</a></p> <p>Release Date: 2022-12-14</p> <p>Fix Resolution: rails-html-sanitizer - 1.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in rails html sanitizer gem cve high severity vulnerability vulnerable library rails html sanitizer gem html sanitization for rails applications library home page a href path to dependency file gemfile lock path to vulnerable library var lib gems cache rails html sanitizer gem dependency hierarchy web console gem root library actionview gem x rails html sanitizer gem vulnerable library found in base branch master vulnerability details rails html sanitizer is responsible for sanitizing html fragments in rails applications certain configurations of rails html sanitizer use an inefficient regular expression that is susceptible to excessive backtracking when attempting to sanitize certain svg attributes this may lead to a denial of service through cpu resource consumption this issue has been patched in version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rails html sanitizer step up your open source security game with mend
0
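CVE-2022-23517 in the record above is a regular-expression denial of service. The vulnerable regex itself lives in the Ruby gem, but the failure mode, catastrophic backtracking, is easy to demonstrate with an illustrative pattern (this is not rails-html-sanitizer's actual expression).

```python
# Illustration of catastrophic backtracking: a nested quantifier forces
# the regex engine to try exponentially many ways to split the 'a's
# once the trailing 'b' makes a match impossible.
import re
import time

pattern = re.compile(r"^(a+)+$")  # illustrative, not the gem's regex

for n in (10, 20, 24):
    text = "a" * n + "b"
    start = time.perf_counter()
    pattern.match(text)
    print(n, round(time.perf_counter() - start, 3), "seconds")

# Runtime roughly doubles per extra 'a' -- the DoS vector. The fix in
# rails-html-sanitizer 1.4.4 replaces the susceptible expression.
```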
801,345
28,484,642,539
IssuesEvent
2023-04-18 06:57:11
saud-alnasser/starflux
https://api.github.com/repos/saud-alnasser/starflux
opened
feat: logging
✨ type: enhancement 📌 area: framework 🗃️ status: backlog 🏝 priority: low
### Is your feature request related to a problem? Please describe. We need to provide a default logging mechanism. ### Describe the solution you'd like to see We could provide a plugin that logs all the framework events by hooking into them; it could also offer the option of using a custom logger. ### Describe alternate solutions _No response_ ### Additional information _No response_ ### 👨‍👧‍👦 Contributing - [X] 🙋‍♂️ Yes, I will create a pull request implementing this feature!
1.0
feat: logging - ### Is your feature request related to a problem? Please describe. We need to provide a default logging mechanism. ### Describe the solution you'd like to see We could provide a plugin that logs all the framework events by hooking into them; it could also offer the option of using a custom logger. ### Describe alternate solutions _No response_ ### Additional information _No response_ ### 👨‍👧‍👦 Contributing - [X] 🙋‍♂️ Yes, I will create a pull request implementing this feature!
non_process
feat logging is your feature request related to a problem please describe we need to provide a default logging mechanism describe the solution you d like to see we could provide a plugin that logs all the framework events by hooking into them it could also offer the option of using a custom logger describe alternate solutions no response additional information no response 👨‍👧‍👦 contributing 🙋‍♂️ yes i will create a pull request implementing this feature
0
137,131
5,294,234,388
IssuesEvent
2017-02-09 10:12:30
dhis2/settings-app
https://api.github.com/repos/dhis2/settings-app
closed
Replace Form with FormBuilder
help wanted priority:medium refactor
The `DataApprovalLevels` and `Oauth2Clients` components are still using the `Form` component. This should be replaced with the `FormBuilder` component.
1.0
Replace Form with FormBuilder - The `DataApprovalLevels` and `Oauth2Clients` components are still using the `Form` component. This should be replaced with the `FormBuilder` component.
non_process
replace form with formbuilder the dataapprovallevels and components are still using the form component this should be replaced with the formbuilder component
0
16,478
21,413,554,991
IssuesEvent
2022-04-22 08:40:37
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Write basic tests for cockroachdb in prisma/prisma
process/candidate topic: internal team/schema topic: cockroachdb
We currently do not test on cockroachdb. @jkomyno This issue is currently mostly a reminder to discuss.
1.0
Write basic tests for cockroachdb in prisma/prisma - We currently do not test on cockroachdb. @jkomyno This issue is currently mostly a reminder to discuss.
process
write basic tests for cockroachdb in prisma prisma we currently do not test on cockroachdb jkomyno this issue is currently mostly a reminder to discuss
1
21,117
3,461,696,097
IssuesEvent
2015-12-20 09:26:25
arti01/jkursy
https://api.github.com/repos/arti01/jkursy
closed
naming conventions
auto-migrated Priority-High Type-Defect
``` For marketing and public-relations reasons etc., we will have to pay special attention to the naming that is displayed. E.g. we will not call course participants "participants" or "pupils"; to make them feel valued and appreciated we will call them "students". Those who run the courses we will call the staff, and in specific cases we will call them instructors or masters of a course, exercises, or workshops. A course and exercises rather go with an "instructor". Consultations or workshops may be with a "master", though we should be careful with that one. "Instructor" is safer. I don't want to use the name "lecturer". Only "staff", "instructor", "master", and the last one should be used almost exclusively in connection with widely known people. I don't want to use the word "lesson", only, depending on context, "course stage(s)" or "theoretical materials"; sometimes "article" may fit the context. What matters most is which words are displayed to people who are not logged in and to course participants, because our course offer is addressed to them. It is good if the lecturers see analogous "labels", so that there are no communication problems. ``` Original issue reported on code.google.com by `juko...@gmail.com` on 28 Mar 2011 at 6:23
1.0
naming conventions - ``` For marketing and public-relations reasons etc., we will have to pay special attention to the naming that is displayed. E.g. we will not call course participants "participants" or "pupils"; to make them feel valued and appreciated we will call them "students". Those who run the courses we will call the staff, and in specific cases we will call them instructors or masters of a course, exercises, or workshops. A course and exercises rather go with an "instructor". Consultations or workshops may be with a "master", though we should be careful with that one. "Instructor" is safer. I don't want to use the name "lecturer". Only "staff", "instructor", "master", and the last one should be used almost exclusively in connection with widely known people. I don't want to use the word "lesson", only, depending on context, "course stage(s)" or "theoretical materials"; sometimes "article" may fit the context. What matters most is which words are displayed to people who are not logged in and to course participants, because our course offer is addressed to them. It is good if the lecturers see analogous "labels", so that there are no communication problems. ``` Original issue reported on code.google.com by `juko...@gmail.com` on 28 Mar 2011 at 6:23
non_process
naming conventions for marketing and public relations reasons etc we will have to pay special attention to the naming that is displayed e g we will not call course participants participants or pupils to make them feel valued and appreciated we will call them students those who run the courses we will call the staff and in specific cases we will call them instructors or masters of a course exercises or workshops a course and exercises rather go with an instructor consultations or workshops may be with a master though we should be careful with that one instructor is safer i don t want to use the name lecturer only staff instructor master and the last one should be used almost exclusively in connection with widely known people i don t want to use the word lesson only depending on context course stage s or theoretical materials sometimes article may fit the context what matters most is which words are displayed to people who are not logged in and to course participants because our course offer is addressed to them it is good if the lecturers see analogous labels so that there are no communication problems original issue reported on code google com by juko gmail com on mar at
0
129,783
27,562,001,257
IssuesEvent
2023-03-07 22:59:48
radiantearth/stac-browser
https://api.github.com/repos/radiantearth/stac-browser
closed
Slimmed down production build
codebase
I'm hoping to bundle STAC Browser in an executable and am looking for ways to trim the build size. By default, `npm run build` generates 12MB of files. I can get rid of 8.3MB of that by deleting the `*.map` files. The `report.html` is another 500K. I see that I can configure Vue to configure webpack to skip the sourcemaps in production mode: ```diff --- a/vue.config.js +++ b/vue.config.js @@ -68,6 +68,8 @@ const config = { }; if (process.env.NODE_ENV === 'production') { + config.configureWebpack.devtool = false; + config.configureWebpack.plugins.push(new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false ``` But I imagine that others might want these sourcemaps. Wondering if you have any other ideas for configuring things to get a smaller production build. (As an aside, I'm not able to make any use of the sourcemap files – I see them in the debugger, but they are not useful for debugging – I'm new to Vue, so not sure what is expected here.)
1.0
Slimmed down production build - I'm hoping to bundle STAC Browser in an executable and am looking for ways to trim the build size. By default, `npm run build` generates 12MB of files. I can get rid of 8.3MB of that by deleting the `*.map` files. The `report.html` is another 500K. I see that I can configure Vue to configure webpack to skip the sourcemaps in production mode: ```diff --- a/vue.config.js +++ b/vue.config.js @@ -68,6 +68,8 @@ const config = { }; if (process.env.NODE_ENV === 'production') { + config.configureWebpack.devtool = false; + config.configureWebpack.plugins.push(new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false ``` But I imagine that others might want these sourcemaps. Wondering if you have any other ideas for configuring things to get a smaller production build. (As an aside, I'm not able to make any use of the sourcemap files – I see them in the debugger, but they are not useful for debugging – I'm new to Vue, so not sure what is expected here.)
non_process
slimmed down production build i m hoping to bundle stac browser in an executable and am looking for ways to trim the build size by default npm run build generates of files i can get rid of of that by deleting the map files the report html is another i see that i can configure vue to configure webpack to skip the sourcemaps in production mode diff a vue config js b vue config js const config if process env node env production config configurewebpack devtool false config configurewebpack plugins push new bundleanalyzerplugin analyzermode static openanalyzer false but i imagine that others might want these sourcemaps wondering if you have any other ideas for configuring things to get a smaller production build as an aside i m not able to make any use of the sourcemap files – i see them in the debugger but they are not useful for debugging – i m new to vue so not sure what is expected here
0
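Besides disabling sourcemaps at build time as in the diff above, the same ~8.8 MB can be reclaimed after the fact by pruning the generated files. A small sketch, assuming the default Vue CLI output directory `dist/`:

```python
# Sketch: prune sourcemaps and the bundle-analyzer report from a
# finished build (assumes the default Vue CLI output dir "dist/").
from pathlib import Path

dist = Path("dist")
removed = 0
for map_file in dist.rglob("*.map"):
    removed += map_file.stat().st_size
    map_file.unlink()

report = dist / "report.html"
if report.exists():
    removed += report.stat().st_size
    report.unlink()

print(f"reclaimed {removed / 1e6:.1f} MB")
```

Keeping the webpack-level switch is still the cleaner option for anyone who never wants the maps, while post-build pruning preserves them for developers who do.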
292,620
22,032,678,727
IssuesEvent
2022-05-28 04:42:41
nokitakaze/tornado-cash-encrypted-note.net
https://api.github.com/repos/nokitakaze/tornado-cash-encrypted-note.net
closed
Research X25519-XSalsa20-Poly1305 collisions between private 128-bit keys
documentation
Failed unit tests: https://ci.appveyor.com/project/nokitakaze/tornado-cash-encrypted-note-net/builds/43685876 Temporary work-around in 7473a5a
1.0
Research X25519-XSalsa20-Poly1305 collisions between private 128-bit keys - Failed unit tests: https://ci.appveyor.com/project/nokitakaze/tornado-cash-encrypted-note-net/builds/43685876 Temporary work-around in 7473a5a
non_process
research collisions between private bit keys failed unit tests temporary work around in
0
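For context on the research question in the record above: NaCl-style X25519-XSalsa20-Poly1305 private keys are 32 bytes, so a 128-bit key occupies only half that space, and the question is whether distinct 16-byte seeds can collide after clamping and public-key derivation. A hedged sketch using PyNaCl to probe this; the zero-padding convention is an assumption for illustration, not necessarily how tornado-cash encrypted notes derive keys.

```python
# Sketch: derive X25519 public keys from 16-byte (128-bit) seeds padded
# to 32 bytes, and look for public-key collisions among random seeds.
# The zero-padding convention is an assumption for illustration only.
import os
from nacl.public import PrivateKey

seen = {}
for _ in range(10_000):
    seed16 = os.urandom(16)
    sk = PrivateKey(seed16 + b"\x00" * 16)  # pad 128-bit seed to 32 bytes
    pk = bytes(sk.public_key)
    if pk in seen and seen[pk] != seed16:
        print("collision:", seen[pk].hex(), seed16.hex())
    seen[pk] = seed16

print("checked", len(seen), "distinct public keys")
```

With only 10,000 samples no collision is expected; the sketch just shows the mechanical setup a larger-scale search (or a unit test like the failing one referenced) would build on.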
7,192
10,331,794,037
IssuesEvent
2019-09-02 19:53:05
Ultimate-Hosts-Blacklist/whitelist
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
closed
m.facebook.com
whitelisting process
*@xxcriticxx commented on Dec 29, 2018, 2:14 AM UTC:* Match found in [https://hosts.ubuntu101.co.za/hosts](https://hosts.ubuntu101.co.za/hosts): m.facebook.com *This issue was moved by [funilrys](https://github.com/funilrys) from [mitchellkrogza/Ultimate.Hosts.Blacklist#493](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/493).*
1.0
m.facebook.com - *@xxcriticxx commented on Dec 29, 2018, 2:14 AM UTC:* Match found in [https://hosts.ubuntu101.co.za/hosts](https://hosts.ubuntu101.co.za/hosts): m.facebook.com *This issue was moved by [funilrys](https://github.com/funilrys) from [mitchellkrogza/Ultimate.Hosts.Blacklist#493](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/493).*
process
m facebook com xxcriticxx commented on dec am utc match found in m facebook com this issue was moved by from
1
605,737
18,739,922,008
IssuesEvent
2021-11-04 12:27:15
boostcampwm-2021/web14-salondesrefuses
https://api.github.com/repos/boostcampwm-2021/web14-salondesrefuses
closed
(높음)[FE] 전시회 리스트 레이아웃
🚀 Front Priority: High
## 📃 이슈 내용 전시회 리스트 레이아웃 ## ✅ 체크 리스트 - [ ] 전시회 리스트 레이아웃 ## 📌 레퍼런스
1.0
(높음)[FE] 전시회 리스트 레이아웃 - ## 📃 이슈 내용 전시회 리스트 레이아웃 ## ✅ 체크 리스트 - [ ] 전시회 리스트 레이아웃 ## 📌 레퍼런스
non_process
높음 전시회 리스트 레이아웃 📃 이슈 내용 전시회 리스트 레이아웃 ✅ 체크 리스트 전시회 리스트 레이아웃 📌 레퍼런스
0
20,355
27,014,139,524
IssuesEvent
2023-02-10 17:45:22
MPMG-DCC-UFMG/C01
https://api.github.com/repos/MPMG-DCC-UFMG/C01
opened
Bug selecting options on the Recreio transparency portal
[1] Bug [0] Desenvolvimento [3] Processamento Dinâmico
## Expected Behavior Select and download the documents for each of the years in a select on the transparency portal of the municipality of Recreio (http://www.mgcidades.com.br/index.php?option=com_contpubl&idcid=315410&mgcidades=1) in the area Tribunal de Contas da União / Demonstrativos de Receitas e Despesas. ## Current Behavior Unlike the result of a manual navigation, which selects each of the available years and downloads the Receitas data for each one, all the documents downloaded with the configuration in the Crawl Specification section refer to the year 2022. That is, even with a loop that iterates over all the years in the select, the 2022 data page is accessed every time. ## Steps to reproduce the error 1. Create a crawler with the specification given in the next section 2. Run the crawl and wait for it to finish 3. Verify that all the downloaded documents refer to the year 2022 ## Crawl Specification Configuration file: ```json { "source_name": "Coleta Teste de Receitas de Recreio", "base_url": "http://www.mgcidades.com.br/index.php?option=com_contpubl&idcid=315410&mgcidades=1", "obey_robots": false, "data_path": "home/isabel/recreio_receitas", "request_type": "GET", "form_request_type": "POST", "antiblock_download_delay": 2, "antiblock_autothrottle_enabled": false, "antiblock_autothrottle_start_delay": 2, "antiblock_autothrottle_max_delay": 10, "antiblock_ip_rotation_enabled": false, "antiblock_ip_rotation_type": "tor", "antiblock_max_reqs_per_ip": 10, "antiblock_max_reuse_rounds": 10, "antiblock_proxy_list": "", "antiblock_user_agent_rotation_enabled": false, "antiblock_reqs_per_user_agent": 100, "antiblock_user_agents_list": "", "antiblock_insert_cookies_enabled": false, "antiblock_cookies_list": "", "captcha": "none", "has_webdriver": false, "webdriver_path": "", "img_xpath": "", "sound_xpath": "", "dynamic_processing": true, "skip_iter_errors": false, "explore_links": false, "link_extractor_max_depth": null, "link_extractor_allow_url": "", "link_extractor_allow_domains": "", "link_extractor_tags": "", "link_extractor_attrs": "", "link_extractor_check_type": false, "link_extractor_process_value": "", "download_files": false, "download_files_allow_url": "", "download_files_allow_extensions": "", "download_files_allow_domains": "", "download_files_tags": "", "download_files_attrs": "", "download_files_process_value": "", "download_files_check_large_content": true, "download_imgs": false, "steps": "{\"step\":\"root\",\"depth\":0,\"children\":[{\"step\":\"espere\",\"depth\":1,\"arguments\":{\"segundos\":\"4\"}},{\"step\":\"screenshot\",\"depth\":1,\"arguments\":{}},{\"step\":\"clique\",\"depth\":1,\"arguments\":{\"elemento\":\"\\\"//h4[a/span='Tribunal de Contas da Uni\u00e3o']\\\"\"}},{\"step\":\"espere\",\"depth\":1,\"arguments\":{\"segundos\":\"3\"}},{\"step\":\"screenshot\",\"depth\":1,\"arguments\":{}},{\"step\":\"clique\",\"depth\":1,\"arguments\":{\"elemento\":\"\\\"//a[text()='Demonstrativo de Receitas e Despesas']\\\"\"}},{\"step\":\"espere\",\"depth\":1,\"arguments\":{\"segundos\":\"3\"}},{\"step\":\"screenshot\",\"depth\":1,\"arguments\":{}},{\"step\":\"para_cada\",\"depth\":1,\"iterator\":\"ano\",\"children\":[{\"step\":\"se\",\"depth\":2,\"children\":[{\"step\":\"selecione\",\"depth\":3,\"arguments\":{\"xpath\":\"\\\"/html/body/section/div/div/section/div[3]/select\\\"\",\"opcao\":\"ano\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"6\"}},{\"step\":\"screenshot\",\"depth\":3,\"arguments\":{}},{\"step\":\"clique\",\"depth\":3,\"arguments\":{\"elemento\":\"\\\"//a[text()='Demonstrativo de Receitas']\\\"\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"2\"}},{\"step\":\"clique\",\"depth\":3,\"arguments\":{\"elemento\":\"\\\"//h4[a/span='Tribunal de Contas da Uni\u00e3o']\\\"\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"3\"}},{\"step\":\"clique\",\"depth\":3,\"arguments\":{\"elemento\":\"\\\"//a[text()='Demonstrativo de Receitas e Despesas']\\\"\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"3\"}}],\"condition\":{\"call\":{\"step\":\"objeto\",\"arguments\":{\"objeto\":\"ano != 'Escolha'\"}}}}],\"iterable\":{\"call\":{\"step\":\"opcoes\",\"arguments\":{\"xpath\":\"\\\"/html/body/section/div/div/section/div[3]/select\\\"\"}}}}]}", "encoding_detection_method": 1, "expected_runtime_category": "fast", "templated_url_parameter_handlers": [], "static_form_parameter_handlers": [], "templated_url_response_handlers": [], "static_form_response_handlers": [], "crawler_id": 478, "instance_id": "167605033231804" } ```
1.0
Bug selecting options on the Recreio transparency portal - ## Expected Behavior Select and download the documents for each of the years in a select on the transparency portal of the municipality of Recreio (http://www.mgcidades.com.br/index.php?option=com_contpubl&idcid=315410&mgcidades=1) in the area Tribunal de Contas da União / Demonstrativos de Receitas e Despesas. ## Current Behavior Unlike the result of a manual navigation, which selects each of the available years and downloads the Receitas data for each one, all the documents downloaded with the configuration in the Crawl Specification section refer to the year 2022. That is, even with a loop that iterates over all the years in the select, the 2022 data page is accessed every time. ## Steps to reproduce the error 1. Create a crawler with the specification given in the next section 2. Run the crawl and wait for it to finish 3. Verify that all the downloaded documents refer to the year 2022 ## Crawl Specification Configuration file: ```json { "source_name": "Coleta Teste de Receitas de Recreio", "base_url": "http://www.mgcidades.com.br/index.php?option=com_contpubl&idcid=315410&mgcidades=1", "obey_robots": false, "data_path": "home/isabel/recreio_receitas", "request_type": "GET", "form_request_type": "POST", "antiblock_download_delay": 2, "antiblock_autothrottle_enabled": false, "antiblock_autothrottle_start_delay": 2, "antiblock_autothrottle_max_delay": 10, "antiblock_ip_rotation_enabled": false, "antiblock_ip_rotation_type": "tor", "antiblock_max_reqs_per_ip": 10, "antiblock_max_reuse_rounds": 10, "antiblock_proxy_list": "", "antiblock_user_agent_rotation_enabled": false, "antiblock_reqs_per_user_agent": 100, "antiblock_user_agents_list": "", "antiblock_insert_cookies_enabled": false, "antiblock_cookies_list": "", "captcha": "none", "has_webdriver": false, "webdriver_path": "", "img_xpath": "", "sound_xpath": "", "dynamic_processing": true, "skip_iter_errors": false, "explore_links": false, "link_extractor_max_depth": null, "link_extractor_allow_url": "", "link_extractor_allow_domains": "", "link_extractor_tags": "", "link_extractor_attrs": "", "link_extractor_check_type": false, "link_extractor_process_value": "", "download_files": false, "download_files_allow_url": "", "download_files_allow_extensions": "", "download_files_allow_domains": "", "download_files_tags": "", "download_files_attrs": "", "download_files_process_value": "", "download_files_check_large_content": true, "download_imgs": false, "steps": "{\"step\":\"root\",\"depth\":0,\"children\":[{\"step\":\"espere\",\"depth\":1,\"arguments\":{\"segundos\":\"4\"}},{\"step\":\"screenshot\",\"depth\":1,\"arguments\":{}},{\"step\":\"clique\",\"depth\":1,\"arguments\":{\"elemento\":\"\\\"//h4[a/span='Tribunal de Contas da Uni\u00e3o']\\\"\"}},{\"step\":\"espere\",\"depth\":1,\"arguments\":{\"segundos\":\"3\"}},{\"step\":\"screenshot\",\"depth\":1,\"arguments\":{}},{\"step\":\"clique\",\"depth\":1,\"arguments\":{\"elemento\":\"\\\"//a[text()='Demonstrativo de Receitas e Despesas']\\\"\"}},{\"step\":\"espere\",\"depth\":1,\"arguments\":{\"segundos\":\"3\"}},{\"step\":\"screenshot\",\"depth\":1,\"arguments\":{}},{\"step\":\"para_cada\",\"depth\":1,\"iterator\":\"ano\",\"children\":[{\"step\":\"se\",\"depth\":2,\"children\":[{\"step\":\"selecione\",\"depth\":3,\"arguments\":{\"xpath\":\"\\\"/html/body/section/div/div/section/div[3]/select\\\"\",\"opcao\":\"ano\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"6\"}},{\"step\":\"screenshot\",\"depth\":3,\"arguments\":{}},{\"step\":\"clique\",\"depth\":3,\"arguments\":{\"elemento\":\"\\\"//a[text()='Demonstrativo de Receitas']\\\"\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"2\"}},{\"step\":\"clique\",\"depth\":3,\"arguments\":{\"elemento\":\"\\\"//h4[a/span='Tribunal de Contas da Uni\u00e3o']\\\"\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"3\"}},{\"step\":\"clique\",\"depth\":3,\"arguments\":{\"elemento\":\"\\\"//a[text()='Demonstrativo de Receitas e Despesas']\\\"\"}},{\"step\":\"espere\",\"depth\":3,\"arguments\":{\"segundos\":\"3\"}}],\"condition\":{\"call\":{\"step\":\"objeto\",\"arguments\":{\"objeto\":\"ano != 'Escolha'\"}}}}],\"iterable\":{\"call\":{\"step\":\"opcoes\",\"arguments\":{\"xpath\":\"\\\"/html/body/section/div/div/section/div[3]/select\\\"\"}}}}]}", "encoding_detection_method": 1, "expected_runtime_category": "fast", "templated_url_parameter_handlers": [], "static_form_parameter_handlers": [], "templated_url_response_handlers": [], "static_form_response_handlers": [], "crawler_id": 478, "instance_id": "167605033231804" } ```
process
bug selecting options on the recreio transparency portal expected behavior select and download the documents for each of the years in a select on the transparency portal of the municipality of recreio in the area tribunal de contas da união demonstrativos de receitas e despesas current behavior unlike the result of a manual navigation which selects each of the available years and downloads the receitas data for each one all the documents downloaded with the configuration in the crawl specification section refer to the year that is even with a loop that iterates over all the years in the select the data page is accessed every time steps to reproduce the error create a crawler with the specification given in the next section run the crawl and wait for it to finish verify that all the downloaded documents refer to the year crawl specification configuration file json source name coleta teste de receitas de recreio base url obey robots false data path home isabel recreio receitas request type get form request type post antiblock download delay antiblock autothrottle enabled false antiblock autothrottle start delay antiblock autothrottle max delay antiblock ip rotation enabled false antiblock ip rotation type tor antiblock max reqs per ip antiblock max reuse rounds antiblock proxy list antiblock user agent rotation enabled false antiblock reqs per user agent antiblock user agents list antiblock insert cookies enabled false antiblock cookies list captcha none has webdriver false webdriver path img xpath sound xpath dynamic processing true skip iter errors false explore links false link extractor max depth null link extractor allow url link extractor allow domains link extractor tags link extractor attrs link extractor check type false link extractor process value download files false download files allow url download files allow extensions download files allow domains download files tags download files attrs download files process value download files check large content true download imgs false steps step root depth children step espere depth arguments segundos step screenshot depth arguments step clique depth arguments elemento a step espere depth arguments segundos step screenshot depth arguments step para cada depth iterator ano children select opcao ano step espere depth arguments segundos step screenshot depth arguments step clique depth arguments elemento a step espere depth arguments segundos step clique depth arguments elemento step espere depth arguments segundos step clique depth arguments elemento a step espere depth arguments segundos condition call step objeto arguments objeto ano escolha iterable call step opcoes arguments xpath html body section div div section div select encoding detection method expected runtime category fast templated url parameter handlers static form parameter handlers templated url response handlers static form response handlers crawler id instance id
1
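A plausible root cause for the record above always landing on 2022 is that the portal re-renders the select (or resets to the newest year) after each navigation, so the loop keeps acting on a stale element. A hedged Selenium sketch of re-locating the select and re-applying the choice on every iteration; this is illustrative only, not the C01 crawler's actual step engine.

```python
# Sketch: re-locate the <select> each pass so the chosen year is applied
# to the live element, not a stale reference (illustrative, not the
# crawler's real API).
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

SELECT_XPATH = "/html/body/section/div/div/section/div[3]/select"

driver = webdriver.Firefox()
driver.get("http://www.mgcidades.com.br/index.php"
           "?option=com_contpubl&idcid=315410&mgcidades=1")
# ... the clicks that open "Tribunal de Contas da Uniao" /
# "Demonstrativo de Receitas e Despesas" would go here, as in the steps ...

years = [opt.text for opt in
         Select(driver.find_element(By.XPATH, SELECT_XPATH)).options
         if opt.text != "Escolha"]

for year in years:
    # Fresh lookup every iteration: a stale element reference would
    # silently leave the portal on its default (newest) year.
    select = Select(driver.find_element(By.XPATH, SELECT_XPATH))
    select.select_by_visible_text(year)
    time.sleep(6)  # let the portal load that year's data
    assert year in driver.page_source, f"still showing another year, not {year}"
```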
813,848
30,475,759,392
IssuesEvent
2023-07-17 16:23:05
GoogleCloudPlatform/cloud-code-vscode
https://api.github.com/repos/GoogleCloudPlatform/cloud-code-vscode
closed
Using too much cpu
kind/bug priority/p1 area/dependency installer
Type: <b>Bug</b> Vscode was using a full cpu thread's capacity. After running bisect, this extension was found to be the one responsible for the performance issues Extension version: 1.21.3 VS Code version: Code 1.76.0 (92da9481c0904c6adfe372c12da3b7748d74bdcb, 2023-03-01T10:25:16.105Z) OS version: Linux x64 6.2.2-arch1-1 Modes: Sandboxed: Yes <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (12 x 3564)| |GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off| |Load (avg)|2, 2, 2| |Memory (System)|15.48GB (11.51GB free)| |Process Argv|--unity-launch --crash-reporter-id 590f3d56-f46b-4997-aa42-697ca699b87d| |Screen Reader|no| |VM|0%| |DESKTOP_SESSION|xmonad| |XDG_CURRENT_DESKTOP|| |XDG_SESSION_DESKTOP|| |XDG_SESSION_TYPE|x11| </details><details> <summary>A/B Experiments</summary> ``` vsliv368:30146709 vsreu685:30147344 python383cf:30185419 vspor879:30202332 vspor708:30202333 vspor363:30204092 vslsvsres303:30308271 pythonvspyl392:30443607 vserr242:30382549 pythontb:30283811 vsjup518:30340749 pythonptprofiler:30281270 vshan820:30294714 vstes263:30335439 vscorecescf:30445987 pythondataviewer:30285071 vscod805cf:30301675 binariesv615:30325510 bridge0708:30335490 bridge0723:30353136 cmake_vspar411:30581797 vsaa593cf:30376535 pythonvs932:30410667 cppdebug:30492333 vscaac:30438847 vsclangdf:30486550 c4g48928:30535728 dsvsc012:30540252 pynewext54:30669237 azure-dev_surveyone:30548225 pyindex848:30662994 nodejswelcome1:30587005 282f8724:30602487 pyind779:30671433 f6dab269:30613381 pythonsymbol12:30671437 6233i204:30672705 vsctsb:30677850 vscodedisable:30660115 pythonb192cf:30669361 funwalk2:30676043 ``` </details> <!-- generated by issue reporter -->
1.0
Using too much cpu - Type: <b>Bug</b> Vscode was using a full cpu thread's capacity. After running bisect, this extension was found to be the one responsible for the performance issues Extension version: 1.21.3 VS Code version: Code 1.76.0 (92da9481c0904c6adfe372c12da3b7748d74bdcb, 2023-03-01T10:25:16.105Z) OS version: Linux x64 6.2.2-arch1-1 Modes: Sandboxed: Yes <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (12 x 3564)| |GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off| |Load (avg)|2, 2, 2| |Memory (System)|15.48GB (11.51GB free)| |Process Argv|--unity-launch --crash-reporter-id 590f3d56-f46b-4997-aa42-697ca699b87d| |Screen Reader|no| |VM|0%| |DESKTOP_SESSION|xmonad| |XDG_CURRENT_DESKTOP|| |XDG_SESSION_DESKTOP|| |XDG_SESSION_TYPE|x11| </details><details> <summary>A/B Experiments</summary> ``` vsliv368:30146709 vsreu685:30147344 python383cf:30185419 vspor879:30202332 vspor708:30202333 vspor363:30204092 vslsvsres303:30308271 pythonvspyl392:30443607 vserr242:30382549 pythontb:30283811 vsjup518:30340749 pythonptprofiler:30281270 vshan820:30294714 vstes263:30335439 vscorecescf:30445987 pythondataviewer:30285071 vscod805cf:30301675 binariesv615:30325510 bridge0708:30335490 bridge0723:30353136 cmake_vspar411:30581797 vsaa593cf:30376535 pythonvs932:30410667 cppdebug:30492333 vscaac:30438847 vsclangdf:30486550 c4g48928:30535728 dsvsc012:30540252 pynewext54:30669237 azure-dev_surveyone:30548225 pyindex848:30662994 nodejswelcome1:30587005 282f8724:30602487 pyind779:30671433 f6dab269:30613381 pythonsymbol12:30671437 6233i204:30672705 vsctsb:30677850 vscodedisable:30660115 pythonb192cf:30669361 funwalk2:30676043 ``` </details> <!-- generated by issue reporter -->
non_process
using too much cpu type bug vscode was using a full cpu thread s capacity after running bisect this extension was found to be the one responsible for the performance issues extension version vs code version code os version linux modes sandboxed yes system info item value cpus intel r core tm cpu x gpu status canvas enabled canvas oop rasterization disabled off direct rendering display compositor disabled off ok gpu compositing enabled multiple raster threads enabled on opengl enabled on rasterization enabled raw draw disabled off ok skia renderer enabled on video decode disabled software video encode disabled software vulkan disabled off webgl enabled enabled webgpu disabled off load avg memory system free process argv unity launch crash reporter id screen reader no vm desktop session xmonad xdg current desktop xdg session desktop xdg session type a b experiments pythontb pythonptprofiler vscorecescf pythondataviewer cmake cppdebug vscaac vsclangdf azure dev surveyone vsctsb vscodedisable
0
3,207
6,264,633,135
IssuesEvent
2017-07-16 10:06:22
gaocegege/Processing.R
https://api.github.com/repos/gaocegege/Processing.R
closed
Fix the wrong indent in PDE editor
community/processing community/R difficulty/low priority/p0 size/small status/WIP type/bug
The indentation currently has bugs because the InputHandler was copied from Python Mode; we need to port R-style indentation into it.
1.0
Fix the wrong indent in PDE editor - The indentation currently has bugs because the InputHandler was copied from Python Mode; we need to port R-style indentation into it.
process
fix the wrong indent in pde editor the indentation currently has bugs because the inputhandler was copied from python mode we need to port r style indentation into it
1
17,432
12,370,062,805
IssuesEvent
2020-05-18 16:11:11
nwfsc-fram/boatnet
https://api.github.com/repos/nwfsc-fram/boatnet
closed
Install HDFS on Old Ironsides
Prj:infrastructure
- [ ] Install hadoop - [ ] Integrate with apache drill/ zeppelin (and document above steps for the future)
1.0
Install HDFS on Old Ironsides - - [ ] Install hadoop - [ ] Integrate with apache drill/ zeppelin (and document above steps for the future)
non_process
install hdfs on old ironsides install hadoop integrate with apache drill zeppelin and document above steps for the future
0
129,315
12,404,389,457
IssuesEvent
2020-05-21 15:27:23
dib-lab/charcoal
https://api.github.com/repos/dib-lab/charcoal
opened
compare and contrast charcoal with other MAG QC tools
documentation evaluation
in particular, * RefineM - https://github.com/dparks1134/RefineM * MAGpurify - https://github.com/snayfach/MAGpurify, which uses RefineM among others. Both use quite different techniques and probably less comprehensive (?) databases. RefineM is probably slower than charcoal and (per author) is very conservative in its choices.
1.0
compare and contrast charcoal with other MAG QC tools - in particular, * RefineM - https://github.com/dparks1134/RefineM * MAGpurify - https://github.com/snayfach/MAGpurify, which uses RefineM among others. Both use quite different techniques and probably less comprehensive (?) databases. RefineM is probably slower than charcoal and (per author) is very conservative in its choices.
non_process
compare and contrast charcoal with other mag qc tools in particular refinem magpurify which uses refinem among others both use quite different techniques and probably less comprehensive databases refinem is probably slower than charcoal and per author is very conservative in its choices
0
11,953
14,713,989,054
IssuesEvent
2021-01-05 11:13:08
2i2c-org/team-compass
https://api.github.com/repos/2i2c-org/team-compass
opened
Team sync for January W1 2021
team-process
Hey @2i2c-org/tech-team 👋 let's try out the first team (a)sync of 2021! - Check out [our team sync process here](https://2i2c.org/team-compass/tech-team-coordination/#bi-weekly-team-syncs) - Fill in your updates in [this HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) That's it! btw, do people actually get pinged when I @ tech-team? If so can you tell me below? :-)
1.0
Team sync for January W1 2021 - Hey @2i2c-org/tech-team 👋 let's try out the first team (a)sync of 2021! - Check out [our team sync process here](https://2i2c.org/team-compass/tech-team-coordination/#bi-weekly-team-syncs) - Fill in your updates in [this HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) That's it! btw, do people actually get pinged when I @ tech-team? If so can you tell me below? :-)
process
team sync for january hey org tech team 👋 let s try out the first team a sync of check out fill in your updates in that s it btw do people actually get pinged when i tech team if so can you tell me below
1
55,470
13,637,486,404
IssuesEvent
2020-09-25 07:55:19
virtualsatellite/VirtualSatellite4-Core
https://api.github.com/repos/virtualsatellite/VirtualSatellite4-Core
closed
Create target platform definition for linux
build comfort/usability feature
Add a second target platform definition with Linux-specific operating and windowing systems.
1.0
Create target platform definition for linux - Add a second target platform definition with a Linux-specific operating and windowing system.
non_process
create target platform definition for linux add a second target platform definition with a linux specific operating and windowing system
0
21,615
30,020,125,645
IssuesEvent
2023-06-26 22:19:00
The-Data-Alchemists-Manipal/MindWave
https://api.github.com/repos/The-Data-Alchemists-Manipal/MindWave
closed
License Plate Number Detection
image-processing
## 💥 Proposal As an active member of GSSOC 23, and working with OpenCV, I would love to implement the license plate detection feature in this project, with the actual time, plate image and other details. It would be an auto parking system. So please assign me this task.
1.0
License Plate Number Detection - ## 💥 Proposal As an active member of GSSOC 23, and working with OpenCV, I would love to implement the license plate detection feature in this project, with the actual time, plate image and other details. It would be an auto parking system. So please assign me this task.
process
license plate number detection 💥 proposal as an active member of gssoc and working with opencv i would love to implement the license plate detection feature in this project with the actual time plate image and other details it would be an auto parking system so please assign me this task
1
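Since the proposal above names OpenCV, here is a minimal contour-based plate localisation sketch using the classic Canny-plus-quadrilateral heuristic. The input file `car.jpg` and all thresholds are placeholders; this is not the contributor's eventual implementation.

```python
import cv2

img = cv2.imread("car.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)     # denoise while keeping edges
edges = cv2.Canny(gray, 30, 200)

contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]

plate = None
for c in contours:
    approx = cv2.approxPolyDP(c, 0.018 * cv2.arcLength(c, True), True)
    if len(approx) == 4:                         # plates are roughly rectangular
        x, y, w, h = cv2.boundingRect(approx)
        plate = img[y:y + h, x:x + w]
        break

if plate is not None:
    cv2.imwrite("plate.jpg", plate)              # crop to feed into OCR later
```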
460,400
13,209,156,793
IssuesEvent
2020-08-15 09:34:20
Atlantiss/NetherwingBugtracker
https://api.github.com/repos/Atlantiss/NetherwingBugtracker
closed
[Item] Knothide Armor Kit [ID: 25650] does not consume upon applied through trade window
Exploit/Abuse - Priority
**Description**: The armor pack is not consumed upon being applied to another player's gear piece through the trade window. The same armor kit can be used to endlessly enchant gear pieces with 8 stamina on different players without ever being consumed. **Current behaviour**: The armor pack is not consumed upon being applied to another player's gear piece through the trade window. The same armor kit can be used to endlessly enchant gear pieces with 8 stamina on different players without ever being consumed. **Expected behaviour**: The armor kit should be a one-time use and be consumed after being applied to one piece of gear. I have no link or proof of this, but it's pretty obvious to not exploit/dupe endless enchants. **Server Revision** - 3501:
1.0
[Item] Knothide Armor Kit [ID: 25650] does not consume upon applied through trade window - **Description**: The armor pack is not consumed upon being applied to another player's gear piece through the trade window. The same armor kit can be used to endlessly enchant gear pieces with 8 stamina on different players without ever being consumed. **Current behaviour**: The armor pack is not consumed upon being applied to another player's gear piece through the trade window. The same armor kit can be used to endlessly enchant gear pieces with 8 stamina on different players without ever being consumed. **Expected behaviour**: The armor kit should be a one-time use and be consumed after being applied to one piece of gear. I have no link or proof of this, but it's pretty obvious to not exploit/dupe endless enchants. **Server Revision** - 3501:
non_process
knothide armor kit does not consume upon applied through trade window description the armor pack is not consumed upon being applied to another players gear piece through the trade window the same armor kit can be used to endlessly enchant gear pieces with stamina on different players without ever being consumed current behaviour the armor pack is not consumed upon being applied to another players gear piece through the trade window the same armor kit can be used to endlessly enchant gear pieces with stamina on different players without ever being consumed expected behaviour the armor kit should be a one time use and be consumed after being applied to one piece of gear i have no link or proof of this but it s pretty obvious to not exploit dupe endless enchants server revision
0
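The expected behaviour in the report amounts to consuming the kit in the same step that applies the enchant, regardless of how the target item was reached. A hypothetical sketch of that invariant follows (all class and field names are invented; this is not the emulator's real API):

```python
from dataclasses import dataclass, field

@dataclass
class ArmorKit:
    enchant_id: int
    count: int = 1

@dataclass
class Item:
    enchants: list = field(default_factory=list)

def apply_armor_kit(kit: ArmorKit, target: Item) -> None:
    """Apply the kit's enchant and consume a charge in the same step.

    The exploit described above comes from enchanting first and consuming
    the reagent on a separate code path that the trade window never hits.
    """
    if kit.count < 1:
        raise ValueError("armor kit already consumed")
    target.enchants.append(kit.enchant_id)  # e.g. the +8 stamina enchant
    kit.count -= 1                          # consumed atomically with the apply

kit = ArmorKit(enchant_id=2841)  # hypothetical enchant id
item = Item()
apply_armor_kit(kit, item)
# A second application through any path (trade window included) now raises.
```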
4,647
7,494,997,540
IssuesEvent
2018-04-07 16:07:34
gkiar/reading
https://api.github.com/repos/gkiar/reading
closed
Paper: testname
imaging processing to read
URL: [http://testurl.io](http://testurl.io) ## This paper does... testdoes ## This paper does not... testdoesnt ## Other comments? testnotes
1.0
Paper: testname - URL: [http://testurl.io](http://testurl.io) ## This paper does... testdoes ## This paper does not... testdoesnt ## Other comments? testnotes
process
paper testname url this paper does testdoes this paper does not testdoesnt other comments testnotes
1
116,461
17,370,029,235
IssuesEvent
2021-07-30 12:49:25
lukebroganws/Java-Demo
https://api.github.com/repos/lukebroganws/Java-Demo
opened
CVE-2015-1832 (High) detected in derby-10.8.3.0.jar
security vulnerability
## CVE-2015-1832 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Library home page: <a href="http://db.apache.org/derby/derby/">http://db.apache.org/derby/derby/</a></p> <p>Path to dependency file: Java-Demo/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar,Java-Demo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/derby-10.8.3.0.jar</p> <p> Dependency Hierarchy: - :x: **derby-10.8.3.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/lukebroganws/Java-Demo/commit/d73a27e2fea07f94b9c092744aef285ec88e27c4">d73a27e2fea07f94b9c092744aef285ec88e27c4</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype. <p>Publish Date: 2016-10-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832>CVE-2015-1832</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p> <p>Release Date: 2016-10-03</p> <p>Fix Resolution: 10.12.1.1</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.derby","packageName":"derby","packageVersion":"10.8.3.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.derby:derby:10.8.3.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"10.12.1.1"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2015-1832","vulnerabilityDetails":"XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2015-1832 (High) detected in derby-10.8.3.0.jar - ## CVE-2015-1832 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Library home page: <a href="http://db.apache.org/derby/derby/">http://db.apache.org/derby/derby/</a></p> <p>Path to dependency file: Java-Demo/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar,Java-Demo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/derby-10.8.3.0.jar</p> <p> Dependency Hierarchy: - :x: **derby-10.8.3.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/lukebroganws/Java-Demo/commit/d73a27e2fea07f94b9c092744aef285ec88e27c4">d73a27e2fea07f94b9c092744aef285ec88e27c4</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype. <p>Publish Date: 2016-10-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832>CVE-2015-1832</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p> <p>Release Date: 2016-10-03</p> <p>Fix Resolution: 10.12.1.1</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.derby","packageName":"derby","packageVersion":"10.8.3.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.derby:derby:10.8.3.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"10.12.1.1"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2015-1832","vulnerabilityDetails":"XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in derby jar cve high severity vulnerability vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver library home page a href path to dependency file java demo pom xml path to vulnerable library canner repository org apache derby derby derby jar java demo target easybuggy snapshot web inf lib derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch main vulnerability details xml external entity xxe vulnerability in the sqlxmlutil code in apache derby before when a java security manager is not in place allows context dependent attackers to read arbitrary files or cause a denial of service resource consumption via vectors involving xmlvti and the xml datatype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache derby derby isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails xml external entity xxe vulnerability in the sqlxmlutil code in apache derby before when a java security manager is not in place allows context dependent attackers to read arbitrary files or cause a denial of service resource consumption via vectors involving xmlvti and the xml datatype vulnerabilityurl
0
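The suggested fix for the record above is a version bump in `pom.xml`. As an illustration of checking for the vulnerable coordinate, here is a small stdlib-only Python audit; the pom path and layout are assumptions about a typical Maven project, not details from the repository.

```python
import xml.etree.ElementTree as ET

NS = {"m": "http://maven.apache.org/POM/4.0.0"}
FIXED = (10, 12, 1, 1)  # first Derby release with the XXE fix

def derby_is_vulnerable(pom_path: str = "pom.xml") -> bool:
    """Return True if pom.xml pins org.apache.derby:derby below 10.12.1.1."""
    root = ET.parse(pom_path).getroot()
    for dep in root.iter("{http://maven.apache.org/POM/4.0.0}dependency"):
        gid = dep.findtext("m:groupId", default="", namespaces=NS)
        aid = dep.findtext("m:artifactId", default="", namespaces=NS)
        if (gid, aid) == ("org.apache.derby", "derby"):
            version = dep.findtext("m:version", default="0", namespaces=NS)
            parts = tuple(int(p) for p in version.split(".") if p.isdigit())
            return parts < FIXED
    return False

print(derby_is_vulnerable())  # True for the 10.8.3.0 pin shown above
```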
4,420
7,300,123,198
IssuesEvent
2018-02-26 22:26:48
nodejs/node
https://api.github.com/repos/nodejs/node
closed
inspector: implement sub process inspection.
child_process inspector
This is an umbrella bug for implementing the functionality outlined here - https://github.com/nodejs/diagnostics/issues/77 This implementation will land in several stages. The current plan is as follows: - [ ] Custom inspector message dispatcher. This stage will allow extending the Inspector protocol beyond the domains provided by V8. - [ ] Provide a Node.js implementation of the `Target` protocol as outlined [here](https://chromedevtools.github.io/devtools-protocol/tot/Target/) - [ ] Provide a communication channel between Node.js instances that runs outside the regular event loop (akin to the Inspector WS server). This is necessary so the child instances are still accessible while the head instance is busy or is suspended on a breakpoint. - [ ] Integrate with process.fork and process.spawn, if possible. This will make debugging child processes easier.
1.0
inspector: implement sub process inspection. - This is an umbrella bug for implementing the functionality outlined here - https://github.com/nodejs/diagnostics/issues/77 This implementation will land in several stages. The current plan is as follows: - [ ] Custom inspector message dispatcher. This stage will allow extending the Inspector protocol beyond the domains provided by V8. - [ ] Provide a Node.js implementation of the `Target` protocol as outlined [here](https://chromedevtools.github.io/devtools-protocol/tot/Target/) - [ ] Provide a communication channel between Node.js instances that runs outside the regular event loop (akin to the Inspector WS server). This is necessary so the child instances are still accessible while the head instance is busy or is suspended on a breakpoint. - [ ] Integrate with process.fork and process.spawn, if possible. This will make debugging child processes easier.
process
inspector implement sub process inspection this is an umbrella bug for implementing the functionality outlined here this implementation will land in several stages the current plan is as follows custom inspector message dispatcher this stage will allow extending the inspector protocol beyond the domains provided by the provide a node js implementation of the target protocol as outlined provide a communication channel between node js instances that runs outside the regular event loop akin to the inspector ws server this is necessary so the child instances are still accessible while the head instance is busy or is suspended on a breakpoint integrate with process fork and process spawn if possible this will make debugging child processes easier
1
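For context on the communication-channel stage above, the single-process inspector already exposes an HTTP discovery endpoint that lists attachable targets. A hedged sketch of querying it from Python, assuming a Node.js process was started with `--inspect` on the default port 9229:

```python
import json
import urllib.request

def list_inspector_targets(host: str = "127.0.0.1", port: int = 9229):
    """Query the Node.js inspector's /json endpoint for attachable targets."""
    with urllib.request.urlopen(f"http://{host}:{port}/json", timeout=5) as r:
        return json.load(r)

for target in list_inspector_targets():
    # Each entry carries a webSocketDebuggerUrl that a DevTools client
    # (or a future multi-process front end) can attach to.
    print(target.get("title"), target.get("webSocketDebuggerUrl"))
```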
14,226
17,147,367,039
IssuesEvent
2021-07-13 15:58:48
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Type Choice no longer works - along with others
Bug Process: Fixed
Renaming of the fields failed to also update the text in the code, causing several of the selections to be unavailable: "Text Choice" != "Text choice" <img width="1273" alt="Screen Shot 2021-05-10 at 8 59 42 PM" src="https://user-images.githubusercontent.com/4219075/117742515-f5cc7d80-b1d2-11eb-93c8-db0e9783233d.png">
1.0
Type Choice no longer works - along with others - Renaming of the fields failed to also update the text in the code, causing several of the selections to be unavailable: "Text Choice" != "Text choice" <img width="1273" alt="Screen Shot 2021-05-10 at 8 59 42 PM" src="https://user-images.githubusercontent.com/4219075/117742515-f5cc7d80-b1d2-11eb-93c8-db0e9783233d.png">
process
type choice no longer works along with others renaming of the fields failed to also update the text in the code causing several of the selections to be unavailable text choice text choice img width alt screen shot at pm src
1
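The root cause above is an exact string comparison against renamed UI text. A tiny generic illustration of the failure, and of a normalising comparison that would have tolerated the rename (plain Python, not the project's code):

```python
def same_choice(a: str, b: str) -> bool:
    """Compare UI labels case- and whitespace-insensitively."""
    return a.strip().casefold() == b.strip().casefold()

print("Text Choice" == "Text choice")             # False -> option goes missing
print(same_choice("Text Choice", "Text choice"))  # True
```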
2,251
5,088,651,184
IssuesEvent
2017-01-01 00:02:54
sw4j-org/tool-jpa-processor
https://api.github.com/repos/sw4j-org/tool-jpa-processor
opened
Handle @PrimaryKeyJoinColumn Annotation
annotation processor task
Handle the `@PrimaryKeyJoinColumn` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.44 PrimaryKeyJoinColumn Annotation
1.0
Handle @PrimaryKeyJoinColumn Annotation - Handle the `@PrimaryKeyJoinColumn` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.44 PrimaryKeyJoinColumn Annotation
process
handle primarykeyjoincolumn annotation handle the primarykeyjoincolumn annotation for a property or field see primarykeyjoincolumn annotation
1
14,495
17,604,292,577
IssuesEvent
2021-08-17 15:13:32
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[FEATURE][processing] Add a save features to file algorithm
Automatic new feature Processing Alg 3.16
Original commit: https://github.com/qgis/QGIS/commit/8c61a803fc5482c5c82380a6e25a5d88daa2882b by nirvn Unfortunately this naughty coder did not write a description... :-(
1.0
[FEATURE][processing] Add a save features to file algorithm - Original commit: https://github.com/qgis/QGIS/commit/8c61a803fc5482c5c82380a6e25a5d88daa2882b by nirvn Unfortunately this naughty coder did not write a description... :-(
process
add a save features to file algorithm original commit by nirvn unfortunately this naughty coder did not write a description
1
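From the QGIS Python console the new algorithm can be driven like any other Processing tool. A minimal sketch follows; the algorithm id `native:savefeatures` and the parameter names are my reading of the linked commit, so treat them as assumptions:

```python
import processing  # available inside the QGIS Python console

params = {
    "INPUT": "/data/rivers.shp",   # any loadable vector layer (placeholder path)
    "OUTPUT": "/tmp/rivers.gpkg",  # destination file; format inferred from extension
}
result = processing.run("native:savefeatures", params)
print(result["OUTPUT"])
```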
119,396
15,528,302,770
IssuesEvent
2021-03-13 10:18:55
Moesh/Calamity
https://api.github.com/repos/Moesh/Calamity
closed
Lobby design
feature level design
Entering the game as a spectator creates a disconnect between players and the world they're about to play in. I'd like to recreate the 1/2-sized miniature level found in earlier versions of Calamity. To me, this is the most iconic version of the lobby, and should be preserved for future versions of the game. Players should be able to traverse the level and learn important information about how to play the game. Secrets and easter eggs should be scattered about. We can tie some to advancements.
1.0
Lobby design - Entering the game as a spectator creates a disconnect between players and the world they're about to play in. I'd like to recreate the 1/2-sized miniature level found in earlier versions of Calamity. To me, this is the most iconic version of the lobby, and should be preserved for future versions of the game. Players should be able to traverse the level and learn important information about how to play the game. Secrets and easter eggs should be scattered about. We can tie some to advancements.
non_process
lobby design entering the game as a spectator creates a disconnect between players and the world they re about to play in i d like to recreate the sized miniature level found in earlier versions of calamity to me this is the most iconic version of the lobby and should be preserved for future versions of the game players should be able to traverse the level and learn important information about how to play the game secrets and easter eggs should be scattered about we can tie some to advancements
0
11,193
8,309,394,319
IssuesEvent
2018-09-24 06:18:27
AOSC-Dev/aosc-os-abbs
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
closed
lcms2: CVE-2018-16435
security to-stable
CVE IDs (if any) --------------------- CVE-2018-16435 Other security advisory IDs (if any) ------------------------------------------------ DSA-4284-1 Patches (if any) ---------------------- https://github.com/mm2/Little-CMS/commit/768f70ca405cd3159d990e962d54456773bb8cf8 PoC(s) (if any) -------------------
```
#include <stdio.h>
#include <lcms2.h>
#include "lcms2_internal.h"

int main(int argc, char* argv[]) {
    cmsIT8LoadFromFile(NULL, "AllocateDataSet.crash.IT8");
    return 0;
}
```
Additional descriptions (if applicable) ---------------------------------------------------- Quang Nguyen discovered an integer overflow in the Little CMS 2 colour management library, which could result in denial of service and potentially the execution of arbitrary code if a malformed IT8 calibration file is processed. Architectural progress -------------------------------- *Please remove any architecture to which the security vulnerabilities do not apply.* - [x] AMD64 (`amd64`) - [x] 32-bit Optional Environment (`optenv32`) - [x] AArch64 (`arm64`) - [x] ARMv7 (`armel`) - [x] PowerPC 64-bit BE (`ppc64`) - [x] PowerPC 32-bit BE (`powerpc`) - [x] RISC-V 64-bit (`riscv64`)
True
lcms2: CVE-2018-16435 - CVE IDs (if any) --------------------- CVE-2018-16435 Other security advisory IDs (if any) ------------------------------------------------ DSA-4284-1 Patches (if any) ---------------------- https://github.com/mm2/Little-CMS/commit/768f70ca405cd3159d990e962d54456773bb8cf8 PoC(s) (if any) -------------------
```
#include <stdio.h>
#include <lcms2.h>
#include "lcms2_internal.h"

int main(int argc, char* argv[]) {
    cmsIT8LoadFromFile(NULL, "AllocateDataSet.crash.IT8");
    return 0;
}
```
Additional descriptions (if applicable) ---------------------------------------------------- Quang Nguyen discovered an integer overflow in the Little CMS 2 colour management library, which could result in denial of service and potentially the execution of arbitrary code if a malformed IT8 calibration file is processed. Architectural progress -------------------------------- *Please remove any architecture to which the security vulnerabilities do not apply.* - [x] AMD64 (`amd64`) - [x] 32-bit Optional Environment (`optenv32`) - [x] AArch64 (`arm64`) - [x] ARMv7 (`armel`) - [x] PowerPC 64-bit BE (`ppc64`) - [x] PowerPC 32-bit BE (`powerpc`) - [x] RISC-V 64-bit (`riscv64`)
non_process
cve cve ids if any cve other security advisory ids if any dsa patches if any poc s if any include include include internal h int main int argc char argv null allocatedataset crash return additional descriptions if applicable quang nguyen discovered an integer overflow in the little cms colour management library which could result in denial of service and potentially the execution of arbitrary code if a malformed calibration file is processed architectural progress please remove any architecture to which the security vulnerabilities do not apply bit optional environment armel powerpc bit be powerpc bit be powerpc risc v bit
0
9,431
12,420,910,645
IssuesEvent
2020-05-23 14:23:01
arunkumar9t2/scabbard
https://api.github.com/repos/arunkumar9t2/scabbard
opened
Explore interactive HTML based reports for visualizing graphs
module:processor needs investigation
We could build an interactive web-based visualization for larger graphs. - https://d3js.org/ - https://visjs.org/
1.0
Explore interactive HTML based reports for visualizing graphs - We could build an interactive web-based visualization for larger graphs. - https://d3js.org/ - https://visjs.org/
process
explore interactive html based reports for visualizing graphs we could build an interactive web based visualization for larger graphs
1
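A hedged sketch of the idea above: emit a self-contained HTML report that renders a small graph with vis-network. The CDN URL and the node/edge data are placeholders, not scabbard's actual output format.

```python
import json

# Placeholder dependency graph; scabbard would supply real nodes/edges.
nodes = [{"id": 1, "label": "AppComponent"}, {"id": 2, "label": "NetworkModule"}]
edges = [{"from": 1, "to": 2}]

html = f"""<!DOCTYPE html>
<html><head>
<script src="https://unpkg.com/vis-network/standalone/umd/vis-network.min.js"></script>
</head><body>
<div id="graph" style="height:600px"></div>
<script>
  new vis.Network(document.getElementById("graph"),
    {{nodes: {json.dumps(nodes)}, edges: {json.dumps(edges)}}}, {{}});
</script>
</body></html>"""

with open("graph.html", "w") as f:
    f.write(html)  # open in a browser for an interactive, zoomable graph
```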
404,309
27,457,664,375
IssuesEvent
2023-03-02 22:59:08
CMU-313/spring23-nodebb-specs
https://api.github.com/repos/CMU-313/spring23-nodebb-specs
opened
Add Milestone Description for Sprint 2
documentation
Describe the fulfillment of implementation goals as outlined by the planned milestones, or provide a clearly written justification in the milestone description of why elements fell through.
1.0
Add Milestone Description for Sprint 2 - Describe the fulfillment of implementation goals as outlined by the planned milestones, or provide a clearly written justification in the milestone description of why elements fell through.
non_process
add milestone description for sprint describe the fulfillment of implementation goals as outlined by the planned milestones or provide a clearly written justification in the milestone description of why elements fell through
0
6,364
2,841,541,548
IssuesEvent
2015-05-28 01:03:22
Natman64/Trellonos
https://api.github.com/repos/Natman64/Trellonos
closed
Debug logger
feature testing feature worthy challenge
A logger class that provides well-formatted messages (tabs to show messages from embedded blocks)
1.0
Debug logger - A logger class that provides well-formatted messages (tabs to show messages from embedded blocks)
non_process
debug logger a logger class that provides well formatted messages tabs to show messages from embedded blocks
0
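A minimal Python sketch of such a logger, using a context manager so messages emitted from embedded blocks are indented one level deeper; all names are invented for illustration.

```python
import contextlib

class DebugLogger:
    """Logger that indents messages to reflect nested (embedded) blocks."""

    def __init__(self, tab: str = "  "):
        self.depth = 0
        self.tab = tab

    def log(self, msg: str) -> None:
        print(f"{self.tab * self.depth}{msg}")

    @contextlib.contextmanager
    def block(self, name: str):
        self.log(f"> {name}")
        self.depth += 1
        try:
            yield
        finally:
            self.depth -= 1

log = DebugLogger()
log.log("processing board")
with log.block("embedded block: list sync"):
    log.log("message from inside the block")  # printed one level deeper
```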
14,549
17,668,755,593
IssuesEvent
2021-08-23 00:33:25
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Sand Pirates of the Sahara
suggested title in process
Title: Sand Pirates of the Sahara Type (film/tv show): Film Film or show in which it appears: The Majestic Is the parent film/show streaming anywhere? For rent on Amazon, VUDU, Google Play, Redbox and DirecTV About when in the parent film/show does it appear? Approximately 1:33:00 Actual footage of the film/show can be seen (yes/no)? Yes ![SPotS_Title](https://user-images.githubusercontent.com/89033012/129632679-0784f4a3-9fa3-4571-bd8d-9efaf13fd064.png) ![SPotS_Screeenshot4](https://user-images.githubusercontent.com/89033012/129602297-df37d744-3065-4be3-bdcd-d679f2096929.png) ![SPotS_Screeenshot2](https://user-images.githubusercontent.com/89033012/129602303-1a6127b1-38be-4cb8-9502-c8f425fa5df9.png) ![SPotS_Screeenshot1](https://user-images.githubusercontent.com/89033012/129602304-183942a6-f34b-4adc-b772-c4d7de339204.png) ![SPotS_Screeenshot3](https://user-images.githubusercontent.com/89033012/129602306-f23c8866-9221-4333-8fa2-cf5a2f177203.png) ![SPotS_Screeenshot5](https://user-images.githubusercontent.com/89033012/129602309-f57b3b47-b665-4ca7-ae0f-1b500f200853.png)
1.0
Add Sand Pirates of the Sahara - Title: Sand Pirates of the Sahara Type (film/tv show): Film Film or show in which it appears: The Majestic Is the parent film/show streaming anywhere? For rent on Amazon, VUDU, Google Play, Redbox and DirecTV About when in the parent film/show does it appear? Approximately 1:33:00 Actual footage of the film/show can be seen (yes/no)? Yes ![SPotS_Title](https://user-images.githubusercontent.com/89033012/129632679-0784f4a3-9fa3-4571-bd8d-9efaf13fd064.png) ![SPotS_Screeenshot4](https://user-images.githubusercontent.com/89033012/129602297-df37d744-3065-4be3-bdcd-d679f2096929.png) ![SPotS_Screeenshot2](https://user-images.githubusercontent.com/89033012/129602303-1a6127b1-38be-4cb8-9502-c8f425fa5df9.png) ![SPotS_Screeenshot1](https://user-images.githubusercontent.com/89033012/129602304-183942a6-f34b-4adc-b772-c4d7de339204.png) ![SPotS_Screeenshot3](https://user-images.githubusercontent.com/89033012/129602306-f23c8866-9221-4333-8fa2-cf5a2f177203.png) ![SPotS_Screeenshot5](https://user-images.githubusercontent.com/89033012/129602309-f57b3b47-b665-4ca7-ae0f-1b500f200853.png)
process
add sand pirates of the sahara title sand pirates of the sahara type film tv show film film or show in which it appears the majestic is the parent film show streaming anywhere for rent on amazon vudu google play redbox and directv about when in the parent film show does it appear approximately actual footage of the film show can be seen yes no yes
1
54,615
6,397,777,144
IssuesEvent
2017-08-04 18:50:53
JuliaLang/julia
https://api.github.com/repos/JuliaLang/julia
closed
Test failures and crashes with 0.6.0
test
Running the test suite of Julia 0.6.0, we saw several test failures and errors. The full output of the test suite is copied below (including `versioninfo()`). The libgit2 test failures seem to go away when using libgit2 0.25.1 instead of 0.26.0, but I'm not sure yet what the problem is. Do you think these test failures could indicate real problems, or are they spurious? Please let me know what other information I can provide. ``` $ env JULIA_TEST_MAXRSS_MB=10000 HOME=/tmp /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/bin/julia --check-bounds=yes --startup-file=no -e "Base.runtest s([\"all\"], max(Sys.CPU_CORES, 8))" Test (Worker) | Time (s) | GC (s) | GC % | Alloc (MB) | RSS (MB) WARNING: Method definition ambig(Any, Integer) in module Test9Main_ambiguous at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/ambiguous.jl:7 overwritten at /gnu/stor e/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/ambiguous.jl:85. From worker 2: Skipping Base.<| From worker 2: Skipping Base.active_repl From worker 2: Skipping Base.active_repl_backend From worker 2: Skipping Base.<| From worker 2: Skipping Base.active_repl From worker 2: Skipping Base.active_repl_backend ambiguous (2) | 5.46 | 0.27 | 4.9 | 39.13 | 231.93 linalg/givens (17) | 24.38 | 0.61 | 2.5 | 275.19 | 242.72 linalg/pinv (16) | 35.88 | 0.97 | 2.7 | 834.22 | 346.95 linalg/special (8) | 48.22 | 0.87 | 1.8 | 582.25 | 251.52 linalg/schur (7) | 48.50 | 0.85 | 1.8 | 666.94 | 265.63 linalg/svd (11) | 56.92 | 0.87 | 1.5 | 564.80 | 260.13 linalg/generic (8) | 33.68 | 0.25 | 0.7 | 302.23 | 277.00 linalg/uniformscaling (7) | 33.83 | 0.30 | 0.9 | 407.13 | 320.52 linalg/hessenberg (8) | 14.55 | 0.13 | 0.9 | 176.96 | 295.94 linalg/lapack (12) | 97.06 | 1.35 | 1.4 | 1398.99 | 285.15 linalg/conjarray (8) | 3.80 | 0.02 | 0.4 | 34.90 | 298.19 linalg/tridiag (13) | 107.56 | 1.65 | 1.5 | 1809.29 | 330.27 linalg/symmetric (16) | 74.07 | 0.64 | 0.9 | 891.73 | 380.45 linalg/bunchkaufman (10) | 117.52 | 1.09 | 0.9 | 1095.93 | 277.06 linalg/eigen (9) | 118.15 | 1.15 | 1.0 | 1234.60 | 296.09 linalg/rowvector (7) | 45.50 | 0.32 | 0.7 | 454.94 | 333.14 linalg/bidiag (14) | 128.23 | 1.65 | 1.3 | 1575.76 | 346.30 sparse/spqr (14) | 15.84 | 0.37 | 2.3 | 128.18 | 348.36 sparse/umfpack (9) | 27.35 | 0.14 | 0.5 | 199.72 | 304.25 strings/search (9) | 2.58 | 0.00 | 0.0 | 16.32 | 306.96 strings/util (9) | 1.39 | 0.02 | 1.2 | 9.25 | 307.11 strings/io (9) | 2.52 | 0.02 | 0.7 | 34.60 | 311.30 linalg/arnoldi (12) | 55.74 | 0.57 | 1.0 | 643.48 | 338.54 unicode/UnicodeError (12) | 0.08 | 0.00 | 0.0 | 0.29 | 338.54 strings/types (9) | 2.64 | 0.02 | 0.6 | 19.84 | 311.50 strings/basic (14) | 11.69 | 0.15 | 1.2 | 150.08 | 348.36 unicode/utf8 (9) | 3.20 | 0.36 | 11.3 | 930.43 | 314.91 unicode/utf8proc (12) | 7.45 | 0.09 | 1.2 | 47.61 | 344.36 dates/query (12) | 1.58 | 0.00 | 0.0 | 11.12 | 344.36 dates/adjusters (9) | 4.42 | 0.03 | 0.8 | 41.48 | 317.53 linalg/lu (17) | 141.27 | 1.68 | 1.2 | 1682.91 | 348.21 dates/rounding (17) | 1.54 | 0.00 | 0.0 | 8.67 | 350.30 dates/types (17) | 2.13 | 0.02 | 1.1 | 14.14 | 350.61 dates/accessors (14) | 13.68 | 1.44 | 10.5 | 3478.80 | 348.36 linalg/cholesky (2) | 167.07 | 1.30 | 0.8 | 1488.05 | 309.98 dates/conversions (2) | 1.71 | 0.00 | 0.0 | 13.06 | 310.36 dates/arithmetic (14) | 12.62 | 0.13 | 1.0 | 121.50 | 348.36 sparse/cholmod (7) | 56.97 | 0.88 | 1.5 | 475.48 | 360.45 WARNING: Method definition f265a(Any) in module Test58Main_worlds at 
/gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/worlds.jl:12 overwritten at /gnu/store/26g8sbh008y b1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/worlds.jl:17. dates/ranges (9) | 23.12 | 0.29 | 1.3 | 424.46 | 369.61 linalg/diagonal (15) | 186.59 | 1.87 | 1.0 | 1838.90 | 344.81 worlds (7) | 3.89 | 0.04 | 1.0 | 30.77 | 360.45 dates/periods (12) | 26.69 | 0.44 | 1.7 | 202.50 | 344.36 keywordargs (9) | 3.34 | 0.02 | 0.6 | 14.21 | 369.79 char (9) | 1.86 | 0.02 | 1.1 | 16.77 | 371.84 triplequote (9) | 0.03 | 0.00 | 0.0 | 0.28 | 371.84 intrinsics (9) | 0.51 | 0.00 | 0.0 | 3.36 | 371.88 WARNING: Method definition f() in module JLCall14301 at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:3384 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660i ynmbavp6-julia-0.6.0/share/julia/test/core.jl:3394. linalg/qr (4) | 198.14 | 2.69 | 1.4 | 3049.29 | 396.99 dates/io (17) | 29.68 | 0.19 | 0.6 | 191.54 | 356.41 iobuffer (17) | 5.10 | 0.00 | 0.0 | 15.84 | 363.13 inference (14) | 20.85 | 0.15 | 0.7 | 135.02 | 367.69 WARNING: Method definition test(Type{Tuple{V<:Union{Tuple{Int64, Int64}, Tuple{Int32, Int32}, Tuple{UInt64, UInt64}, Tuple{UInt32, UInt32}, Tuple{Int64, Int64, Int64}, Tuple{Int32, Int32, Int3 2}, Tuple{UInt64, UInt64, UInt64}, Tuple{UInt32, UInt32, UInt32}, Tuple{Int64, Int64, Int64, Int64}, Tuple{Int32, Int32, Int32, Int32}, Tuple{UInt64, UInt64, UInt64, UInt64}, Tuple{UInt32, UIn t32, UInt32, UInt32}}, I<:Union{Int64, Int32, UInt64, UInt32}}}) in module Test26Main_subtype at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/subtype.jl:1093 overwr itten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/subtype.jl:1094. subtype (7) | 16.54 | 0.22 | 1.3 | 276.27 | 414.63 WARNING: Method definition f10178(X) in module Test21Main_staged at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:219 overwritten at /gnu/store/26g8sbh008y b1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:224. WARNING: Method definition g10178(Any) in module Test21Main_staged at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:221 overwritten at /gnu/store/26g8sbh00 8yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:226. staged (17) | 2.34 | 0.02 | 1.0 | 14.02 | 363.34 tuple (17) | 4.61 | 0.05 | 1.1 | 41.48 | 365.75 hashing (4) | 13.31 | 0.11 | 0.9 | 109.34 | 404.73 reduce (17) | 12.14 | 0.11 | 0.9 | 107.28 | 387.20 printf (12) | 35.93 | 0.17 | 0.5 | 199.98 | 347.36 WARNING: static parameter T does not occur in signature for bad_tvars at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4526. The method will not be callable. linalg/lq (11) | 171.85 | 1.71 | 1.0 | 3838.02 | 393.94 WARNING: Method definition (::Type{Test67Main_core.A16424})(Any, Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4729 overwri tten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4734. WARNING: Method definition (::Type{Test67Main_core.B16424{T}})(Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4749 overwritt en at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4753. 
dict (9) | 37.12 | 0.33 | 0.9 | 349.53 | 385.58 WARNING: Method definition (::Type{Test67Main_core.C16424{T, S}})(Any, Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4761 o verwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4766. WARNING: Method definition (::Type{Test67Main_core.C16424{T, S} where S where T})(T, S) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/co re.jl:4761 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4766. WARNING: Method definition (::Type{Test67Main_core.D16424{T<:Real, S<:T<:Real}})(Any, Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test /core.jl:4776 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4781. WARNING: Method definition (::Type{Test67Main_core.D16424{T, S} where S<:T where T<:Real})(Array{S<:T<:Real, 1}, Array{T<:Real, 1}) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y 660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4776 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4781. WARNING: Method definition (::Type{Test67Main_core.T20999})(Array{T, N} where N where T<:Real) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/ test/core.jl:4792 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4796. WARNING: Method definition (::Type{Test67Main_core.T20999})(Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4792 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4796. 
core (2) | 55.65 | 10.27 | 18.5 | 5040.32 | 701.75 intfuncs (11) | 2.51 | 0.01 | 0.4 | 15.84 | 396.78 simdloop (9) | 3.10 | 0.02 | 0.7 | 24.53 | 385.71 linalg/matmul (6) | 243.16 | 2.29 | 0.9 | 3340.12 | 327.48 vecelement (2) | 14.95 | 0.09 | 0.6 | 92.95 | 701.75 copy (6) | 4.94 | 0.02 | 0.3 | 32.76 | 328.19 random (17) | 26.59 | 0.47 | 1.8 | 252.94 | 400.37 reducedim (4) | 39.34 | 0.61 | 1.5 | 372.89 | 411.52 functional (17) | 8.03 | 0.09 | 1.1 | 72.36 | 405.96 fastmath (6) | 11.08 | 0.06 | 0.5 | 65.69 | 336.12 path (6) | 3.57 | 0.02 | 0.6 | 15.42 | 350.03 operators (17) | 7.13 | 0.07 | 1.0 | 83.60 | 413.77 parse (17) | 8.17 | 0.07 | 0.8 | 64.21 | 420.23 loading (17) | 0.61 | 0.03 | 4.3 | 4.21 | 420.26 bigint (17) | 3.89 | 0.03 | 0.8 | 35.03 | 424.68 blas (11) | 46.92 | 0.73 | 1.6 | 528.90 | 442.54 bigfloat (17) | 0.49 | 0.00 | 0.0 | 3.84 | 424.70 ccall (6) | 16.95 | 0.08 | 0.5 | 84.85 | 359.08 From worker 6: [stdio passthrough ok] math (2) | 44.87 | 0.94 | 2.1 | 1258.60 | 701.75 linalg/dense (5) | 293.41 | 3.50 | 1.2 | 3789.25 | 399.24 iterators (4) | 52.80 | 0.41 | 0.8 | 370.96 | 413.70 spawn (6) | 26.08 | 0.08 | 0.3 | 83.00 | 361.63 statistics (17) | 30.06 | 0.41 | 1.4 | 310.43 | 440.69 version (6) | 4.28 | 0.06 | 1.4 | 69.39 | 374.01 numbers (15) | 125.32 | 1.64 | 1.3 | 1509.89 | 451.63 pollfd (6) | 5.10 | 0.25 | 4.9 | 13.39 | 374.07 mpfr (15) | 5.10 | 0.06 | 1.2 | 27.94 | 456.13 read (5) | 24.98 | 0.56 | 2.2 | 310.27 | 458.05 mmap (4) | 16.01 | 9.25 | 57.7 | 49.57 | 413.71 floatapprox (4) | 2.45 | 0.00 | 0.0 | 19.41 | 413.71 socket (5) | 4.14 | 0.02 | 0.5 | 22.58 | 458.05 sparse/sparse (13) | 223.35 | 41.85 | 18.7 | 1428.78 | 418.87 abstractarray (12) | 105.67 | 1.23 | 1.2 | 1010.83 | 417.84 regex (13) | 1.07 | 0.00 | 0.0 | 6.87 | 418.87 datafmt (4) | 10.92 | 0.12 | 1.1 | 106.64 | 413.71 float16 (12) | 2.50 | 0.03 | 1.3 | 15.62 | 420.42 file (2) | 42.23 | 4.19 | 9.9 | 46.27 | 701.75 combinatorics (13) | 2.30 | 0.03 | 1.1 | 13.56 | 418.87 sysinfo (4) | 1.19 | 0.03 | 2.3 | 12.51 | 413.71 env (12) | 0.96 | 0.00 | 0.0 | 5.59 | 421.17 mod2pi (4) | 0.66 | 0.00 | 0.0 | 4.41 | 413.71 rounding (2) | 1.53 | 0.00 | 0.0 | 9.78 | 701.75 euler (12) | 2.12 | 0.06 | 2.8 | 31.46 | 428.81 reflection (5) | 14.52 | 0.15 | 1.0 | 135.36 | 460.11 offsetarray (14) | 132.54 | 1.48 | 1.1 | 1192.71 | 495.83 complex (15) | 24.04 | 0.18 | 0.7 | 137.63 | 478.52 lineedit (2) | 7.96 | 0.08 | 1.1 | 71.74 | 701.75 replcompletions (12) | 8.86 | 0.39 | 4.4 | 67.67 | 429.05 goto (12) | 0.05 | 0.00 | 0.0 | 0.27 | 429.05 llvmcall (12) | 0.55 | 0.00 | 0.0 | 2.67 | 429.05 llvmcall2 (12) | 0.05 | 0.00 | 0.0 | 0.24 | 429.05 resolve (17) | 35.81 | 2.17 | 6.1 | 2439.37 | 505.95 sets (15) | 7.71 | 0.06 | 0.8 | 52.39 | 480.75 meta (15) | 1.09 | 0.03 | 3.0 | 5.76 | 484.07 grisu (12) | 4.20 | 0.03 | 0.7 | 32.69 | 429.05 stacktraces (15) | 3.87 | 0.03 | 0.8 | 28.11 | 486.47 sparse/sparsevector (16) | 244.96 | 1.90 | 0.8 | 1664.31 | 472.65 profile (12) | 8.29 | 0.09 | 1.0 | 56.50 | 431.62 base64 (14) | 0.55 | 0.00 | 0.0 | 2.31 | 507.70 repl (5) | 24.61 | 0.23 | 0.9 | 208.87 | 473.89 docs (16) | 9.98 | 0.17 | 1.7 | 139.99 | 472.65 Warning: threaded loop executed in order markdown (12) | 10.36 | 0.12 | 1.1 | 80.27 | 433.44 show (4) | 34.92 | 0.32 | 0.9 | 271.51 | 432.83 serialize (14) | 10.80 | 0.15 | 1.4 | 93.93 | 510.01 i18n (14) | 0.03 | 0.00 | 0.0 | 0.07 | 510.01 sorting (11) | 94.39 | 0.53 | 0.6 | 532.89 | 444.80 broadcast (6) | 58.12 | 0.50 | 0.9 | 606.16 | 374.07 test (2) | 27.62 | 0.15 | 0.5 | 108.01 | 701.75 libdl (11) | 1.03 
| 0.00 | 0.0 | 4.39 | 447.24 threads (16) | 11.60 | 0.34 | 2.9 | 93.34 | 477.59 workspace (14) | 5.96 | 0.00 | 0.0 | 10.76 | 510.04 intset (11) | 6.26 | 0.03 | 0.5 | 28.90 | 447.27 int (6) | 8.55 | 0.05 | 0.6 | 44.28 | 379.12 error (6) | 1.95 | 0.03 | 1.3 | 7.78 | 381.36 cartesian (6) | 0.01 | 0.00 | 0.0 | 0.05 | 381.36 asmvariant (6) | 0.04 | 0.00 | 0.0 | 0.10 | 381.36 osutils (6) | 0.04 | 0.00 | 0.0 | 0.25 | 381.36 inline (14) | 6.28 | 0.00 | 0.0 | 4.95 | 510.46 enums (12) | 15.18 | 0.23 | 1.5 | 166.51 | 443.57 WARNING: Method definition f(Tuple{Vararg{Int64, N}}, AbstractArray{T, N}) in module Test58Main_specificity at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/specific ity.jl:87 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/specificity.jl:93. iostream (14) | 0.53 | 0.04 | 7.2 | 3.02 | 510.46 specificity (12) | 0.17 | 0.00 | 0.0 | 0.88 | 443.68 checked (2) | 11.92 | 0.12 | 1.0 | 54.56 | 701.75 floatfuncs (16) | 11.41 | 0.15 | 1.3 | 128.50 | 489.27 boundscheck (11) | 13.66 | 0.00 | 0.0 | 2.52 | 447.27 arrayops (7) | 190.08 | 2.75 | 1.4 | 1811.71 | 572.80 nullable (17) | 51.88 | 0.48 | 0.9 | 390.88 | 522.82 channels (6) | 15.42 | 1.61 | 10.4 | 455.07 | 558.50 dsp (12) | 17.14 | 0.44 | 2.6 | 314.56 | 461.06 WARNING: readuntil(IO,AbstractString) will perform poorly with a long string misc (5) | 44.25 | 2.06 | 4.7 | 1418.72 | 543.63 WARNING: readuntil(IO,AbstractString) will perform poorly with a long string WARNING: readuntil(IO,AbstractString) will perform poorly with a long string examples (2) | 26.56 | 0.69 | 2.6 | 628.06 | 701.75 fft (14) | 32.36 | 0.88 | 2.7 | 565.71 | 536.65 bitarray (9) | 185.25 | 2.60 | 1.4 | 2752.79 | 487.18 sparse/higherorderfns (10) | 300.80 | 2.71 | 0.9 | 3549.06 | 425.82 ranges (13) | 99.47 | 4.51 | 4.5 | 7130.23 | 453.54 subarray (8) | 410.52 | 9.36 | 2.3 | 5678.40 | 710.36 linalg/triangular (3) | 531.20 | 13.73 | 2.6 | 9086.70 | 706.64 From worker 1: compile: Test Failed Expression: (Base.Test.ismatch_warn)("ERROR: LoadError: Declaring __precompile__(false) is not allowed in files that are being precompiled.\nStacktrace:\n [1] __precompile__", (Base.Test.rea dstring)(#107#fname)) Stacktrace: [1] macro expansion at ./test.jl:433 [inlined] [2] (::Test26Main_compile.##1#13)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl:256 compile: Test Failed Expression: (Base.Test.ismatch_warn)("ERROR: LoadError: break me\nStacktrace:\n [1] error", (Base.Test.readstring)(#167#fname)) Stacktrace: [1] macro expansion at ./test.jl:433 [inlined] [2] (::Test26Main_compile.##1#13)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl:335 compile: Error During Test Got an exception of type LoadError outside of a @test LoadError: open: permission denied (EACCES) Stacktrace: [1] uv_error at ./libuv.jl:68 [inlined] [2] open(::String, ::UInt16, ::UInt16) at ./filesystem.jl:81 [3] touch(::String) at ./file.jl:248 [4] (::Test26Main_compile.##1#13)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl:532 [5] withenv(::Test26Main_compile.##1#13, ::Pair{String,Void}, ::Vararg{Pair{String,Void},N} where N) at ./env.jl:157 [6] macro expansion at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/testdefs.jl:18 [inlined] [7] macro expansion at ./test.jl:860 [inlined] [8] macro expansion at ./util.jl:378 [inlined] [9] macro expansion at 
/gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/testdefs.jl:17 [inlined] [10] anonymous at ./<missing>:? [11] runtests(::String, ::Bool) at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/testdefs.jl:21 [12] (::##45#51)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:103 [13] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:103 [14] cd(::##40#46, ::String) at ./file.jl:70 while loading /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl, in expression starting on line 17 From worker 1: Worker 2 failed running test backtrace: Some tests did not pass: 20 passed, 1 failed, 0 errored, 1 broken.backtrace: Test Failed Expression: have_backtrace Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 Worker 14 failed running test replutil: Some tests did not pass: 166 passed, 1 failed, 0 errored, 0 broken.replutil: Test Failed Expression: contains(err_str, "Cannot raise an integer x to a negative power -n") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 Worker 15 failed running test libgit2: Some tests did not pass: 391 passed, 2 failed, 0 errored, 0 broken.libgit2: Test Failed Expression: startswith(sprint(show, e), "GitError(Code:ENOTFOUND, Class:OS, Failed to resolve path") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 libgit2: Test Failed Expression: err.msg == "Invalid Content-Type: text/plain" Evaluated: "invalid Content-Type: text/plain" == "Invalid Content-Type: text/plain" Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 Worker 4 failed running test cmdlineargs: Some tests did not pass: 133 passed, 2 failed, 2 errored, 0 broken.cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1(::String) at $(joinpath(".", "loading.jl"))") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: length(lno.captures) == 1 type Void has no field captures cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: parse(Int, lno.captures[1]) > 0 type Void has no field captures compile: Error During Test Test threw an exception of type Base.Test.TestSetException 
Expression: compile Some tests did not pass: 84 passed, 2 failed, 1 errored, 0 broken. Test Summary: | Pass Fail Error Broken Total Overall | 21627577 6 3 1311352 22938938 ambiguous | 51 51 linalg/givens | 1552 1552 linalg/pinv | 232 232 linalg/special | 942 942 linalg/schur | 300 300 linalg/svd | 244 244 linalg/generic | 206 206 linalg/uniformscaling | 247 247 linalg/hessenberg | 40 40 linalg/lapack | 628 628 linalg/conjarray | 10 10 linalg/tridiag | 843 843 linalg/symmetric | 1386 1386 linalg/bunchkaufman | 2519 2519 linalg/eigen | 381 381 linalg/rowvector | 135 135 linalg/bidiag | 1592 1592 sparse/spqr | 53 53 sparse/umfpack | 137 137 strings/search | 549 549 strings/util | 341 341 strings/io | 12536 12536 linalg/arnoldi | 76 76 unicode/UnicodeError | 1 1 strings/types | 8905 8905 strings/basic | 26976 26976 unicode/utf8 | 1048595 1048595 unicode/utf8proc | 765 765 dates/query | 988 988 dates/adjusters | 3147 3147 linalg/lu | 1149 1149 dates/rounding | 157 157 dates/types | 167 167 dates/accessors | 7723858 7723858 linalg/cholesky | 2104 2104 dates/conversions | 159 159 dates/arithmetic | 312 312 sparse/cholmod | 359 359 dates/ranges | 348530 348530 linalg/diagonal | 1140 1140 worlds | 61 61 dates/periods | 316 316 keywordargs | 109 109 char | 1707 1707 triplequote | 28 28 intrinsics | 29 29 linalg/qr | 3054 3054 dates/io | 241 241 iobuffer | 135 135 inference | 167 167 subtype | 337157 15 337172 staged | 48 48 tuple | 382 382 hashing | 14453 14453 reduce | 235 235 printf | 596 596 linalg/lq | 1360 1360 dict | 134165 134165 core | 35531 35531 intfuncs | 4137 4137 simdloop | 207 207 linalg/matmul | 527 527 vecelement | 533 533 copy | 312 312 random | 200711 200711 reducedim | 461 461 functional | 61 61 fastmath | 790 790 path | 208 12 220 operators | 59 59 parse | 1094 1094 loading | 23 23 bigint | 2391 2391 blas | 486 486 bigfloat | 24 24 ccall | 3852 3852 math | 551275 551275 backtrace | 20 1 1 22 linalg/dense | 6324 6324 iterators | 916 916 spawn | 107 4 111 statistics | 357 357 version | 124465 124465 numbers | 1477170 1477170 pollfd | 344 344 mpfr | 702 702 read | 2071 2071 mmap | 131 131 floatapprox | 49 49 socket | 77 77 sparse/sparse | 1518 1518 abstractarray | 2465 2465 regex | 34 34 datafmt | 83 83 float16 | 124 124 file | 804 804 combinatorics | 44 44 sysinfo | 2 2 env | 61 61 mod2pi | 9 9 rounding | 327 327 euler | 12 12 reflection | 275 275 offsetarray | 726 726 complex | 1094 1094 lineedit | 175 175 replcompletions | 268 2 270 goto | 13 13 llvmcall | 13 13 llvmcall2 | 6 6 resolve | 2648 2648 sets | 216 216 meta | 36 36 grisu | 683 683 stacktraces | 42 42 sparse/sparsevector | 8163 8163 profile | 7 7 replutil | 166 1 167 base64 | 9 9 repl | 113 113 docs | 193 193 markdown | 210 210 show | 146 146 serialize | 90 90 i18n | 2 2 sorting | 4864 4864 broadcast | 306 306 test | 201 14 215 libdl | 198 198 threads | 190563 190563 workspace | 1 1 intset | 148 148 int | 10138 10138 error | 28 28 cartesian | 2 2 asmvariant | 3 3 osutils | 21 21 inline | 23 23 enums | 79 79 iostream | 21 21 specificity | 100 100 checked | 1211 1211 floatfuncs | 122 122 boundscheck | No tests arrayops | 1556 1556 nullable | 98030 98030 channels | 187 187 dsp | 365 365 misc | 1279536 1279536 examples | 22 22 fft | 2026 2026 bitarray | 892433 892433 sparse/higherorderfns | 6377 584 6961 libgit2 | 391 2 393 ranges | 6981911 1310720 8292631 subarray | 200 200 cmdlineargs | 133 2 2 137 linalg/triangular | 33634 33634 compile | 1 1 distributed | No tests FAILURE Error in testset backtrace: Test Failed 
Expression: have_backtrace Error in testset replutil: Test Failed Expression: contains(err_str, "Cannot raise an integer x to a negative power -n") Error in testset libgit2: Test Failed Expression: startswith(sprint(show, e), "GitError(Code:ENOTFOUND, Class:OS, Failed to resolve path") Error in testset libgit2: Test Failed Expression: err.msg == "Invalid Content-Type: text/plain" Evaluated: "invalid Content-Type: text/plain" == "Invalid Content-Type: text/plain" Error in testset cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1") Error in testset cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1(::String) at $(joinpath(".", "loading.jl"))") Error in testset cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: length(lno.captures) == 1 type Void has no field captures Error in testset cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: parse(Int, lno.captures[1]) > 0 type Void has no field captures Error in testset compile: Error During Test Test threw an exception of type Base.Test.TestSetException Expression: compile Some tests did not pass: 84 passed, 2 failed, 1 errored, 0 broken. ERROR: LoadError: Test run finished with errors while loading /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl, in expression starting on line 29 ERROR: A test has failed. Please submit a bug report (https://github.com/JuliaLang/julia/issues) including error messages above and the output of versioninfo(): Julia Version 0.6.0 Commit 903644385b* (2017-06-19 13:05 UTC) Platform Info: OS: Linux (x86_64-unknown-linux-gnu) CPU: Intel(R) Xeon(R) CPU @ 2.50GHz WORD_SIZE: 64 BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Sandybridge) LAPACK: liblapack LIBM: libopenlibm LLVM: libLLVM-3.8.1 (ORCJIT, ivybridge) Stacktrace: [1] runtests(::Array{String,1}, ::Int64) at ./interactiveutil.jl:670 ```
1.0
Test failures and crashes with 0.6.0 - Running the test suite of Julia 0.6.0, we saw several test failures and errors. The full output of the test suite is copied below (including `versioninfo()`). The libgit2 test failures seem to go away when using libgit2 0.25.1 instead of 0.26.0, but I'm not sure yet what the problem is. Do you think these test failures could indicate real problems, or are they spurious? Please let me know what other information I can provide. ``` $ env JULIA_TEST_MAXRSS_MB=10000 HOME=/tmp /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/bin/julia --check-bounds=yes --startup-file=no -e "Base.runtest s([\"all\"], max(Sys.CPU_CORES, 8))" Test (Worker) | Time (s) | GC (s) | GC % | Alloc (MB) | RSS (MB) WARNING: Method definition ambig(Any, Integer) in module Test9Main_ambiguous at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/ambiguous.jl:7 overwritten at /gnu/stor e/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/ambiguous.jl:85. From worker 2: Skipping Base.<| From worker 2: Skipping Base.active_repl From worker 2: Skipping Base.active_repl_backend From worker 2: Skipping Base.<| From worker 2: Skipping Base.active_repl From worker 2: Skipping Base.active_repl_backend ambiguous (2) | 5.46 | 0.27 | 4.9 | 39.13 | 231.93 linalg/givens (17) | 24.38 | 0.61 | 2.5 | 275.19 | 242.72 linalg/pinv (16) | 35.88 | 0.97 | 2.7 | 834.22 | 346.95 linalg/special (8) | 48.22 | 0.87 | 1.8 | 582.25 | 251.52 linalg/schur (7) | 48.50 | 0.85 | 1.8 | 666.94 | 265.63 linalg/svd (11) | 56.92 | 0.87 | 1.5 | 564.80 | 260.13 linalg/generic (8) | 33.68 | 0.25 | 0.7 | 302.23 | 277.00 linalg/uniformscaling (7) | 33.83 | 0.30 | 0.9 | 407.13 | 320.52 linalg/hessenberg (8) | 14.55 | 0.13 | 0.9 | 176.96 | 295.94 linalg/lapack (12) | 97.06 | 1.35 | 1.4 | 1398.99 | 285.15 linalg/conjarray (8) | 3.80 | 0.02 | 0.4 | 34.90 | 298.19 linalg/tridiag (13) | 107.56 | 1.65 | 1.5 | 1809.29 | 330.27 linalg/symmetric (16) | 74.07 | 0.64 | 0.9 | 891.73 | 380.45 linalg/bunchkaufman (10) | 117.52 | 1.09 | 0.9 | 1095.93 | 277.06 linalg/eigen (9) | 118.15 | 1.15 | 1.0 | 1234.60 | 296.09 linalg/rowvector (7) | 45.50 | 0.32 | 0.7 | 454.94 | 333.14 linalg/bidiag (14) | 128.23 | 1.65 | 1.3 | 1575.76 | 346.30 sparse/spqr (14) | 15.84 | 0.37 | 2.3 | 128.18 | 348.36 sparse/umfpack (9) | 27.35 | 0.14 | 0.5 | 199.72 | 304.25 strings/search (9) | 2.58 | 0.00 | 0.0 | 16.32 | 306.96 strings/util (9) | 1.39 | 0.02 | 1.2 | 9.25 | 307.11 strings/io (9) | 2.52 | 0.02 | 0.7 | 34.60 | 311.30 linalg/arnoldi (12) | 55.74 | 0.57 | 1.0 | 643.48 | 338.54 unicode/UnicodeError (12) | 0.08 | 0.00 | 0.0 | 0.29 | 338.54 strings/types (9) | 2.64 | 0.02 | 0.6 | 19.84 | 311.50 strings/basic (14) | 11.69 | 0.15 | 1.2 | 150.08 | 348.36 unicode/utf8 (9) | 3.20 | 0.36 | 11.3 | 930.43 | 314.91 unicode/utf8proc (12) | 7.45 | 0.09 | 1.2 | 47.61 | 344.36 dates/query (12) | 1.58 | 0.00 | 0.0 | 11.12 | 344.36 dates/adjusters (9) | 4.42 | 0.03 | 0.8 | 41.48 | 317.53 linalg/lu (17) | 141.27 | 1.68 | 1.2 | 1682.91 | 348.21 dates/rounding (17) | 1.54 | 0.00 | 0.0 | 8.67 | 350.30 dates/types (17) | 2.13 | 0.02 | 1.1 | 14.14 | 350.61 dates/accessors (14) | 13.68 | 1.44 | 10.5 | 3478.80 | 348.36 linalg/cholesky (2) | 167.07 | 1.30 | 0.8 | 1488.05 | 309.98 dates/conversions (2) | 1.71 | 0.00 | 0.0 | 13.06 | 310.36 dates/arithmetic (14) | 12.62 | 0.13 | 1.0 | 121.50 | 348.36 sparse/cholmod (7) | 56.97 | 0.88 | 1.5 | 475.48 | 360.45 WARNING: Method definition f265a(Any) in module Test58Main_worlds at 
/gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/worlds.jl:12 overwritten at /gnu/store/26g8sbh008y b1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/worlds.jl:17. dates/ranges (9) | 23.12 | 0.29 | 1.3 | 424.46 | 369.61 linalg/diagonal (15) | 186.59 | 1.87 | 1.0 | 1838.90 | 344.81 worlds (7) | 3.89 | 0.04 | 1.0 | 30.77 | 360.45 dates/periods (12) | 26.69 | 0.44 | 1.7 | 202.50 | 344.36 keywordargs (9) | 3.34 | 0.02 | 0.6 | 14.21 | 369.79 char (9) | 1.86 | 0.02 | 1.1 | 16.77 | 371.84 triplequote (9) | 0.03 | 0.00 | 0.0 | 0.28 | 371.84 intrinsics (9) | 0.51 | 0.00 | 0.0 | 3.36 | 371.88 WARNING: Method definition f() in module JLCall14301 at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:3384 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660i ynmbavp6-julia-0.6.0/share/julia/test/core.jl:3394. linalg/qr (4) | 198.14 | 2.69 | 1.4 | 3049.29 | 396.99 dates/io (17) | 29.68 | 0.19 | 0.6 | 191.54 | 356.41 iobuffer (17) | 5.10 | 0.00 | 0.0 | 15.84 | 363.13 inference (14) | 20.85 | 0.15 | 0.7 | 135.02 | 367.69 WARNING: Method definition test(Type{Tuple{V<:Union{Tuple{Int64, Int64}, Tuple{Int32, Int32}, Tuple{UInt64, UInt64}, Tuple{UInt32, UInt32}, Tuple{Int64, Int64, Int64}, Tuple{Int32, Int32, Int3 2}, Tuple{UInt64, UInt64, UInt64}, Tuple{UInt32, UInt32, UInt32}, Tuple{Int64, Int64, Int64, Int64}, Tuple{Int32, Int32, Int32, Int32}, Tuple{UInt64, UInt64, UInt64, UInt64}, Tuple{UInt32, UIn t32, UInt32, UInt32}}, I<:Union{Int64, Int32, UInt64, UInt32}}}) in module Test26Main_subtype at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/subtype.jl:1093 overwr itten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/subtype.jl:1094. subtype (7) | 16.54 | 0.22 | 1.3 | 276.27 | 414.63 WARNING: Method definition f10178(X) in module Test21Main_staged at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:219 overwritten at /gnu/store/26g8sbh008y b1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:224. WARNING: Method definition g10178(Any) in module Test21Main_staged at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:221 overwritten at /gnu/store/26g8sbh00 8yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/staged.jl:226. staged (17) | 2.34 | 0.02 | 1.0 | 14.02 | 363.34 tuple (17) | 4.61 | 0.05 | 1.1 | 41.48 | 365.75 hashing (4) | 13.31 | 0.11 | 0.9 | 109.34 | 404.73 reduce (17) | 12.14 | 0.11 | 0.9 | 107.28 | 387.20 printf (12) | 35.93 | 0.17 | 0.5 | 199.98 | 347.36 WARNING: static parameter T does not occur in signature for bad_tvars at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4526. The method will not be callable. linalg/lq (11) | 171.85 | 1.71 | 1.0 | 3838.02 | 393.94 WARNING: Method definition (::Type{Test67Main_core.A16424})(Any, Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4729 overwri tten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4734. WARNING: Method definition (::Type{Test67Main_core.B16424{T}})(Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4749 overwritt en at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4753. 
dict (9) | 37.12 | 0.33 | 0.9 | 349.53 | 385.58 WARNING: Method definition (::Type{Test67Main_core.C16424{T, S}})(Any, Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4761 o verwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4766. WARNING: Method definition (::Type{Test67Main_core.C16424{T, S} where S where T})(T, S) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/co re.jl:4761 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4766. WARNING: Method definition (::Type{Test67Main_core.D16424{T<:Real, S<:T<:Real}})(Any, Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test /core.jl:4776 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4781. WARNING: Method definition (::Type{Test67Main_core.D16424{T, S} where S<:T where T<:Real})(Array{S<:T<:Real, 1}, Array{T<:Real, 1}) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y 660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4776 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4781. WARNING: Method definition (::Type{Test67Main_core.T20999})(Array{T, N} where N where T<:Real) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/ test/core.jl:4792 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4796. WARNING: Method definition (::Type{Test67Main_core.T20999})(Any) in module Test67Main_core at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4792 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/core.jl:4796. 
core (2) | 55.65 | 10.27 | 18.5 | 5040.32 | 701.75 intfuncs (11) | 2.51 | 0.01 | 0.4 | 15.84 | 396.78 simdloop (9) | 3.10 | 0.02 | 0.7 | 24.53 | 385.71 linalg/matmul (6) | 243.16 | 2.29 | 0.9 | 3340.12 | 327.48 vecelement (2) | 14.95 | 0.09 | 0.6 | 92.95 | 701.75 copy (6) | 4.94 | 0.02 | 0.3 | 32.76 | 328.19 random (17) | 26.59 | 0.47 | 1.8 | 252.94 | 400.37 reducedim (4) | 39.34 | 0.61 | 1.5 | 372.89 | 411.52 functional (17) | 8.03 | 0.09 | 1.1 | 72.36 | 405.96 fastmath (6) | 11.08 | 0.06 | 0.5 | 65.69 | 336.12 path (6) | 3.57 | 0.02 | 0.6 | 15.42 | 350.03 operators (17) | 7.13 | 0.07 | 1.0 | 83.60 | 413.77 parse (17) | 8.17 | 0.07 | 0.8 | 64.21 | 420.23 loading (17) | 0.61 | 0.03 | 4.3 | 4.21 | 420.26 bigint (17) | 3.89 | 0.03 | 0.8 | 35.03 | 424.68 blas (11) | 46.92 | 0.73 | 1.6 | 528.90 | 442.54 bigfloat (17) | 0.49 | 0.00 | 0.0 | 3.84 | 424.70 ccall (6) | 16.95 | 0.08 | 0.5 | 84.85 | 359.08 From worker 6: [stdio passthrough ok] math (2) | 44.87 | 0.94 | 2.1 | 1258.60 | 701.75 linalg/dense (5) | 293.41 | 3.50 | 1.2 | 3789.25 | 399.24 iterators (4) | 52.80 | 0.41 | 0.8 | 370.96 | 413.70 spawn (6) | 26.08 | 0.08 | 0.3 | 83.00 | 361.63 statistics (17) | 30.06 | 0.41 | 1.4 | 310.43 | 440.69 version (6) | 4.28 | 0.06 | 1.4 | 69.39 | 374.01 numbers (15) | 125.32 | 1.64 | 1.3 | 1509.89 | 451.63 pollfd (6) | 5.10 | 0.25 | 4.9 | 13.39 | 374.07 mpfr (15) | 5.10 | 0.06 | 1.2 | 27.94 | 456.13 read (5) | 24.98 | 0.56 | 2.2 | 310.27 | 458.05 mmap (4) | 16.01 | 9.25 | 57.7 | 49.57 | 413.71 floatapprox (4) | 2.45 | 0.00 | 0.0 | 19.41 | 413.71 socket (5) | 4.14 | 0.02 | 0.5 | 22.58 | 458.05 sparse/sparse (13) | 223.35 | 41.85 | 18.7 | 1428.78 | 418.87 abstractarray (12) | 105.67 | 1.23 | 1.2 | 1010.83 | 417.84 regex (13) | 1.07 | 0.00 | 0.0 | 6.87 | 418.87 datafmt (4) | 10.92 | 0.12 | 1.1 | 106.64 | 413.71 float16 (12) | 2.50 | 0.03 | 1.3 | 15.62 | 420.42 file (2) | 42.23 | 4.19 | 9.9 | 46.27 | 701.75 combinatorics (13) | 2.30 | 0.03 | 1.1 | 13.56 | 418.87 sysinfo (4) | 1.19 | 0.03 | 2.3 | 12.51 | 413.71 env (12) | 0.96 | 0.00 | 0.0 | 5.59 | 421.17 mod2pi (4) | 0.66 | 0.00 | 0.0 | 4.41 | 413.71 rounding (2) | 1.53 | 0.00 | 0.0 | 9.78 | 701.75 euler (12) | 2.12 | 0.06 | 2.8 | 31.46 | 428.81 reflection (5) | 14.52 | 0.15 | 1.0 | 135.36 | 460.11 offsetarray (14) | 132.54 | 1.48 | 1.1 | 1192.71 | 495.83 complex (15) | 24.04 | 0.18 | 0.7 | 137.63 | 478.52 lineedit (2) | 7.96 | 0.08 | 1.1 | 71.74 | 701.75 replcompletions (12) | 8.86 | 0.39 | 4.4 | 67.67 | 429.05 goto (12) | 0.05 | 0.00 | 0.0 | 0.27 | 429.05 llvmcall (12) | 0.55 | 0.00 | 0.0 | 2.67 | 429.05 llvmcall2 (12) | 0.05 | 0.00 | 0.0 | 0.24 | 429.05 resolve (17) | 35.81 | 2.17 | 6.1 | 2439.37 | 505.95 sets (15) | 7.71 | 0.06 | 0.8 | 52.39 | 480.75 meta (15) | 1.09 | 0.03 | 3.0 | 5.76 | 484.07 grisu (12) | 4.20 | 0.03 | 0.7 | 32.69 | 429.05 stacktraces (15) | 3.87 | 0.03 | 0.8 | 28.11 | 486.47 sparse/sparsevector (16) | 244.96 | 1.90 | 0.8 | 1664.31 | 472.65 profile (12) | 8.29 | 0.09 | 1.0 | 56.50 | 431.62 base64 (14) | 0.55 | 0.00 | 0.0 | 2.31 | 507.70 repl (5) | 24.61 | 0.23 | 0.9 | 208.87 | 473.89 docs (16) | 9.98 | 0.17 | 1.7 | 139.99 | 472.65 Warning: threaded loop executed in order markdown (12) | 10.36 | 0.12 | 1.1 | 80.27 | 433.44 show (4) | 34.92 | 0.32 | 0.9 | 271.51 | 432.83 serialize (14) | 10.80 | 0.15 | 1.4 | 93.93 | 510.01 i18n (14) | 0.03 | 0.00 | 0.0 | 0.07 | 510.01 sorting (11) | 94.39 | 0.53 | 0.6 | 532.89 | 444.80 broadcast (6) | 58.12 | 0.50 | 0.9 | 606.16 | 374.07 test (2) | 27.62 | 0.15 | 0.5 | 108.01 | 701.75 libdl (11) | 1.03 
| 0.00 | 0.0 | 4.39 | 447.24 threads (16) | 11.60 | 0.34 | 2.9 | 93.34 | 477.59 workspace (14) | 5.96 | 0.00 | 0.0 | 10.76 | 510.04 intset (11) | 6.26 | 0.03 | 0.5 | 28.90 | 447.27 int (6) | 8.55 | 0.05 | 0.6 | 44.28 | 379.12 error (6) | 1.95 | 0.03 | 1.3 | 7.78 | 381.36 cartesian (6) | 0.01 | 0.00 | 0.0 | 0.05 | 381.36 asmvariant (6) | 0.04 | 0.00 | 0.0 | 0.10 | 381.36 osutils (6) | 0.04 | 0.00 | 0.0 | 0.25 | 381.36 inline (14) | 6.28 | 0.00 | 0.0 | 4.95 | 510.46 enums (12) | 15.18 | 0.23 | 1.5 | 166.51 | 443.57 WARNING: Method definition f(Tuple{Vararg{Int64, N}}, AbstractArray{T, N}) in module Test58Main_specificity at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/specific ity.jl:87 overwritten at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/specificity.jl:93. iostream (14) | 0.53 | 0.04 | 7.2 | 3.02 | 510.46 specificity (12) | 0.17 | 0.00 | 0.0 | 0.88 | 443.68 checked (2) | 11.92 | 0.12 | 1.0 | 54.56 | 701.75 floatfuncs (16) | 11.41 | 0.15 | 1.3 | 128.50 | 489.27 boundscheck (11) | 13.66 | 0.00 | 0.0 | 2.52 | 447.27 arrayops (7) | 190.08 | 2.75 | 1.4 | 1811.71 | 572.80 nullable (17) | 51.88 | 0.48 | 0.9 | 390.88 | 522.82 channels (6) | 15.42 | 1.61 | 10.4 | 455.07 | 558.50 dsp (12) | 17.14 | 0.44 | 2.6 | 314.56 | 461.06 WARNING: readuntil(IO,AbstractString) will perform poorly with a long string misc (5) | 44.25 | 2.06 | 4.7 | 1418.72 | 543.63 WARNING: readuntil(IO,AbstractString) will perform poorly with a long string WARNING: readuntil(IO,AbstractString) will perform poorly with a long string examples (2) | 26.56 | 0.69 | 2.6 | 628.06 | 701.75 fft (14) | 32.36 | 0.88 | 2.7 | 565.71 | 536.65 bitarray (9) | 185.25 | 2.60 | 1.4 | 2752.79 | 487.18 sparse/higherorderfns (10) | 300.80 | 2.71 | 0.9 | 3549.06 | 425.82 ranges (13) | 99.47 | 4.51 | 4.5 | 7130.23 | 453.54 subarray (8) | 410.52 | 9.36 | 2.3 | 5678.40 | 710.36 linalg/triangular (3) | 531.20 | 13.73 | 2.6 | 9086.70 | 706.64 From worker 1: compile: Test Failed Expression: (Base.Test.ismatch_warn)("ERROR: LoadError: Declaring __precompile__(false) is not allowed in files that are being precompiled.\nStacktrace:\n [1] __precompile__", (Base.Test.rea dstring)(#107#fname)) Stacktrace: [1] macro expansion at ./test.jl:433 [inlined] [2] (::Test26Main_compile.##1#13)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl:256 compile: Test Failed Expression: (Base.Test.ismatch_warn)("ERROR: LoadError: break me\nStacktrace:\n [1] error", (Base.Test.readstring)(#167#fname)) Stacktrace: [1] macro expansion at ./test.jl:433 [inlined] [2] (::Test26Main_compile.##1#13)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl:335 compile: Error During Test Got an exception of type LoadError outside of a @test LoadError: open: permission denied (EACCES) Stacktrace: [1] uv_error at ./libuv.jl:68 [inlined] [2] open(::String, ::UInt16, ::UInt16) at ./filesystem.jl:81 [3] touch(::String) at ./file.jl:248 [4] (::Test26Main_compile.##1#13)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl:532 [5] withenv(::Test26Main_compile.##1#13, ::Pair{String,Void}, ::Vararg{Pair{String,Void},N} where N) at ./env.jl:157 [6] macro expansion at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/testdefs.jl:18 [inlined] [7] macro expansion at ./test.jl:860 [inlined] [8] macro expansion at ./util.jl:378 [inlined] [9] macro expansion at 
/gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/testdefs.jl:17 [inlined] [10] anonymous at ./<missing>:? [11] runtests(::String, ::Bool) at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/testdefs.jl:21 [12] (::##45#51)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:103 [13] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:103 [14] cd(::##40#46, ::String) at ./file.jl:70 while loading /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/compile.jl, in expression starting on line 17 From worker 1: Worker 2 failed running test backtrace: Some tests did not pass: 20 passed, 1 failed, 0 errored, 1 broken.backtrace: Test Failed Expression: have_backtrace Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 Worker 14 failed running test replutil: Some tests did not pass: 166 passed, 1 failed, 0 errored, 0 broken.replutil: Test Failed Expression: contains(err_str, "Cannot raise an integer x to a negative power -n") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 Worker 15 failed running test libgit2: Some tests did not pass: 391 passed, 2 failed, 0 errored, 0 broken.libgit2: Test Failed Expression: startswith(sprint(show, e), "GitError(Code:ENOTFOUND, Class:OS, Failed to resolve path") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 libgit2: Test Failed Expression: err.msg == "Invalid Content-Type: text/plain" Evaluated: "invalid Content-Type: text/plain" == "Invalid Content-Type: text/plain" Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 Worker 4 failed running test cmdlineargs: Some tests did not pass: 133 passed, 2 failed, 2 errored, 0 broken.cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1(::String) at $(joinpath(".", "loading.jl"))") Stacktrace: [1] record(::Base.Test.DefaultTestSet, ::Base.Test.Fail) at ./test.jl:568 [2] (::##40#46)() at /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl:160 [3] cd(::##40#46, ::String) at ./file.jl:70 cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: length(lno.captures) == 1 type Void has no field captures cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: parse(Int, lno.captures[1]) > 0 type Void has no field captures compile: Error During Test Test threw an exception of type Base.Test.TestSetException 
Expression: compile Some tests did not pass: 84 passed, 2 failed, 1 errored, 0 broken. Test Summary: | Pass Fail Error Broken Total Overall | 21627577 6 3 1311352 22938938 ambiguous | 51 51 linalg/givens | 1552 1552 linalg/pinv | 232 232 linalg/special | 942 942 linalg/schur | 300 300 linalg/svd | 244 244 linalg/generic | 206 206 linalg/uniformscaling | 247 247 linalg/hessenberg | 40 40 linalg/lapack | 628 628 linalg/conjarray | 10 10 linalg/tridiag | 843 843 linalg/symmetric | 1386 1386 linalg/bunchkaufman | 2519 2519 linalg/eigen | 381 381 linalg/rowvector | 135 135 linalg/bidiag | 1592 1592 sparse/spqr | 53 53 sparse/umfpack | 137 137 strings/search | 549 549 strings/util | 341 341 strings/io | 12536 12536 linalg/arnoldi | 76 76 unicode/UnicodeError | 1 1 strings/types | 8905 8905 strings/basic | 26976 26976 unicode/utf8 | 1048595 1048595 unicode/utf8proc | 765 765 dates/query | 988 988 dates/adjusters | 3147 3147 linalg/lu | 1149 1149 dates/rounding | 157 157 dates/types | 167 167 dates/accessors | 7723858 7723858 linalg/cholesky | 2104 2104 dates/conversions | 159 159 dates/arithmetic | 312 312 sparse/cholmod | 359 359 dates/ranges | 348530 348530 linalg/diagonal | 1140 1140 worlds | 61 61 dates/periods | 316 316 keywordargs | 109 109 char | 1707 1707 triplequote | 28 28 intrinsics | 29 29 linalg/qr | 3054 3054 dates/io | 241 241 iobuffer | 135 135 inference | 167 167 subtype | 337157 15 337172 staged | 48 48 tuple | 382 382 hashing | 14453 14453 reduce | 235 235 printf | 596 596 linalg/lq | 1360 1360 dict | 134165 134165 core | 35531 35531 intfuncs | 4137 4137 simdloop | 207 207 linalg/matmul | 527 527 vecelement | 533 533 copy | 312 312 random | 200711 200711 reducedim | 461 461 functional | 61 61 fastmath | 790 790 path | 208 12 220 operators | 59 59 parse | 1094 1094 loading | 23 23 bigint | 2391 2391 blas | 486 486 bigfloat | 24 24 ccall | 3852 3852 math | 551275 551275 backtrace | 20 1 1 22 linalg/dense | 6324 6324 iterators | 916 916 spawn | 107 4 111 statistics | 357 357 version | 124465 124465 numbers | 1477170 1477170 pollfd | 344 344 mpfr | 702 702 read | 2071 2071 mmap | 131 131 floatapprox | 49 49 socket | 77 77 sparse/sparse | 1518 1518 abstractarray | 2465 2465 regex | 34 34 datafmt | 83 83 float16 | 124 124 file | 804 804 combinatorics | 44 44 sysinfo | 2 2 env | 61 61 mod2pi | 9 9 rounding | 327 327 euler | 12 12 reflection | 275 275 offsetarray | 726 726 complex | 1094 1094 lineedit | 175 175 replcompletions | 268 2 270 goto | 13 13 llvmcall | 13 13 llvmcall2 | 6 6 resolve | 2648 2648 sets | 216 216 meta | 36 36 grisu | 683 683 stacktraces | 42 42 sparse/sparsevector | 8163 8163 profile | 7 7 replutil | 166 1 167 base64 | 9 9 repl | 113 113 docs | 193 193 markdown | 210 210 show | 146 146 serialize | 90 90 i18n | 2 2 sorting | 4864 4864 broadcast | 306 306 test | 201 14 215 libdl | 198 198 threads | 190563 190563 workspace | 1 1 intset | 148 148 int | 10138 10138 error | 28 28 cartesian | 2 2 asmvariant | 3 3 osutils | 21 21 inline | 23 23 enums | 79 79 iostream | 21 21 specificity | 100 100 checked | 1211 1211 floatfuncs | 122 122 boundscheck | No tests arrayops | 1556 1556 nullable | 98030 98030 channels | 187 187 dsp | 365 365 misc | 1279536 1279536 examples | 22 22 fft | 2026 2026 bitarray | 892433 892433 sparse/higherorderfns | 6377 584 6961 libgit2 | 391 2 393 ranges | 6981911 1310720 8292631 subarray | 200 200 cmdlineargs | 133 2 2 137 linalg/triangular | 33634 33634 compile | 1 1 distributed | No tests FAILURE Error in testset backtrace: Test Failed 
Expression: have_backtrace Error in testset replutil: Test Failed Expression: contains(err_str, "Cannot raise an integer x to a negative power -n") Error in testset libgit2: Test Failed Expression: startswith(sprint(show, e), "GitError(Code:ENOTFOUND, Class:OS, Failed to resolve path") Error in testset libgit2: Test Failed Expression: err.msg == "Invalid Content-Type: text/plain" Evaluated: "invalid Content-Type: text/plain" == "Invalid Content-Type: text/plain" Error in testset cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1") Error in testset cmdlineargs: Test Failed Expression: contains(bt, "include_from_node1(::String) at $(joinpath(".", "loading.jl"))") Error in testset cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: length(lno.captures) == 1 type Void has no field captures Error in testset cmdlineargs: Error During Test Test threw an exception of type ErrorException Expression: parse(Int, lno.captures[1]) > 0 type Void has no field captures Error in testset compile: Error During Test Test threw an exception of type Base.Test.TestSetException Expression: compile Some tests did not pass: 84 passed, 2 failed, 1 errored, 0 broken. ERROR: LoadError: Test run finished with errors while loading /gnu/store/26g8sbh008yb1k17cw1y660iynmbavp6-julia-0.6.0/share/julia/test/runtests.jl, in expression starting on line 29 ERROR: A test has failed. Please submit a bug report (https://github.com/JuliaLang/julia/issues) including error messages above and the output of versioninfo(): Julia Version 0.6.0 Commit 903644385b* (2017-06-19 13:05 UTC) Platform Info: OS: Linux (x86_64-unknown-linux-gnu) CPU: Intel(R) Xeon(R) CPU @ 2.50GHz WORD_SIZE: 64 BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Sandybridge) LAPACK: liblapack LIBM: libopenlibm LLVM: libLLVM-3.8.1 (ORCJIT, ivybridge) Stacktrace: [1] runtests(::Array{String,1}, ::Int64) at ./interactiveutil.jl:670 ```
non_process
test failures and crashes with running the test suite of julia we saw several test failures and errors the full output of the test suite is copied below including versioninfo the test failures seem to go away when using instead of but i m not sure yet what the problem is do you think these test failures could indicate real problems or are they spurious please let me know what other information i can provide env julia test maxrss mb home tmp gnu store julia bin julia check bounds yes startup file no e base runtest s max sys cpu cores test worker time s gc s gc alloc mb rss mb warning method definition ambig any integer in module ambiguous at gnu store julia share julia test ambiguous jl overwritten at gnu stor e julia share julia test ambiguous jl from worker skipping base from worker skipping base active repl from worker skipping base active repl backend from worker skipping base from worker skipping base active repl from worker skipping base active repl backend ambiguous linalg givens linalg pinv linalg special linalg schur linalg svd linalg generic linalg uniformscaling linalg hessenberg linalg lapack linalg conjarray linalg tridiag linalg symmetric linalg bunchkaufman linalg eigen linalg rowvector linalg bidiag sparse spqr sparse umfpack strings search strings util strings io linalg arnoldi unicode unicodeerror strings types strings basic unicode unicode dates query dates adjusters linalg lu dates rounding dates types dates accessors linalg cholesky dates conversions dates arithmetic sparse cholmod warning method definition any in module worlds at gnu store julia share julia test worlds jl overwritten at gnu store julia share julia test worlds jl dates ranges linalg diagonal worlds dates periods keywordargs char triplequote intrinsics warning method definition f in module at gnu store julia share julia test core jl overwritten at gnu store julia share julia test core jl linalg qr dates io iobuffer inference warning method definition test type tuple v union tuple tuple tuple tuple tuple tuple tuple tuple tuple tuple tuple tuple uin i union in module subtype at gnu store julia share julia test subtype jl overwr itten at gnu store julia share julia test subtype jl subtype warning method definition x in module staged at gnu store julia share julia test staged jl overwritten at gnu store julia share julia test staged jl warning method definition any in module staged at gnu store julia share julia test staged jl overwritten at gnu store julia share julia test staged jl staged tuple hashing reduce printf warning static parameter t does not occur in signature for bad tvars at gnu store julia share julia test core jl the method will not be callable linalg lq warning method definition type core any any in module core at gnu store julia share julia test core jl overwri tten at gnu store julia share julia test core jl warning method definition type core t any in module core at gnu store julia share julia test core jl overwritt en at gnu store julia share julia test core jl dict warning method definition type core t s any any in module core at gnu store julia share julia test core jl o verwritten at gnu store julia share julia test core jl warning method definition type core t s where s where t t s in module core at gnu store julia share julia test co re jl overwritten at gnu store julia share julia test core jl warning method definition type core t real s t real any any in module core at gnu store julia share julia test core jl overwritten at gnu store julia share julia test core jl warning method 
definition type core t s where s t where t real array s t real array t real in module core at gnu store julia share julia test core jl overwritten at gnu store julia share julia test core jl warning method definition type core array t n where n where t real in module core at gnu store julia share julia test core jl overwritten at gnu store julia share julia test core jl warning method definition type core any in module core at gnu store julia share julia test core jl overwritten at gnu store julia share julia test core jl core intfuncs simdloop linalg matmul vecelement copy random reducedim functional fastmath path operators parse loading bigint blas bigfloat ccall from worker math linalg dense iterators spawn statistics version numbers pollfd mpfr read mmap floatapprox socket sparse sparse abstractarray regex datafmt file combinatorics sysinfo env rounding euler reflection offsetarray complex lineedit replcompletions goto llvmcall resolve sets meta grisu stacktraces sparse sparsevector profile repl docs warning threaded loop executed in order markdown show serialize sorting broadcast test libdl threads workspace intset int error cartesian asmvariant osutils inline enums warning method definition f tuple vararg n abstractarray t n in module specificity at gnu store julia share julia test specific ity jl overwritten at gnu store julia share julia test specificity jl iostream specificity checked floatfuncs boundscheck arrayops nullable channels dsp warning readuntil io abstractstring will perform poorly with a long string misc warning readuntil io abstractstring will perform poorly with a long string warning readuntil io abstractstring will perform poorly with a long string examples fft bitarray sparse higherorderfns ranges subarray linalg triangular from worker compile test failed expression base test ismatch warn error loaderror declaring precompile false is not allowed in files that are being precompiled nstacktrace n precompile base test rea dstring fname stacktrace macro expansion at test jl compile at gnu store julia share julia test compile jl compile test failed expression base test ismatch warn error loaderror break me nstacktrace n error base test readstring fname stacktrace macro expansion at test jl compile at gnu store julia share julia test compile jl compile error during test got an exception of type loaderror outside of a test loaderror open permission denied eacces stacktrace uv error at libuv jl open string at filesystem jl touch string at file jl compile at gnu store julia share julia test compile jl withenv compile pair string void vararg pair string void n where n at env jl macro expansion at gnu store julia share julia test testdefs jl macro expansion at test jl macro expansion at util jl macro expansion at gnu store julia share julia test testdefs jl anonymous at runtests string bool at gnu store julia share julia test testdefs jl at gnu store julia share julia test runtests jl at gnu store julia share julia test runtests jl cd string at file jl while loading gnu store julia share julia test compile jl in expression starting on line from worker worker failed running test backtrace some tests did not pass passed failed errored broken backtrace test failed expression have backtrace stacktrace record base test defaulttestset base test fail at test jl at gnu store julia share julia test runtests jl cd string at file jl worker failed running test replutil some tests did not pass passed failed errored broken replutil test failed expression contains err str cannot raise an 
integer x to a negative power n stacktrace record base test defaulttestset base test fail at test jl at gnu store julia share julia test runtests jl cd string at file jl worker failed running test some tests did not pass passed failed errored broken test failed expression startswith sprint show e giterror code enotfound class os failed to resolve path stacktrace record base test defaulttestset base test fail at test jl at gnu store julia share julia test runtests jl cd string at file jl test failed expression err msg invalid content type text plain evaluated invalid content type text plain invalid content type text plain stacktrace record base test defaulttestset base test fail at test jl at gnu store julia share julia test runtests jl cd string at file jl worker failed running test cmdlineargs some tests did not pass passed failed errored broken cmdlineargs test failed expression contains bt include from stacktrace record base test defaulttestset base test fail at test jl at gnu store julia share julia test runtests jl cd string at file jl cmdlineargs test failed expression contains bt include from string at joinpath loading jl stacktrace record base test defaulttestset base test fail at test jl at gnu store julia share julia test runtests jl cd string at file jl cmdlineargs error during test test threw an exception of type errorexception expression length lno captures type void has no field captures cmdlineargs error during test test threw an exception of type errorexception expression parse int lno captures type void has no field captures compile error during test test threw an exception of type base test testsetexception expression compile some tests did not pass passed failed errored broken test summary pass fail error broken total overall ambiguous linalg givens linalg pinv linalg special linalg schur linalg svd linalg generic linalg uniformscaling linalg hessenberg linalg lapack linalg conjarray linalg tridiag linalg symmetric linalg bunchkaufman linalg eigen linalg rowvector linalg bidiag sparse spqr sparse umfpack strings search strings util strings io linalg arnoldi unicode unicodeerror strings types strings basic unicode unicode dates query dates adjusters linalg lu dates rounding dates types dates accessors linalg cholesky dates conversions dates arithmetic sparse cholmod dates ranges linalg diagonal worlds dates periods keywordargs char triplequote intrinsics linalg qr dates io iobuffer inference subtype staged tuple hashing reduce printf linalg lq dict core intfuncs simdloop linalg matmul vecelement copy random reducedim functional fastmath path operators parse loading bigint blas bigfloat ccall math backtrace linalg dense iterators spawn statistics version numbers pollfd mpfr read mmap floatapprox socket sparse sparse abstractarray regex datafmt file combinatorics sysinfo env rounding euler reflection offsetarray complex lineedit replcompletions goto llvmcall resolve sets meta grisu stacktraces sparse sparsevector profile replutil repl docs markdown show serialize sorting broadcast test libdl threads workspace intset int error cartesian asmvariant osutils inline enums iostream specificity checked floatfuncs boundscheck no tests arrayops nullable channels dsp misc examples fft bitarray sparse higherorderfns ranges subarray cmdlineargs linalg triangular compile distributed no tests failure error in testset backtrace test failed expression have backtrace error in testset replutil test failed expression contains err str cannot raise an integer x to a negative power n error in 
testset test failed expression startswith sprint show e giterror code enotfound class os failed to resolve path error in testset test failed expression err msg invalid content type text plain evaluated invalid content type text plain invalid content type text plain error in testset cmdlineargs test failed expression contains bt include from error in testset cmdlineargs test failed expression contains bt include from string at joinpath loading jl error in testset cmdlineargs error during test test threw an exception of type errorexception expression length lno captures type void has no field captures error in testset cmdlineargs error during test test threw an exception of type errorexception expression parse int lno captures type void has no field captures error in testset compile error during test test threw an exception of type base test testsetexception expression compile some tests did not pass passed failed errored broken error loaderror test run finished with errors while loading gnu store julia share julia test runtests jl in expression starting on line error a test has failed please submit a bug report including error messages above and the output of versioninfo julia version commit utc platform info os linux unknown linux gnu cpu intel r xeon r cpu word size blas libopenblas no lapack no lapacke dynamic arch no affinity sandybridge lapack liblapack libm libopenlibm llvm libllvm orcjit ivybridge stacktrace runtests array string at interactiveutil jl
0
10,731
4,081,203,105
IssuesEvent
2016-05-31 07:56:27
oppia/oppia
https://api.github.com/repos/oppia/oppia
reopened
Create a Top Rated Category in Library
feature: important library index page (@kevinlee12) loc: full-stack starter project TODO: code
Create a new category group in the library which will show the top 4-5 rated explorations. - [x] Have the gallery display the Top Rated category group - #1910 - [ ] Implement the [Wilson Score](http://www.goproblems.com/test/wilson/wilson.php?v1=0&v2=0&v3=3&v4=0&v5=0) calculations in SearchRanker
1.0
Create a Top Rated Category in Library - Create a new category group in the library which will show the top 4-5 rated explorations. - [x] Have the gallery display the Top Rated category group - #1910 - [ ] Implement the [Wilson Score](http://www.goproblems.com/test/wilson/wilson.php?v1=0&v2=0&v3=3&v4=0&v5=0) calculations in SearchRanker
non_process
create a top rated category in library create a new category group in the library which will show the top rated explorations have the gallery display the top rated category group implement the calculations in searchranker
0
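The Oppia record above asks for Wilson Score calculations in SearchRanker. A minimal sketch of that calculation, assuming a rating is first reduced to a count of positive votes out of a total; the function name and that reduction are illustrative assumptions, not Oppia's actual SearchRanker API:

```
import math

def wilson_lower_bound(n_pos: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a Bernoulli proportion.

    n_pos: positive ratings, n: total ratings,
    z: normal quantile (1.96 corresponds to ~95% confidence).
    """
    if n == 0:
        return 0.0  # no ratings yet -> rank at the bottom
    p = n_pos / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - spread) / denom

# 4/5 positive ranks below 40/50 positive: more evidence, tighter interval.
print(round(wilson_lower_bound(4, 5), 3))    # 0.376
print(round(wilson_lower_bound(40, 50), 3))  # 0.670
```

Sorting explorations by this lower bound rather than by the raw mean keeps a single 5-star rating from outranking consistently well-rated content.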
13,621
16,236,484,557
IssuesEvent
2021-05-07 01:48:59
Amr-Aboshama/XGeN
https://api.github.com/repos/Amr-Aboshama/XGeN
closed
Requirements of Preprocessor and Information Extraction
Information Extraction Preprocessor urgent
Preprocessor: - [ ] Research the features of the OCR library. - [x] Word Segmentation. - [x] Word Segmentation Logic. - [ ] Text Cleaning. - [x] Coreference. - [ ] More English Books (optional). Information Extraction: - [x] Check why NER isn't working well (maybe we will change the library to spacy). - [x] Frequent Item Set Mining.
1.0
Requirements of Preprocessor and Information Extraction - Preprocessor: - [ ] Research the features of the OCR library. - [x] Word Segmentation. - [x] Word Segmentation Logic. - [ ] Text Cleaning. - [x] Coreference. - [ ] More English Books (optional). Information Extraction: - [x] Check why NER isn't working well (maybe we will change the library to spacy). - [x] Frequent Item Set Mining.
process
requirements of preprocessor and information extraction preprocessor research the features of the ocr library word segmentation word segmentation logic text cleaning coreference more english books optional information extraction check why ner isn t working well maybe we will change the library to spacy frequent item set mining
1
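Since the XGeN record above considers switching its NER to spaCy, here is a minimal sketch of what that looks like; `en_core_web_sm` is spaCy's standard small English model and must be downloaded separately:

```
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Ada Lovelace wrote the first program in London in 1843.")
for ent in doc.ents:
    # Prints recognized spans with their entity labels, e.g. PERSON, GPE, DATE.
    print(ent.text, ent.label_)
```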
75,520
7,476,213,745
IssuesEvent
2018-04-04 01:45:45
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
fluentd fails to start with a kafka logging endpoint.
area/tools kind/bug status/resolved status/to-test version/2.0
**Rancher versions:** rancher 2.0 master on 29/03 **Steps to Reproduce:** Configure a kafka-based logging target **Results:** fluentd pods are failing with this error: ``` 2018-03-29 20:14:00 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf" 2018-03-29 20:14:00 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Other 'kafka_buffered' plugin already use same buffer path: type = kafka_buffered, buffer path = /fluentd/etc/buffer/cluster.buffer" ```
1.0
fluentd fails to start with a kafka logging endpoint. - **Rancher versions:** rancher 2.0 master on 29/03 **Steps to Reproduce:** Configure a kafka-based logging target **Results:** fluentd pods are failing with this error: ``` 2018-03-29 20:14:00 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf" 2018-03-29 20:14:00 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Other 'kafka_buffered' plugin already use same buffer path: type = kafka_buffered, buffer path = /fluentd/etc/buffer/cluster.buffer" ```
non_process
fluentd fails to start with a kafka logging endpoint rancher versions rancher master on steps to reproduce configure a kafka based logging target results fluentd pods are failing with this error parsing config file is succeeded path fluentd etc fluent conf config error file fluentd etc fluent conf error class fluent configerror error other kafka buffered plugin already use same buffer path type kafka buffered buffer path fluentd etc buffer cluster buffer
0
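The fluentd error in the record above ("already use same buffer path") arises when two `kafka_buffered` outputs share one `buffer_path`. A hedged configuration sketch of the usual fix follows; the broker address, topics, and paths are illustrative, not Rancher's actual generated config:

```
<match cluster.**>
  @type kafka_buffered
  brokers kafka:9092
  default_topic cluster-logs
  buffer_type file
  buffer_path /fluentd/etc/buffer/cluster.buffer
</match>

<match project.**>
  @type kafka_buffered
  brokers kafka:9092
  default_topic project-logs
  buffer_type file
  # A distinct path per output avoids the startup ConfigError above.
  buffer_path /fluentd/etc/buffer/project.buffer
</match>
```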
3,359
6,487,723,337
IssuesEvent
2017-08-20 10:46:47
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Empty Config File causes core dump
apps-all status-inprocess type-bug
Check for at least one group, and/or make sure that if no group is found the search does not continue. The latter is better. From https://github.com/Great-Hill-Corporation/ethslurp/issues/103
1.0
Empty Config File causes core dump - Check for at least one group, and/or make sure that if no group is found the search does not continue. The latter is better. From https://github.com/Great-Hill-Corporation/ethslurp/issues/103
process
empty config file causes core dump check for at least one group and or make sure that if no group is found the search does not continue the latter is better from
1
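A minimal sketch of the guard the quickBlocks record above proposes; quickBlocks itself is C++, so the Python below and its names are purely illustrative:

```
def find_key(config_groups: dict, group: str, key: str):
    """Return config_groups[group][key], refusing to search when the
    config is empty or the group is absent (instead of crashing)."""
    if not config_groups:      # empty config file: no groups at all
        return None
    section = config_groups.get(group)
    if section is None:        # group not found: stop, don't keep searching
        return None
    return section.get(key)

print(find_key({}, "settings", "api_key"))  # None, no core dump
```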
28,681
12,907,892,865
IssuesEvent
2020-07-15 06:15:31
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
closed
Post-migration housekeeping in the mono-repo mansion 🧹🧼 🏰
Project: DTS Service Delivery Service: Product Type: Operations Workgroup: DTS
For each product/project: - (If migrated) review all issues on board & make sure nothing looks too amiss in terms of pipelines - (If migrated) recompose open epics: - Open the epic & follow link to original issue - Open all issues in original issue and assign to new `atd-data-tech` epic - Add a `Product` label to projects if applicable, e.g. `Project: SMB 311 Module` gets `Product: SMB Data Tracker` - Consolidate labels, if necessary, e.g. `Product: VZA`, `Project: VZA App` - Make sure there is 100% overlap between issues assigned to each label (I use the Zenhub board view for this bulk counting/filtering/editing) - Delete the one we don’t want from the [labels](https://github.com/cityofaustin/atd-data-tech/labels) page. - Make sure the label what’s left has a nice description and the correct hex code: - Product: #3D3D3D - Project: #86B1C6 Jace - [x] Product: Vision Zero Crash Data System, Project: Vision Zero Crash Data System - [x] Product: Vision Zero Editor, Product: VZE - [x] Product: Vision Zero Viewer, Project: Vision Zero Viewer, Product: VZV - [x] Product: Mobile Signal Work Orders, Project: Mobile Signal Work Order - [x] Product: Signs Mobile Work Orders, Project: Signs Mobile Work Orders - [x] Product: Dockless Dataviz - [ ] Product: Dockless Licensing - [ ] Product: Micromobility - [ ] Product: Mobility Services - [x] Project: MDS Upgrades - [x] Project: Micromobility Dashboard Updates - [ ] Project: Mobility Data Privacy & Security - [ ] Project: BI Discovery Amenity - [x] Product: Vision Zero in Action, Project: VZA App, Project: VZA Citation App, Product: VZA App, Product: VZA - [x] Product: SMB Data Tracker, Product: Signs & Markings, - [x] Product: AMD Data Tracker, Product: AMD Tracker, - [x] Product: AMD Fulcrum - [x] Product: ATD Forms - [x] Product: Banners, Product: Signs & Markings, Project: Signs Migration, Project: SMB 311 Module - [x] Product: Mobility Project Database, Product: MPD, Project: Mobility Project Database, Project: Mobility Project Database - [x] Product: MPV, Product: Mobility Project Explorer - [x] Product: Mobility Project Editor - [x] Product: Finance & Inventory - [x] Product: Residential Parking Permit Digitization, Project: Residential Parking Permits, Product: RPP - [x] Product: Visitor Log - [x] Project: Beacon Data Tracking - [x] Project: Work Order Extension - [ ] Project: Warehouse Inventory - [x] Project: Paperless Hiring - [x] Project: Branding and Identity - [x] Project: Job Calendar Work Schedule - [x] Project: DTS Service Delivery Jaime - [x] Project: Online Bike Map - [x] Project: AGOL Audit - [x] Project: CTN - [x] Project: Parking Inventory - [x] Project: Print Bike Map - [x] Project: Data Driven PHB Ranking - [x] Project: Bike Rack Data Inventory - [x] Project: OSM Bicycle Data Update John (or reassign) - [x] Project: Development Review Workflow - [x] Project: TDSD Phase 2 - [x] Product: Data Lake, Project: Data Lake - [x] Product: ROW Portal - [x] Project: AWS Migration - [x] Project: DAPCZ Agenda Tool - [x] Project: AULCC Utility Mapping - [x] Project: B-Cycle Station Planning - [x] Project: Inspector Prioritization Automation - [ ] Project: Parking Enterprise Work Order System - [x] Project: Parking Technology Procurement - [x] Project: Sign Data Collection - [x] Project: Traffic Counts - [x] Project: Traffic Registry Digitization - [x] Project: TxDOT Center-to-Center - [x] Product: DTS Portal, Product: Service Requests - [ ] Product: Data & Performance Hub Tracy - [x] Project: ATD AMANDA Backlog - [x] Project: ROW Activity 
Dashboard - [x] Project: ROW Attendance Tracker - [x] Project: ROW Wishlist - [x] Project: ROWMAN Ph 2 - [x] Project: ROWMAN Ph 3 - [x] Project: ROWMAN Ph 4 - [x] Project: ROWMAN Ph 5 - [x] Project: ROWMAN Ph 6 - [x] Project: SCP3 - [x] Product: Transportation Development Services
2.0
Post-migration housekeeping in the mono-repo mansion 🧹🧼 🏰 - For each product/project: - (If migrated) review all issues on board & make sure nothing looks too amiss in terms of pipelines - (If migrated) recompose open epics: - Open the epic & follow link to original issue - Open all issues in original issue and assign to new `atd-data-tech` epic - Add a `Product` label to projects if applicable, e.g. `Project: SMB 311 Module` gets `Product: SMB Data Tracker` - Consolidate labels, if necessary, e.g. `Product: VZA`, `Project: VZA App` - Make sure there is 100% overlap between issues assigned to each label (I use the Zenhub board view for this bulk counting/filtering/editing) - Delete the one we don’t want from the [labels](https://github.com/cityofaustin/atd-data-tech/labels) page. - Make sure the label what’s left has a nice description and the correct hex code: - Product: #3D3D3D - Project: #86B1C6 Jace - [x] Product: Vision Zero Crash Data System, Project: Vision Zero Crash Data System - [x] Product: Vision Zero Editor, Product: VZE - [x] Product: Vision Zero Viewer, Project: Vision Zero Viewer, Product: VZV - [x] Product: Mobile Signal Work Orders, Project: Mobile Signal Work Order - [x] Product: Signs Mobile Work Orders, Project: Signs Mobile Work Orders - [x] Product: Dockless Dataviz - [ ] Product: Dockless Licensing - [ ] Product: Micromobility - [ ] Product: Mobility Services - [x] Project: MDS Upgrades - [x] Project: Micromobility Dashboard Updates - [ ] Project: Mobility Data Privacy & Security - [ ] Project: BI Discovery Amenity - [x] Product: Vision Zero in Action, Project: VZA App, Project: VZA Citation App, Product: VZA App, Product: VZA - [x] Product: SMB Data Tracker, Product: Signs & Markings, - [x] Product: AMD Data Tracker, Product: AMD Tracker, - [x] Product: AMD Fulcrum - [x] Product: ATD Forms - [x] Product: Banners, Product: Signs & Markings, Project: Signs Migration, Project: SMB 311 Module - [x] Product: Mobility Project Database, Product: MPD, Project: Mobility Project Database, Project: Mobility Project Database - [x] Product: MPV, Product: Mobility Project Explorer - [x] Product: Mobility Project Editor - [x] Product: Finance & Inventory - [x] Product: Residential Parking Permit Digitization, Project: Residential Parking Permits, Product: RPP - [x] Product: Visitor Log - [x] Project: Beacon Data Tracking - [x] Project: Work Order Extension - [ ] Project: Warehouse Inventory - [x] Project: Paperless Hiring - [x] Project: Branding and Identity - [x] Project: Job Calendar Work Schedule - [x] Project: DTS Service Delivery Jaime - [x] Project: Online Bike Map - [x] Project: AGOL Audit - [x] Project: CTN - [x] Project: Parking Inventory - [x] Project: Print Bike Map - [x] Project: Data Driven PHB Ranking - [x] Project: Bike Rack Data Inventory - [x] Project: OSM Bicycle Data Update John (or reassign) - [x] Project: Development Review Workflow - [x] Project: TDSD Phase 2 - [x] Product: Data Lake, Project: Data Lake - [x] Product: ROW Portal - [x] Project: AWS Migration - [x] Project: DAPCZ Agenda Tool - [x] Project: AULCC Utility Mapping - [x] Project: B-Cycle Station Planning - [x] Project: Inspector Prioritization Automation - [ ] Project: Parking Enterprise Work Order System - [x] Project: Parking Technology Procurement - [x] Project: Sign Data Collection - [x] Project: Traffic Counts - [x] Project: Traffic Registry Digitization - [x] Project: TxDOT Center-to-Center - [x] Product: DTS Portal, Product: Service Requests - [ ] Product: Data & Performance Hub Tracy - [x] 
Project: ATD AMANDA Backlog - [x] Project: ROW Activity Dashboard - [x] Project: ROW Attendance Tracker - [x] Project: ROW Wishlist - [x] Project: ROWMAN Ph 2 - [x] Project: ROWMAN Ph 3 - [x] Project: ROWMAN Ph 4 - [x] Project: ROWMAN Ph 5 - [x] Project: ROWMAN Ph 6 - [x] Project: SCP3 - [x] Product: Transportation Development Services
non_process
post migration housekeeping in the mono repo mansion 🧹🧼 🏰 for each product project if migrated review all issues on board make sure nothing looks too amiss in terms of pipelines if migrated recompose open epics open the epic follow link to original issue open all issues in original issue and assign to new atd data tech epic add a product label to projects if applicable e g project smb module gets product smb data tracker consolidate labels if necessary e g product vza project vza app make sure there is overlap between issues assigned to each label i use the zenhub board view for this bulk counting filtering editing delete the one we don’t want from the page make sure the label what’s left has a nice description and the correct hex code product project jace product vision zero crash data system project vision zero crash data system product vision zero editor product vze product vision zero viewer project vision zero viewer product vzv product mobile signal work orders project mobile signal work order product signs mobile work orders project signs mobile work orders product dockless dataviz product dockless licensing product micromobility product mobility services project mds upgrades project micromobility dashboard updates project mobility data privacy security project bi discovery amenity product vision zero in action project vza app project vza citation app product vza app product vza product smb data tracker product signs markings product amd data tracker product amd tracker product amd fulcrum product atd forms product banners product signs markings project signs migration project smb module product mobility project database product mpd project mobility project database project mobility project database product mpv product mobility project explorer product mobility project editor product finance inventory product residential parking permit digitization project residential parking permits product rpp product visitor log project beacon data tracking project work order extension project warehouse inventory project paperless hiring project branding and identity project job calendar work schedule project dts service delivery jaime project online bike map project agol audit project ctn project parking inventory project print bike map project data driven phb ranking project bike rack data inventory project osm bicycle data update john or reassign project development review workflow project tdsd phase product data lake project data lake product row portal project aws migration project dapcz agenda tool project aulcc utility mapping project b cycle station planning project inspector prioritization automation project parking enterprise work order system project parking technology procurement project sign data collection project traffic counts project traffic registry digitization project txdot center to center product dts portal product service requests product data performance hub tracy project atd amanda backlog project row activity dashboard project row attendance tracker project row wishlist project rowman ph project rowman ph project rowman ph project rowman ph project rowman ph project product transportation development services
0
11,448
14,269,791,303
IssuesEvent
2020-11-21 03:11:40
kubeflow/kubeflow
https://api.github.com/repos/kubeflow/kubeflow
closed
[Release 1.1] Release Jupyter
area/jupyter kind/process lifecycle/stale priority/p0
/kind process Opening this issue to track releasing Jupyter for Kubeflow 1.1. Per #5022 we need the following: * Updated docker images (for controller, JWA, and notebook images kubeflow/kubeflow#4789) * Updated manifests * Updated docs
1.0
[Release 1.1] Release Jupyter - /kind process Opening this issue to track releasing Jupyter for Kubeflow 1.1. Per #5022 we need the following: * Updated docker images (for controller, JWA, and notebook images kubeflow/kubeflow#4789) * Updated manifests * Updated docs
process
release jupyter kind process opening this issue to track releasing jupyter for kubeflow per we need the following updated docker images for controller jwa and notebook images kubeflow kubeflow updated manifests updated docs
1
47
2,491,833,984
IssuesEvent
2015-01-04 02:20:09
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
3.0 - Contain examples unclear/not working properly
documentation
The examples in the docs and the doc block for `Query::contain()` are a little unclear if you ask me: ```php // Set options for the articles that will be eagerly loaded for an author $query->contain([ 'Articles' => [ 'fields' => ['title'] ] ]); ``` Without any additional information, I would assume this to be a `Authors hasMany Articles` association, and that would trigger an error saying that is required to select the `Articles.author_id`. Assuming this is the correct behavior, I guess the example should either mention that and include the foreign key column, or it should use an association that can be clearly identified as `belongsTo`. --- ```php // Use special join conditions for getting an article author's 'likes' $query->contain([ 'Likes' => [ 'foreignKey' => false, 'queryBuilder' => function ($q) { return $q->where(...); // Add full filtering conditions } ] ]); ``` Again the association has to be guessed, and to me it sounds like `hasMany`, ie `Authors/Articles hasMany Likes`. That would however result in an SQL error, as the `WHERE` part would look somewhat like this: ```SQL WHERE (Likes. in (:c0,:c1,:c2) AND custom_filtering_conditions_here) ``` having an `IN()` condition without a column name.
1.0
3.0 - Contain examples unclear/not working properly - The examples in the docs and the doc block for `Query::contain()` are a little unclear if you ask me: ```php // Set options for the articles that will be eagerly loaded for an author $query->contain([ 'Articles' => [ 'fields' => ['title'] ] ]); ``` Without any additional information, I would assume this to be a `Authors hasMany Articles` association, and that would trigger an error saying that is required to select the `Articles.author_id`. Assuming this is the correct behavior, I guess the example should either mention that and include the foreign key column, or it should use an association that can be clearly identified as `belongsTo`. --- ```php // Use special join conditions for getting an article author's 'likes' $query->contain([ 'Likes' => [ 'foreignKey' => false, 'queryBuilder' => function ($q) { return $q->where(...); // Add full filtering conditions } ] ]); ``` Again the association has to be guessed, and to me it sounds like `hasMany`, ie `Authors/Articles hasMany Likes`. That would however result in an SQL error, as the `WHERE` part would look somewhat like this: ```SQL WHERE (Likes. in (:c0,:c1,:c2) AND custom_filtering_conditions_here) ``` having an `IN()` condition without a column name.
non_process
contain examples unclear not working properly the examples in the docs and the doc block for query contain are a little unclear if you ask me php set options for the articles that will be eagerly loaded for an author query contain articles fields without any additional information i would assume this to be a authors hasmany articles association and that would trigger an error saying that is required to select the articles author id assuming this is the correct behavior i guess the example should either mention that and include the foreign key column or it should use an association that can be clearly identified as belongsto php use special join conditions for getting an article author s likes query contain likes foreignkey false querybuilder function q return q where add full filtering conditions again the association has to be guessed and to me it sounds like hasmany ie authors articles hasmany likes that would however result in an sql error as the where part would look somewhat like this sql where likes in and custom filtering conditions here having an in condition without a column name
0
2,267
5,102,447,393
IssuesEvent
2017-01-04 18:20:29
LazyTroll/WikiCode
https://api.github.com/repos/LazyTroll/WikiCode
closed
Integration of WikiComments.
introduction process task
### Integrate a new comment-management module into the platform. Integrate it both into paragraphs and into general comments. The frontend for this effort is provisional. The main thing is that it works.
1.0
Integration of WikiComments. - ### Integrate a new comment-management module into the platform. Integrate it both into paragraphs and into general comments. The frontend for this effort is provisional. The main thing is that it works.
process
integration of wikicomments integrate a new comment management module into the platform integrate it both into paragraphs and into general comments the frontend for this effort is provisional the main thing is that it works
1
8,818
11,936,420,081
IssuesEvent
2020-04-02 10:17:28
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Wrong versions of binaries are downloaded during generation
bug/2-confirmed kind/bug process/candidate
## Bug description Wrong versions of binaries are downloaded during generation ## How to reproduce I've set up a repository with a detailed README that precisely describes and demonstrates the problem. It is fairly long, but the problem is pretty hard to describe, so please read through the whole thing and let me know if anything is unclear: https://github.com/madebysid/prisma2-client-binaries ## Expected behavior A proposed solution is included in the README. ## Prisma information Irrelevant ## Environment & setup Irrelevant Also, here is this problem demonstrated in the wild: https://github.com/prisma/studio/issues/391
1.0
Wrong versions of binaries are downloaded during generation - ## Bug description Wrong versions of binaries are downloaded during generation ## How to reproduce I've set up a repository with a detailed README that precisely describes and demonstrates the problem. It is fairly long, but the problem is pretty hard to describe, so please read through the whole thing and let me know if anything is unclear: https://github.com/madebysid/prisma2-client-binaries ## Expected behavior A proposed solution is included in the README. ## Prisma information Irrelevant ## Environment & setup Irrelevant Also, here is this problem demonstrated in the wild: https://github.com/prisma/studio/issues/391
process
wrong versions of binaries are downloaded during generation bug description wrong versions of binaries are downloaded during generation how to reproduce i ve set up a repository with a detailed readme that precisely describes and demonstrates the problem it is fairly long but the problem is pretty hard to describe so please read through the whole thing and let me know if anything is unclear expected behavior a proposed solution is included in the readme prisma information irrelevant environment setup irrelevant also here is this problem demonstrated in the wild
1
254,568
21,794,181,782
IssuesEvent
2022-05-15 11:27:32
tijlleenders/ZinZen
https://api.github.com/repos/tijlleenders/ZinZen
closed
Make unit test for parser for goals
test 1 point
We need a function that parses a goal object. Use test-driven development - so make the empty function and a unit test first. This issue is not about actually developing the function. The goal object has three key-values by default: - title : "" - lang : "XX" (whatever the ISO 2 alpha code is for the language chosen by the user) - color : "HEXCODE" If a number+"h" is detected it should add a duration suggestion to the suggestion array. If it detects "daily" it should add a repetition suggestion to the suggestion array. Input: - title : "Walk 1h daily" - lang : "EN" - color : "HEXCODE" Output: - title : "Walk 1h daily" - lang : "EN" - color : "HEXCODE" - suggestions : [ duration: 1, repetition : "daily" ]
1.0
Make unit test for parser for goals - We need a function that parses a goal object. Use test-driven development - so make the empty function and a unit test first. This issue is not about actually developing the function. The goal object has three key-values by default: - title : "" - lang : "XX" (whatever the ISO 2 alpha code is for the language chosen by the user) - color : "HEXCODE" If a number+"h" is detected it should add a duration suggestion to the suggestion array. If it detects "daily" it should add a repetition suggestion to the suggestion array. Input: - title : "Walk 1h daily" - lang : "EN" - color : "HEXCODE" Output: - title : "Walk 1h daily" - lang : "EN" - color : "HEXCODE" - suggestions : [ duration: 1, repetition : "daily" ]
non_process
make unit test for parser for goals we need a function that parses a goal object use test driven development so make the empty function and a unit test first this issue is not about actually developing the function the goal object has three key values by default title lang xx whatever the iso alpha code is for the language chosen by the user color hexcode if a number h is detected it should add a duration suggestion to the suggestion array if it detects daily it should add a repetition suggestion to the suggestion array input title walk daily lang en color hexcode output title walk daily lang en color hexcode suggestions
0
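A minimal, test-first Python sketch of the parser described in the record above. The function name `parse_goal`, the regex, and the single-dict shape of the `suggestions` list are assumptions, not details from the issue.

```python
import re

def parse_goal(goal):
    """Return a copy of the goal dict with a 'suggestions' list added.

    Detects '<number>h' as a duration suggestion and the word 'daily'
    as a repetition suggestion, per the issue's input/output example.
    """
    result = dict(goal)
    suggestions = {}
    duration = re.search(r"\b(\d+)h\b", goal.get("title", ""))
    if duration:
        suggestions["duration"] = int(duration.group(1))
    if "daily" in goal.get("title", "").lower():
        suggestions["repetition"] = "daily"
    if suggestions:
        result["suggestions"] = [suggestions]
    return result

def test_parse_goal_walk_daily():
    goal = {"title": "Walk 1h daily", "lang": "EN", "color": "HEXCODE"}
    parsed = parse_goal(goal)
    assert parsed["suggestions"] == [{"duration": 1, "repetition": "daily"}]
```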
5,219
8,017,742,841
IssuesEvent
2018-07-25 16:50:37
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
allow length operator in macroprocessor to return the length of strings
enhancement preprocessor
At present, the length operator applied to a string always returns 1. It would be good if it returned the length of the string. Try e.g.: ``` @#define Numbers = [ "1", "2", "3", "4", "5", "6" ] @#define TestString = "Hello" @#echo Numbers[ length( TestString ) ] parameters a; a = 1; ``` Incidentally, it would be good if there were a number-to-string function in the pre-processor!
1.0
allow length operator in macroprocessor to return the length of strings - At present, the length operator applied to a string always returns 1. It would be good if it returned the length of the string. Try e.g.: ``` @#define Numbers = [ "1", "2", "3", "4", "5", "6" ] @#define TestString = "Hello" @#echo Numbers[ length( TestString ) ] parameters a; a = 1; ``` Incidentally, it would be good if there were a number-to-string function in the pre-processor!
process
allow length operator in macroprocessor to return the length of strings at present the length operator applied to a string always returns it would be good if it returned the length of the string try e g define numbers define teststring hello echo numbers parameters a a incidentally it would be good if there were a number to string function in the pre processor
1
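A Python mimic of the macro semantics requested above (not Dynare preprocessor code); it assumes the macro language's arrays are 1-based.

```python
# Python mimic of the requested macro semantics: length() on a string
# should count characters, not return 1. With 1-based macro arrays,
# Numbers[ length( TestString ) ] and length("Hello") == 5 would
# select the fifth element.
Numbers = ["1", "2", "3", "4", "5", "6"]
TestString = "Hello"
print(Numbers[len(TestString) - 1])  # -> "5" (adjusting for 0-based Python lists)
```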
79,114
7,696,341,563
IssuesEvent
2018-05-18 15:02:13
atom/atom
https://api.github.com/repos/atom/atom
closed
Flaky tests: line ending selector Status bar tile ...
flaky-test
We've seen the line-ending-selector tests below fail twice in the past week: https://circleci.com/gh/atom/atom/7475 https://circleci.com/gh/atom/atom/7499 Rebuilding can resolve the failures: <img width="1065" alt="circleci" src="https://user-images.githubusercontent.com/2988/40060813-7467c768-5825-11e8-8bad-0836c878e6b5.png"> These recent failures are correlated with the recent Electron 2.0 upgrade (#17273), although we've seen these failures before (e.g., https://circleci.com/gh/atom/atom/7267). ### Failures ``` Package tests failed for line-ending-selector: ..........FFF. line ending selector Status bar tile clicking the tile when selecting a different line ending for the file it changes the line endings in the buffer timeout: timed out after 60000 msec waiting for something to happen when modal is exited it leaves the tile selection as-is Expected 'CRLF' to be ''. at jasmine.Spec.runs (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:24:50) timeout: timed out after 60000 msec waiting for something to happen closing the last text editor it displays no line ending in the status bar Expected 'CRLF' to be ''. at jasmine.Spec.runs (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:24:50) Expected '' to be 'CRLF'. at lineEndingTile.onDidChange (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:232:58) at Function.module.exports.Emitter.simpleDispatch (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:25:14) at Emitter.module.exports.Emitter.emit (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:141:28) at StatusBarItem.setLineEndings (/Users/distiller/atom/node_modules/line-ending-selector/lib/status-bar-item.js:15:18) at /Users/distiller/atom/out/app/node_modules/line-ending-selector/lib/main.js:104:21 at Function.module.exports.Emitter.simpleDispatch (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:25:14) at Emitter.module.exports.Emitter.emit (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:141:28) at Workspace.didChangeActivePaneItemOnPaneContainer (/Users/distiller/atom/src/workspace.js:438:22) at WorkspaceCenter.paneContainer.onDidChangeActivePaneItem (/Users/distiller/atom/src/workspace-center.js:17:14) at Function.module.exports.Emitter.simpleDispatch (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:25:14) at Emitter.module.exports.Emitter.emit (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:141:28) at PaneContainer.didChangeActiveItemOnPane (/Users/distiller/atom/src/pane-container.js:281:20) at Pane.setActiveItem (/Users/distiller/atom/src/pane.js:440:42) at Pane.removeItem (/Users/distiller/atom/src/pane.js:694:14) at Pane.destroyItem (/Users/distiller/atom/src/pane.js:779:10) at Promise.all.getItems.map.item (/Users/distiller/atom/src/pane.js:787:40) at Array.map (<anonymous>) at Pane.destroyItems (/Users/distiller/atom/src/pane.js:787:23) at Pane.destroy (/Users/distiller/atom/src/pane.js:1024:19) at atom.workspace.open.then (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:258:44) at <anonymous> Finished in 121.406 seconds 14 tests, 70 assertions, 5 failures, 0 skipped ```
1.0
Flaky tests: line ending selector Status bar tile ... - We've seen the line-ending-selector tests below fail twice in the past week: https://circleci.com/gh/atom/atom/7475 https://circleci.com/gh/atom/atom/7499 Rebuilding can resolve the failures: <img width="1065" alt="circleci" src="https://user-images.githubusercontent.com/2988/40060813-7467c768-5825-11e8-8bad-0836c878e6b5.png"> These recent failures are correlated with the recent Electron 2.0 upgrade (#17273), although we've seen these failures before (e.g., https://circleci.com/gh/atom/atom/7267). ### Failures ``` Package tests failed for line-ending-selector: ..........FFF. line ending selector Status bar tile clicking the tile when selecting a different line ending for the file it changes the line endings in the buffer timeout: timed out after 60000 msec waiting for something to happen when modal is exited it leaves the tile selection as-is Expected 'CRLF' to be ''. at jasmine.Spec.runs (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:24:50) timeout: timed out after 60000 msec waiting for something to happen closing the last text editor it displays no line ending in the status bar Expected 'CRLF' to be ''. at jasmine.Spec.runs (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:24:50) Expected '' to be 'CRLF'. at lineEndingTile.onDidChange (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:232:58) at Function.module.exports.Emitter.simpleDispatch (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:25:14) at Emitter.module.exports.Emitter.emit (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:141:28) at StatusBarItem.setLineEndings (/Users/distiller/atom/node_modules/line-ending-selector/lib/status-bar-item.js:15:18) at /Users/distiller/atom/out/app/node_modules/line-ending-selector/lib/main.js:104:21 at Function.module.exports.Emitter.simpleDispatch (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:25:14) at Emitter.module.exports.Emitter.emit (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:141:28) at Workspace.didChangeActivePaneItemOnPaneContainer (/Users/distiller/atom/src/workspace.js:438:22) at WorkspaceCenter.paneContainer.onDidChangeActivePaneItem (/Users/distiller/atom/src/workspace-center.js:17:14) at Function.module.exports.Emitter.simpleDispatch (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:25:14) at Emitter.module.exports.Emitter.emit (/Users/distiller/atom/node_modules/event-kit/lib/emitter.js:141:28) at PaneContainer.didChangeActiveItemOnPane (/Users/distiller/atom/src/pane-container.js:281:20) at Pane.setActiveItem (/Users/distiller/atom/src/pane.js:440:42) at Pane.removeItem (/Users/distiller/atom/src/pane.js:694:14) at Pane.destroyItem (/Users/distiller/atom/src/pane.js:779:10) at Promise.all.getItems.map.item (/Users/distiller/atom/src/pane.js:787:40) at Array.map (<anonymous>) at Pane.destroyItems (/Users/distiller/atom/src/pane.js:787:23) at Pane.destroy (/Users/distiller/atom/src/pane.js:1024:19) at atom.workspace.open.then (/Users/distiller/atom/node_modules/line-ending-selector/spec/line-ending-selector-spec.js:258:44) at <anonymous> Finished in 121.406 seconds 14 tests, 70 assertions, 5 failures, 0 skipped ```
non_process
flaky tests line ending selector status bar tile we ve seen the line ending selector tests below fail twice in the past week rebuilding can resolve the failures img width alt circleci src these recent failures are correlated with the recent electron upgrade although we ve seen these failures before e g failures package tests failed for line ending selector fff line ending selector status bar tile clicking the tile when selecting a different line ending for the file it changes the line endings in the buffer timeout timed out after msec waiting for something to happen when modal is exited it leaves the tile selection as is expected crlf to be at jasmine spec runs users distiller atom node modules line ending selector spec line ending selector spec js timeout timed out after msec waiting for something to happen closing the last text editor it displays no line ending in the status bar expected crlf to be at jasmine spec runs users distiller atom node modules line ending selector spec line ending selector spec js expected to be crlf at lineendingtile ondidchange users distiller atom node modules line ending selector spec line ending selector spec js at function module exports emitter simpledispatch users distiller atom node modules event kit lib emitter js at emitter module exports emitter emit users distiller atom node modules event kit lib emitter js at statusbaritem setlineendings users distiller atom node modules line ending selector lib status bar item js at users distiller atom out app node modules line ending selector lib main js at function module exports emitter simpledispatch users distiller atom node modules event kit lib emitter js at emitter module exports emitter emit users distiller atom node modules event kit lib emitter js at workspace didchangeactivepaneitemonpanecontainer users distiller atom src workspace js at workspacecenter panecontainer ondidchangeactivepaneitem users distiller atom src workspace center js at function module exports emitter simpledispatch users distiller atom node modules event kit lib emitter js at emitter module exports emitter emit users distiller atom node modules event kit lib emitter js at panecontainer didchangeactiveitemonpane users distiller atom src pane container js at pane setactiveitem users distiller atom src pane js at pane removeitem users distiller atom src pane js at pane destroyitem users distiller atom src pane js at promise all getitems map item users distiller atom src pane js at array map at pane destroyitems users distiller atom src pane js at pane destroy users distiller atom src pane js at atom workspace open then users distiller atom node modules line ending selector spec line ending selector spec js at finished in seconds tests assertions failures skipped
0
3,332
6,451,825,837
IssuesEvent
2017-08-15 00:38:45
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
opened
Support SELECT CURRENT_USER
QUERY PROCESSOR
The statement `SELECT CURRENT_USER()` needs to be supported in: * [ ] MySQL Session * [ ] Admin Session * [ ] SQLite3 Session * [ ] ClickHouse Session
1.0
Support SELECT CURRENT_USER - The statement `SELECT CURRENT_USER()` needs to be supported in: * [ ] MySQL Session * [ ] Admin Session * [ ] SQLite3 Session * [ ] ClickHouse Session
process
support select current user the statement select current user needs to be supported in mysql session admin session session clickhouse session
1
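A hedged Python sketch of how a query processor might satisfy this request by short-circuiting the statement instead of forwarding it to a backend; the function name and resultset shape are illustrative, not ProxySQL's actual internals.

```python
import re

CURRENT_USER_RE = re.compile(
    r"^\s*SELECT\s+CURRENT_USER(\s*\(\s*\))?\s*;?\s*$", re.IGNORECASE
)

def handle_statement(sql, session_user, session_host):
    """Short-circuit SELECT CURRENT_USER[()] with a synthesized one-row
    resultset; return None to signal normal backend forwarding."""
    if CURRENT_USER_RE.match(sql):
        return {"columns": ["CURRENT_USER()"],
                "rows": [[f"{session_user}@{session_host}"]]}
    return None

print(handle_statement("SELECT CURRENT_USER();", "admin", "127.0.0.1"))
```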
7,929
11,104,412,582
IssuesEvent
2019-12-17 07:28:28
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Translate (Convert Format) produces a blank image
Bug Feedback Processing
When I try to convert Landsat-8 imagery from UInt16 (Unsigned Integer 16 Bit) to Byte (Unsigned Integer 8 Bit), the algorithm produces a blank image. This video shows the issue: https://drive.google.com/file/d/1MnEaYDVF4b4363RTN05ZLsNb4UyHj-aT/view?usp=sharing **Another question:** I was unable to include the parameter **-scale** to stretch data correctly as I do via the GDAL command prompt. QGIS version 3.4.11 64 Bit, Operational System Windows 10 Pro.
1.0
Translate (Convert Format) produces a blank image - When I try to convert Landsat-8 imagery from UInt16 (Unsigned Integer 16 Bit) to Byte (Unsigned Integer 8 Bit), the algorithm produces a blank image. This video shows the issue: https://drive.google.com/file/d/1MnEaYDVF4b4363RTN05ZLsNb4UyHj-aT/view?usp=sharing **Another question:** I was unable to include the parameter **-scale** to stretch data correctly as I do via the GDAL command prompt. QGIS version 3.4.11 64 Bit, Operational System Windows 10 Pro.
process
translate convert format produces a blank image when i try to convert landsat imagery from unsigned integer bit to byte unsigned integer bit the algorithm produces a blank image this video shows the issue another question i was unable to include the parameter scale to stretch data correctly as i do via the gdal command prompt qgis version bit operational system windows pro
1
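One way to express the reporter's command-prompt workflow through the GDAL Python bindings; the file paths and scale bounds below are placeholders, not values from the report.

```python
from osgeo import gdal

# Convert a UInt16 Landsat-8 band to Byte while stretching values,
# the equivalent of `gdal_translate -ot Byte -scale 0 65535 0 255`.
# Input/output paths are placeholders.
gdal.Translate(
    "landsat_byte.tif",
    "landsat_uint16.tif",
    outputType=gdal.GDT_Byte,
    scaleParams=[[0, 65535, 0, 255]],  # src_min src_max dst_min dst_max
)
```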
20,742
27,446,133,523
IssuesEvent
2023-03-02 14:24:47
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Obsoletion request: [GO:0043927 exonucleolytic nuclear-transcribed mRNA catabolic process involved in endonucleolytic cleavage-dependent decay has no annotation and no reference]
RNA processes obsoletion
Please provide as much information as you can: * **GO term ID and Label** GO:0043927 exonucleolytic nuclear-transcribed mRNA catabolic process involved in endonucleolytic cleavage-dependent decay has no annotation and no reference * **Reason for deprecation** Put an x in the appropriate box: - [x] The reason for obsoletion is that the term is not clearly defined and usage has been inconsistent. Also merges multiple pathway concepts * **"Replace by" term (ID and label)** If all annotations can safely be moved to that term GO:0043927 exonucleolytic nuclear-transcribed mRNA catabolic process * **Are there annotations to this term?** - How many EXP: NONE * **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)** * **Is this term in a subset? (check the AmiGO page for that term)** * **Any other information** ---- Checklist for ontology editor ***Check term usage and metadata in Protégé*** - [x] check term usage in the ontology - [x] check internal mappings: RHEA, EC, MetaCyc - [x] check subset usage - [x] check taxon constraints ***Check annotations*** - [x] find external mappings (via IEAS), include in obsoletion notice ***Notification*** - [x] create [obsoletion announcement](https://github.com/geneontology/go-announcements/issues/new?assignees=&labels=obsoletion&template=obsoletion-notice.md&title=Obsoletion+notice%3A+%5BGO+ID%3A+term+label%5D) - [x] announce to GO friends (go-friends@mailman.stanford.edu) - [ ] paste the text in the ontology ticket
1.0
Obsoletion request: [GO:0043927 exonucleolytic nuclear-transcribed mRNA catabolic process involved in endonucleolytic cleavage-dependent decay has no annotation and no reference] - Please provide as much information as you can: * **GO term ID and Label** GO:0043927 exonucleolytic nuclear-transcribed mRNA catabolic process involved in endonucleolytic cleavage-dependent decay has no annotation and no reference * **Reason for deprecation** Put an x in the appropriate box: - [x] The reason for obsoletion is that the term is not clearly defined and usage has been inconsistent. Also merges multiple pathway concepts * **"Replace by" term (ID and label)** If all annotations can safely be moved to that term GO:0043927 exonucleolytic nuclear-transcribed mRNA catabolic process * **Are there annotations to this term?** - How many EXP: NONE * **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)** * **Is this term in a subset? (check the AmiGO page for that term)** * **Any other information** ---- Checklist for ontology editor ***Check term usage and metadata in Protégé*** - [x] check term usage in the ontology - [x] check internal mappings: RHEA, EC, MetaCyc - [x] check subset usage - [x] check taxon constraints ***Check annotations*** - [x] find external mappings (via IEAS), include in obsoletion notice ***Notification*** - [x] create [obsoletion announcement](https://github.com/geneontology/go-announcements/issues/new?assignees=&labels=obsoletion&template=obsoletion-notice.md&title=Obsoletion+notice%3A+%5BGO+ID%3A+term+label%5D) - [x] announce to GO friends (go-friends@mailman.stanford.edu) - [ ] paste the text in the ontology ticket
process
obsoletion request please provide as much information as you can go term id and label go exonucleolytic nuclear transcribed mrna catabolic process involved in endonucleolytic cleavage dependent decay has no annotation and no reference reason for deprecation put an x in the appropriate box the reason for obsoletion is that the term is not clearly defined and usage has been inconsistent also merges multiple pathway concepts replace by term id and label if all annotations can safely be moved to that term go exonucleolytic nuclear transcribed mrna catabolic process are there annotations to this term how many exp none are there mappings and cross references to this term interpro keywords check quickgo cross references section is this term in a subset check the amigo page for that term any other information checklist for ontology editor check term usage and metadata in protégé check term usage in the ontology check internal mappings rhea ec metacyc check subset usage check taxon constraints check annotations find external mappings via ieas include in obsoletion notice notification create announce to go friends go friends mailman stanford edu paste the text in the ontology ticket
1
15,217
19,072,808,768
IssuesEvent
2021-11-27 07:34:55
sumneko/lua-language-server
https://api.github.com/repos/sumneko/lua-language-server
closed
Add support for `?` in nonstandard symbols
preprocess
If possible, can support be added for a new nonstandard symbol, the `?` operator? Used by certain runtimes to mimic the conditional/safe access from TypeScript, with `object?.property?.property[index]?.value` as an example. It is a fairly common syntax, used widely in other languages, so it's probably on the list of "more common to be used in a custom runtime" Currently implemented/used in the FiveM/CFX runtime, see: https://github.com/citizenfx/lua/blob/luaglm-dev/cfx/README.md#power-patches Essentially, when parsing the file just ignore them, and treat the property access as normal. Perhaps when parsing it should be typed as always possibly null, like `boolean | undefined` in TypeScript? Not sure of the equivalent
1.0
Add support for `?` in nonstandard symbols - If possible, can support be added for a new nonstandard symbol, the `?` operator? Used by certain runtimes to mimic the conditional/safe access from TypeScript, with `object?.property?.property[index]?.value` as an example. It is a fairly common syntax, used widely in other languages, so it's probably on the list of "more common to be used in a custom runtime" Currently implemented/used in the FiveM/CFX runtime, see: https://github.com/citizenfx/lua/blob/luaglm-dev/cfx/README.md#power-patches Essentially, when parsing the file just ignore them, and treat the property access as normal. Perhaps when parsing it should be typed as always possibly null, like `boolean | undefined` in TypeScript? Not sure of the equivalent
process
add support for in nonstandard symbols if possible can support be added for a new nonstandard symbol the operator used by certain runtimes to mimic the conditional safe access from typescript with object property property value as an example it is a fairly common syntax used widely in other languages so it s probably on the list of more common to be used in a custom runtime currently implemented used in the fivem cfx runtime see essentially when parsing the file just ignore them and treat the property access as normal perhaps when parsing it should be typed as always possibly null like boolean undefined in typescript not sure of the equivalent
1
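A Python sketch of the None-propagating semantics the `?` operator provides; the `safe_get` helper is a hypothetical illustration of the behaviour, not part of the proposal.

```python
def safe_get(obj, *path):
    """None-propagating access, mimicking chains such as
    `object?.property?.property[index]?.value`."""
    for step in path:
        if obj is None:
            return None
        if isinstance(step, int):
            try:
                obj = obj[step]
            except (IndexError, KeyError, TypeError):
                return None
        elif isinstance(obj, dict):
            obj = obj.get(step)
        else:
            obj = getattr(obj, step, None)
    return obj

data = {"property": {"property": [{"value": 42}]}}
print(safe_get(data, "property", "property", 0, "value"))  # -> 42
print(safe_get(data, "missing", "property", 0, "value"))   # -> None
```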
15,836
10,351,499,099
IssuesEvent
2019-09-05 07:02:54
kyma-project/console
https://api.github.com/repos/kyma-project/console
closed
Disable Bind application button in case no applications are available
area/console area/service-catalog enhancement
**Description** - disable the bind application button in instance details when no applications are available - enrich the experience by explaining to the user why the button is disabled, how they can create a new application, and what `kind` of applications those are **Reasons** If you provision the first instance in a new env with no applications, you can click `Bind application` and in a modal try to select an existing application to bind to; the problem is there are no applications. This is confusing and not a user-friendly approach. **Attachments** ![screen shot 2019-01-29 at 09 49 34](https://user-images.githubusercontent.com/6995927/51895758-3c539080-23ab-11e9-8385-8df5ae139cf4.png)
1.0
Disable Bind application button in case no applications are available - **Description** - disable the bind application button in instance details when no applications are available - enrich the experience by explaining to the user why the button is disabled, how they can create a new application, and what `kind` of applications those are **Reasons** If you provision the first instance in a new env with no applications, you can click `Bind application` and in a modal try to select an existing application to bind to; the problem is there are no applications. This is confusing and not a user-friendly approach. **Attachments** ![screen shot 2019-01-29 at 09 49 34](https://user-images.githubusercontent.com/6995927/51895758-3c539080-23ab-11e9-8385-8df5ae139cf4.png)
non_process
disable bind application button in case no applications are available description disable the bind application button in instance details when no applications are available enrich the experience by explaining to the user why the button is disabled how they can create a new application and what kind of applications those are reasons if you provision the first instance in a new env with no applications you can click bind application and in a modal try to select an existing application to bind to the problem is there are no applications this is confusing and not a user friendly approach attachments
0
84,062
24,216,177,083
IssuesEvent
2022-09-26 06:59:26
sandboxie-plus/Sandboxie
https://api.github.com/repos/sandboxie-plus/Sandboxie
closed
[Plus 1.3.4] SBIE2205 Service not implemented: Manifest
fixed in next build Regression ToDo ASAP
### Describe what you noticed and did PreferExternalManifest=y |Message| SBIE2205 Service not implemented: Manifest1.txt SBIE2205 Service not implemented: Manifest2.txt Run sandboxed explorer or anything else ### How often did you encounter it so far? _No response_ ### Affected program . ### Download link . ### Where is the program located? Not relevant to my request. ### Expected behavior No SBIE2205 Service not implemented: Manifest messages ### What is your Windows edition and version? Windows 7 Ultimate SP1 x64 ### In which Windows account you have this problem? I use the built-in Administrator account. ### Please mention any installed security software None/WD disabled ### What version of Sandboxie are you running? Plus 1.3.4 x64 ### Is it a new installation of Sandboxie? I just updated Sandboxie from a previous version (to be specified). ### Is it a regression? Plus 1.3.3 x64 ### In which sandbox type you have this problem? In a Standard isolation sandbox (yellow sandbox icon). ### Can you reproduce this problem on an empty sandbox? I can confirm it also on an empty sandbox. ### Did you previously enable some security policy settings outside Sandboxie? _No response_ ### Crash dump _No response_ ### Trace log _No response_ ### Sandboxie.ini configuration ```shell Enabled=y BlockNetworkFiles=y RecoverFolder=%{374DE290-123F-4565-9164-39C4925E467B}% RecoverFolder=%Personal% RecoverFolder=%Desktop% BorderColor=#00FFFF,ttl Template=OpenBluetooth Template=SkipHook Template=FileCopy Template=qWave Template=BlockPorts Template=LingerPrograms Template=AutoRecoverIgnore ConfigLevel=9 PreferExternalManifest=y *Clean config with nothing in global and with just defaultbox + PreferExternalManifest ```
1.0
[Plus 1.3.4] SBIE2205 Service not implemented: Manifest - ### Describe what you noticed and did PreferExternalManifest=y |Message| SBIE2205 Service not implemented: Manifest1.txt SBIE2205 Service not implemented: Manifest2.txt Run sandboxed explorer or anything else ### How often did you encounter it so far? _No response_ ### Affected program . ### Download link . ### Where is the program located? Not relevant to my request. ### Expected behavior No SBIE2205 Service not implemented: Manifest messages ### What is your Windows edition and version? Windows 7 Ultimate SP1 x64 ### In which Windows account you have this problem? I use the built-in Administrator account. ### Please mention any installed security software None/WD disabled ### What version of Sandboxie are you running? Plus 1.3.4 x64 ### Is it a new installation of Sandboxie? I just updated Sandboxie from a previous version (to be specified). ### Is it a regression? Plus 1.3.3 x64 ### In which sandbox type you have this problem? In a Standard isolation sandbox (yellow sandbox icon). ### Can you reproduce this problem on an empty sandbox? I can confirm it also on an empty sandbox. ### Did you previously enable some security policy settings outside Sandboxie? _No response_ ### Crash dump _No response_ ### Trace log _No response_ ### Sandboxie.ini configuration ```shell Enabled=y BlockNetworkFiles=y RecoverFolder=%{374DE290-123F-4565-9164-39C4925E467B}% RecoverFolder=%Personal% RecoverFolder=%Desktop% BorderColor=#00FFFF,ttl Template=OpenBluetooth Template=SkipHook Template=FileCopy Template=qWave Template=BlockPorts Template=LingerPrograms Template=AutoRecoverIgnore ConfigLevel=9 PreferExternalManifest=y *Clean config with nothing in global and with just defaultbox + PreferExternalManifest ```
non_process
service not implemented manifest describe what you noticed and did preferexternalmanifest y message service not implemented txt service not implemented txt run sandboxed explorer or anything else how often did you encounter it so far no response affected program download link where is the program located not relevant to my request expected behavior no service not implemented manifest messages what is your windows edition and version windows ultimate in which windows account you have this problem i use the built in administrator account please mention any installed security software none wd disabled what version of sandboxie are you running plus is it a new installation of sandboxie i just updated sandboxie from a previous version to be specified is it a regression plus in which sandbox type you have this problem in a standard isolation sandbox yellow sandbox icon can you reproduce this problem on an empty sandbox i can confirm it also on an empty sandbox did you previously enable some security policy settings outside sandboxie no response crash dump no response trace log no response sandboxie ini configuration shell enabled y blocknetworkfiles y recoverfolder recoverfolder personal recoverfolder desktop bordercolor ttl template openbluetooth template skiphook template filecopy template qwave template blockports template lingerprograms template autorecoverignore configlevel preferexternalmanifest y clean config with nothing in global and with just defaultbox preferexternalmanifest
0
3,514
6,561,499,871
IssuesEvent
2017-09-07 13:33:08
openvstorage/alba
https://api.github.com/repos/openvstorage/alba
reopened
Add osd returns information about the added osd
process_wontfix type_feature
### Feature description Add json support for the add-osd which returns: - osd_id - osd_type Optionally introduce a new call `osd-info` which can return - osd_id: id of the osd - osd_type: ASD or AD - in_use_by: albabackend id Related https://github.com/openvstorage/alba/issues/748
1.0
Add osd returns information about the added osd - ### Feature description Add json support for the add-osd which returns: - osd_id - osd_type Optionally introduce a new call `osd-info` which can return - osd_id: id of the osd - osd_type: ASD or AD - in_use_by: albabackend id Related https://github.com/openvstorage/alba/issues/748
process
add osd returns information about the added osd feature description add json support for the add osd which returns osd id osd type optionally introduce a new call osd info which can return osd id id of the osd osd type asd or ad in use by albabackend id related
1
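A sketch, in Python, of what the proposed JSON payloads might look like; only the field names come from the issue text, while the shapes and example values are assumptions.

```python
import json

# Hypothetical `add-osd` JSON output: just the identifiers the issue asks for.
add_osd_result = {"osd_id": 17, "osd_type": "ASD"}

# Hypothetical `osd-info` JSON output, adding the owning backend.
osd_info_result = {"osd_id": 17, "osd_type": "ASD", "in_use_by": "mybackend"}

print(json.dumps(add_osd_result))
print(json.dumps(osd_info_result))
```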
46,012
7,229,025,941
IssuesEvent
2018-02-11 15:59:08
SWE-574-Spring-2018/Spring2018-SWE574
https://api.github.com/repos/SWE-574-Spring-2018/Spring2018-SWE574
opened
Wiki Page for Weekly Meeting Notes and Weekly Status Report
documentation high
A wiki page must be created in order to document weekly meeting notes and weekly status report
1.0
Wiki Page for Weekly Meeting Notes and Weekly Status Report - A wiki page must be created in order to document weekly meeting notes and weekly status report
non_process
wiki page for weekly meeting notes and weekly status report a wiki page must be created in order to document weekly meeting notes and weekly status report
0
56,598
11,611,952,554
IssuesEvent
2020-02-26 07:55:55
wazuh/wazuh-ruleset
https://api.github.com/repos/wazuh/wazuh-ruleset
closed
HELP WITH CUSTOM RULES
community decoders question rules
Hello, I would really need a hand formulating custom fortigate rules. The following files are the logs to help in the definition of the rules. [IPS.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185572/IPS.txt) [TRAFFIC.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185574/TRAFFIC.txt) [VIRUS.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185575/VIRUS.txt) [WEB FILTER.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185576/WEB.FILTER.txt) [ANOMAlY.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185577/ANOMAlY.txt) [EVENT.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185578/EVENT.txt)
1.0
HELP WITH CUSTOM RULES - Hello, I would really need a hand formulating custom fortigate rules. The following files are the logs to help in the definition of the rules. [IPS.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185572/IPS.txt) [TRAFFIC.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185574/TRAFFIC.txt) [VIRUS.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185575/VIRUS.txt) [WEB FILTER.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185576/WEB.FILTER.txt) [ANOMAlY.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185577/ANOMAlY.txt) [EVENT.txt](https://github.com/wazuh/wazuh-kibana-app/files/4185578/EVENT.txt)
non_process
help with custom rules hello i would really need a hand formulating custom fortigate rules the following files are the logs to help in the definition of the rules
0
34,379
16,540,196,003
IssuesEvent
2021-05-27 15:53:01
SciTools/iris
https://api.github.com/repos/SciTools/iris
closed
Load is VERY slow for a NetCDF multi-variable file
New: Issue Type: Performance
## 📰 Custom Issue When loading a single variable from quite a small NetCDF file which includes 300 variables, the load time is very large: around 100 seconds (while it is less than 0.1s for a similar single-variable file). This is a bottleneck for trying to use Iris (through [ESMValTool](https://github.com/ESMValGroup/ESMValTool)) for handling some climate model native data formats. The attached notebook [load_time_histmth.pdf](https://github.com/SciTools/iris/files/6477355/load_time_histmth.pdf) demonstrates the issue and includes profiling, which shows that the most time-consuming function is (by far) [NetCDFDataProxy.__getitem__](https://github.com/SciTools/iris/blob/83309c32a6c9cfbd4603732a4075bae0f6f07d45/lib/iris/fileformats/netcdf.py#L434) The data file is available [here](https://drive.google.com/file/d/1_M7Hno-FFnkfU88jVMNCHvzyPTZWsbQA/view?usp=sharing) System info is : > uname -a > Linux ciclad-ng.private.ipsl.fr 2.6.32-754.35.1.el6.x86_64 #1 SMP Wed Oct 7 03:47:54 CDT 2020 x86_64 x86_64 x86_64 GNU/Linux >
True
Load is VERY slow for a NetCDF multi-variable file - ## 📰 Custom Issue When loading a single variable from quite a small NetCDF file which includes 300 variables, the load time is very large: around 100 seconds (while it is less than 0.1s for a similar single-variable file). This is a bottleneck for trying to use Iris (through [ESMValTool](https://github.com/ESMValGroup/ESMValTool)) for handling some climate model native data formats. The attached notebook [load_time_histmth.pdf](https://github.com/SciTools/iris/files/6477355/load_time_histmth.pdf) demonstrates the issue and includes profiling, which shows that the most time-consuming function is (by far) [NetCDFDataProxy.__getitem__](https://github.com/SciTools/iris/blob/83309c32a6c9cfbd4603732a4075bae0f6f07d45/lib/iris/fileformats/netcdf.py#L434) The data file is available [here](https://drive.google.com/file/d/1_M7Hno-FFnkfU88jVMNCHvzyPTZWsbQA/view?usp=sharing) System info is : > uname -a > Linux ciclad-ng.private.ipsl.fr 2.6.32-754.35.1.el6.x86_64 #1 SMP Wed Oct 7 03:47:54 CDT 2020 x86_64 x86_64 x86_64 GNU/Linux >
non_process
load is very slow for a netcdf multi variable file 📰 custom issue when loading a single variable from quite a small netcdf file which includes variables the load time is very large around seconds while it is less than for a similar single variable file this is a bottleneck for trying to use iris through for handling some climate model native data formats the attached notebook demonstrates the issue and includes profiling which shows that the most time consuming function is by far the data file is available system info is uname a linux ciclad ng private ipsl fr smp wed oct cdt gnu linux
0
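A profiling sketch that reproduces the reported hotspot; the file path and variable name are placeholders for the 300-variable file linked above.

```python
import cProfile
import pstats

import iris

def load_one_variable():
    # File path and constraint name are placeholders for the
    # multi-variable NetCDF file described in the record.
    return iris.load_cube("histmth.nc", "some_variable_name")

cProfile.run("load_one_variable()", "load.prof")
stats = pstats.Stats("load.prof")
# NetCDFDataProxy.__getitem__ should dominate the cumulative column.
stats.sort_stats("cumulative").print_stats(10)
```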
6,431
9,532,307,588
IssuesEvent
2019-04-29 18:15:12
meumobi/sitebuilder
https://api.github.com/repos/meumobi/sitebuilder
closed
Media served by a 30x http response code get wrong Content-Type in GenericMediaHandler
duplicate fix process-remote-media
When remote media links are processed by the `ProcessRemoteMedia/GenericMediaHandler.php` service, it returns the wrong content type for shortened links (goo.gl, bit.ly, etc.). Instead of getting the content type of the original (redirected-to) file, it gets text/plain or text/html. It probably needs to follow the redirect Location header when the response http status code is 301 or similar, but it must be careful not to enter too many redirect levels.
1.0
Media served by a 30x http response code get wrong Content-Type in GenericMediaHandler - When remote media links are processed by the `ProcessRemoteMedia/GenericMediaHandler.php` service, it returns the wrong content type for shortened links (goo.gl, bit.ly, etc.). Instead of getting the content type of the original (redirected-to) file, it gets text/plain or text/html. It probably needs to follow the redirect Location header when the response http status code is 301 or similar, but it must be careful not to enter too many redirect levels.
process
media served by a http response code get wrong content type in genericmediahandler when remote media links are processed by the processremotemedia genericmediahandler php service it returns the wrong content type for shortened links goo gl bit ly etc instead of getting the content type of the original redirected to file it gets text plain or text html it probably needs to follow the redirect location header when the response http status code is or similar but it must be careful not to enter too many redirect levels
1
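A hedged Python sketch of the proposed fix: follow 30x redirects with a bounded depth before reading Content-Type. The helper name and limit are assumptions; `requests` raises `TooManyRedirects` once the cap is exceeded.

```python
import requests

def resolve_content_type(url, max_redirects=5):
    """Follow 30x redirects (bounded) so shortened links such as goo.gl
    or bit.ly report the Content-Type of the final target, not text/html."""
    session = requests.Session()
    session.max_redirects = max_redirects  # guard against redirect loops
    # HEAD avoids downloading the body; allow_redirects defaults to
    # False for HEAD, so it must be enabled explicitly.
    response = session.head(url, allow_redirects=True, timeout=10)
    return response.headers.get("Content-Type")

print(resolve_content_type("https://bit.ly/example"))  # placeholder URL
```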
16,483
21,443,122,618
IssuesEvent
2022-04-25 01:12:56
huutho77/CNPMNC_ThayAi
https://api.github.com/repos/huutho77/CNPMNC_ThayAi
closed
[Browser UI] Coding Home page
processing dev/quocky2211 dev/haichao784 dev/phamtan
- Navigation bar area - Submenus - Display the product list - Pagination, loading 10 or 20 products at a time
1.0
[Browser UI] Coding Home page - - Navigation bar area - Submenus - Display the product list - Pagination, loading 10 or 20 products at a time
process
coding home page navigation bar area submenus display the product list pagination loading or products at a time
1
167,166
20,725,873,950
IssuesEvent
2022-03-14 01:44:48
vlaship/websocket-stomp
https://api.github.com/repos/vlaship/websocket-stomp
opened
CVE-2020-36518 (Medium) detected in jackson-databind-2.8.11.3.jar
security vulnerability
## CVE-2020-36518 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/ws/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-websocket-1.5.22.RELEASE.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects. WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518. <p>Publish Date: 2022-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36518>CVE-2020-36518</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-36518">https://nvd.nist.gov/vuln/detail/CVE-2020-36518</a></p> <p>Release Date: 2022-03-11</p> <p>Fix Resolution: jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-36518 (Medium) detected in jackson-databind-2.8.11.3.jar - ## CVE-2020-36518 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/ws/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-websocket-1.5.22.RELEASE.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects. WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518. <p>Publish Date: 2022-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36518>CVE-2020-36518</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-36518">https://nvd.nist.gov/vuln/detail/CVE-2020-36518</a></p> <p>Release Date: 2022-03-11</p> <p>Fix Resolution: jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm ws build gradle path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter websocket release jar root library spring boot starter web release jar x jackson databind jar vulnerable library vulnerability details jackson databind before allows a java stackoverflow exception and denial of service via a large depth of nested objects whitesource note after conducting further research whitesource has determined that all versions of com fasterxml jackson core jackson databind up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
77,167
3,506,267,925
IssuesEvent
2016-01-08 05:08:17
OregonCore/OregonCore
https://api.github.com/repos/OregonCore/OregonCore
closed
DND (BB #228)
migrated Priority: Medium Type: Bug
This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 21.07.2010 04:29:29 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** invalid **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/228 <hr> When a player is set to the DND mark, other players should not be able to send any whisper to the DND player, but here players can still send whispers to DND players!
1.0
DND (BB #228) - This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 21.07.2010 04:29:29 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** invalid **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/228 <hr> When a player is set to the DND mark, other players should not be able to send any whisper to the DND player, but here players can still send whispers to DND players!
non_process
dnd bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link when a player is set to the dnd mark other players should not be able to send any whisper to the dnd player but here players can still send whispers to dnd players
0
389,561
11,503,909,006
IssuesEvent
2020-02-12 22:04:02
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
m.clien.net - GIF Image (or converted mp4 video) cannot play
browser-fenix engine-gecko priority-normal
<!-- @browser: Firefox Mobile 74.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:74.0) Gecko/74.0 Firefox/74.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://m.clien.net/service/board/park/14574106?od=T31 **Browser / Version**: Firefox Mobile 74.0 **Operating System**: Android **Tested Another Browser**: Yes **Problem type**: Video or audio doesn't play **Description**: GIF Image (or converted mp4 video) cannot play **Steps to Reproduce**: Just click gif image to try play gif or video that converted from gif. It doesn't work in Fenix only. Old version of Firefox Mobile(Fennec), Desktop Firefox (Quantum), Chrome Mobile: Works correctly. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
m.clien.net - GIF Image (or converted mp4 video) cannot play - <!-- @browser: Firefox Mobile 74.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:74.0) Gecko/74.0 Firefox/74.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://m.clien.net/service/board/park/14574106?od=T31 **Browser / Version**: Firefox Mobile 74.0 **Operating System**: Android **Tested Another Browser**: Yes **Problem type**: Video or audio doesn't play **Description**: GIF Image (or converted mp4 video) cannot play **Steps to Reproduce**: Just click gif image to try play gif or video that converted from gif. It doesn't work in Fenix only. Old version of Firefox Mobile(Fennec), Desktop Firefox (Quantum), Chrome Mobile: Works correctly. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
m clien net gif image or converted video cannot play url browser version firefox mobile operating system android tested another browser yes problem type video or audio doesn t play description gif image or converted video cannot play steps to reproduce just click gif image to try play gif or video that converted from gif it doesn t work in fenix only old version of firefox mobile fennec desktop firefox quantum chrome mobile works correctly browser configuration none from with ❤️
0
408,089
11,941,503,242
IssuesEvent
2020-04-02 18:33:47
fgpv-vpgf/contributed-plugins
https://api.github.com/repos/fgpv-vpgf/contributed-plugins
opened
Turn on and off dataset visibility
enhancement plugin-chart priority - medium
When you have a graph with many datasets, it would be great if we could set the visibility of each dataset. When visibility is modified, the graph should not just remove the dataset but should resample all the values.
1.0
Turn on and off dataset visibility - When you have a graph with many datasets, it would be great if we could set the visibility of each dataset. When visibility is modified, the graph should not just remove the dataset but should resample all the values.
non_process
turn on and off dataset visibility when you have a graph with many datasets it would be great if we could set the visibility of each dataset when visibility is modified the graph should not just remove the dataset but should resample all the values
0
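A rough Python sketch of the requested behaviour, recomputing a resampled series over only the visible datasets rather than merely hiding one; the summing-and-averaging aggregation scheme is an assumption.

```python
def resample_visible(datasets, bucket_size):
    """Aggregate only the datasets whose visibility flag is on, then
    re-bucket the combined values instead of merely hiding a series."""
    visible = [d["values"] for d in datasets if d.get("visible", True)]
    combined = [sum(col) for col in zip(*visible)] if visible else []
    return [
        sum(combined[i:i + bucket_size]) / max(len(combined[i:i + bucket_size]), 1)
        for i in range(0, len(combined), bucket_size)
    ]

datasets = [
    {"values": [1, 2, 3, 4], "visible": True},
    {"values": [10, 20, 30, 40], "visible": False},
]
print(resample_visible(datasets, 2))  # -> [1.5, 3.5]
```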
182,615
21,673,909,940
IssuesEvent
2022-05-08 12:03:19
turkdevops/electron-api-demos
https://api.github.com/repos/turkdevops/electron-api-demos
closed
CVE-2021-23807 (High) detected in jsonpointer-4.0.1.tgz - autoclosed
security vulnerability
## CVE-2021-23807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonpointer-4.0.1.tgz</b></p></summary> <p>Simple JSON Addressing.</p> <p>Library home page: <a href="https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz">https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jsonpointer/package.json</p> <p> Dependency Hierarchy: - standard-8.6.0.tgz (Root Library) - eslint-3.10.2.tgz - is-my-json-valid-2.19.0.tgz - :x: **jsonpointer-4.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/electron-api-demos/commit/8b3c67fde2016f47e681b745f49afdea23a50ed4">8b3c67fde2016f47e681b745f49afdea23a50ed4</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package jsonpointer before 5.0.0. A type confusion vulnerability can lead to a bypass of a previous Prototype Pollution fix when the pointer components are arrays. <p>Publish Date: 2021-11-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23807>CVE-2021-23807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807</a></p> <p>Release Date: 2021-11-03</p> <p>Fix Resolution (jsonpointer): 5.0.0</p> <p>Direct dependency fix Resolution (standard): 9.0.0-beta.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23807 (High) detected in jsonpointer-4.0.1.tgz - autoclosed - ## CVE-2021-23807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonpointer-4.0.1.tgz</b></p></summary> <p>Simple JSON Addressing.</p> <p>Library home page: <a href="https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz">https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/jsonpointer/package.json</p> <p> Dependency Hierarchy: - standard-8.6.0.tgz (Root Library) - eslint-3.10.2.tgz - is-my-json-valid-2.19.0.tgz - :x: **jsonpointer-4.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/electron-api-demos/commit/8b3c67fde2016f47e681b745f49afdea23a50ed4">8b3c67fde2016f47e681b745f49afdea23a50ed4</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package jsonpointer before 5.0.0. A type confusion vulnerability can lead to a bypass of a previous Prototype Pollution fix when the pointer components are arrays. <p>Publish Date: 2021-11-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23807>CVE-2021-23807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807</a></p> <p>Release Date: 2021-11-03</p> <p>Fix Resolution (jsonpointer): 5.0.0</p> <p>Direct dependency fix Resolution (standard): 9.0.0-beta.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in jsonpointer tgz autoclosed cve high severity vulnerability vulnerable library jsonpointer tgz simple json addressing library home page a href path to dependency file package json path to vulnerable library node modules jsonpointer package json dependency hierarchy standard tgz root library eslint tgz is my json valid tgz x jsonpointer tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package jsonpointer before a type confusion vulnerability can lead to a bypass of a previous prototype pollution fix when the pointer components are arrays publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsonpointer direct dependency fix resolution standard beta step up your open source security game with whitesource
0
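To make the vulnerability description in the record above concrete, the following TypeScript sketch illustrates the reported type-confusion bypass. It is an illustration under stated assumptions, not an authoritative proof of concept: the array-of-components pointer form and the exact shape of the pre-5.0.0 guard are inferred from the advisory text, and it should only ever be run in a disposable environment.

```typescript
// Sketch of the reported CVE-2021-23807 bypass; assumes jsonpointer@4.x.
// Run only in a throwaway environment -- it pollutes Object.prototype.
const jsonpointer = require("jsonpointer");

const victim: Record<string, unknown> = {};

// The pre-5.0.0 guard rejected pointer components strictly equal to the
// string "__proto__". A component wrapped in a one-element array passes
// that check (["__proto__"] !== "__proto__"), but JavaScript coerces the
// array back to the string "__proto__" when it is used as a property key,
// so the write lands on Object.prototype instead of on `victim`.
jsonpointer.set(victim, [["__proto__"], "polluted"], true);

// Vulnerable versions print `true` here, because every plain object now
// inherits the polluted key; jsonpointer >= 5.0.0 rejects the pointer.
console.log(({} as Record<string, unknown>).polluted);
```

The remediation remains the upgrade listed in the record itself: jsonpointer 5.0.0, or standard 9.0.0-beta.0 for the direct dependency.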
17,224
22,834,390,343
IssuesEvent
2022-07-12 15:23:57
prometheus-community/windows_exporter
https://api.github.com/repos/prometheus-community/windows_exporter
closed
Expose process CLI arguments?
question collector/process
Is there already capability to expose process cli arguments? Looking into this as a way to add uniqueness to processes that run from same executable. process_id changes on every execution so isn't ideal for my purposes.
1.0
Expose process cli arguments? - Is there already capability to expose process cli arguments? Looking into this as a way to add uniqueness to processes that run from same executable. process_id changes on every execution so isn't ideal for my purposes.
process
expose process cli arguments is there already capability to expose process cli arguments looking into this as a way to add uniqueness to processes that run from same executable process id changes on every execution so isn t ideal for my purposes
1
123,618
12,214,057,449
IssuesEvent
2020-05-01 08:52:45
kai-tub/latex-beamer-pure-minimalistic
https://api.github.com/repos/kai-tub/latex-beamer-pure-minimalistic
closed
Add contributer note and code of conduct
documentation
I am trying to make a serious OpenSource project and therefore these should be included. (Even if the chances of contributions are quite low 🙃 )
1.0
Add contributer note and code of conduct - I am trying to make a serious OpenSource project and therefore these should be included. (Even if the chances of contributions are quite low 🙃 )
non_process
add contributer note and code of conduct i am trying to make a serious opensource project and therefore these should be included even if the chances of contributions are quite low 🙃
0
9,997
13,041,806,441
IssuesEvent
2020-07-28 21:05:50
googleapis/python-bigquery
https://api.github.com/repos/googleapis/python-bigquery
closed
Unit tests should not rely on systest environ variables
api: bigquery testing type: process
```bash $ env | grep GOOGLE && echo YES || echo NO NO $ nox -e unit-2.7 nox > Running session unit-2.7 nox > Creating virtual environment (virtualenv) using python2.7 in .nox/unit-2-7 nox > pip install mock pytest google-cloud-testutils pytest-cov freezegun nox > pip install grpcio nox > pip install -e .[all,fastparquet] nox > pip install ipython==5.5 nox > py.test --quiet --cov=google.cloud.bigquery --cov=tests.unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ........................................................................ [ 10%] ........................................................................ [ 15%] ..............F...................F..................................... [ 20%] ........................................................................ [ 25%] ........................................................................ [ 30%] .......................ss............................................... [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] .......................................................F................ [ 70%] ........................................................................ [ 75%] ........................................................................ [ 80%] ....................ss.................................................. [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ..................................................................... [100%] =================================== FAILURES =================================== __________ TestClient.test__call_api_applying_custom_retry_on_timeout __________ self = <tests.unit.test_client.TestClient testMethod=test__call_api_applying_custom_retry_on_timeout> def test__call_api_applying_custom_retry_on_timeout(self): from concurrent.futures import TimeoutError from google.cloud.bigquery.retry import DEFAULT_RETRY > client = self._make_one() tests/unit/test_client.py:224: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/unit/test_client.py:108: in _make_one return self._get_target_class()(*args, **kw) google/cloud/bigquery/client.py:179: in __init__ project=project, credentials=credentials, _http=_http .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:226: in __init__ _ClientProjectMixin.__init__(self, project=project) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery.client.Client object at 0x7f3c1826b590> project = None def __init__(self, project=None): project = self._determine_default(project) if project is None: raise EnvironmentError( > "Project was not passed and could not be " "determined from the environment." ) E EnvironmentError: Project was not passed and could not be determined from the environment. 
.nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:181: EnvironmentError ------------------------------ Captured log call ------------------------------- WARNING google.auth._default:_default.py:334 No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable __________ TestClient.test_create_bqstorage_client_missing_dependency __________ self = <tests.unit.test_client.TestClient testMethod=test_create_bqstorage_client_missing_dependency> def test_create_bqstorage_client_missing_dependency(self): > client = self._make_one() tests/unit/test_client.py:677: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/unit/test_client.py:108: in _make_one return self._get_target_class()(*args, **kw) google/cloud/bigquery/client.py:179: in __init__ project=project, credentials=credentials, _http=_http .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:226: in __init__ _ClientProjectMixin.__init__(self, project=project) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery.client.Client object at 0x7f3c18278c50> project = None def __init__(self, project=None): project = self._determine_default(project) if project is None: raise EnvironmentError( > "Project was not passed and could not be " "determined from the environment." ) E EnvironmentError: Project was not passed and could not be determined from the environment. .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:181: EnvironmentError ------------------------------ Captured log call ------------------------------- WARNING google.auth._default:_default.py:334 No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable _____________________ test_bigquery_magic_w_missing_query ______________________ def test_bigquery_magic_w_missing_query(): ip = IPython.get_ipython() ip.extension_manager.load_extension("google.cloud.bigquery") magics.context._project = None cell_body = " \n \n \t\t \n " with io.capture_output() as captured_io: > ip.run_cell_magic("bigquery", "df", cell_body) tests/unit/test_magics.py:778: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .nox/unit-2-7/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2117: in run_cell_magic result = fn(magic_arg_s, cell) google/cloud/bigquery/magics.py:503: in _cell_magic client_info=client_info.ClientInfo(user_agent=IPYTHON_USER_AGENT), google/cloud/bigquery/client.py:179: in __init__ project=project, credentials=credentials, _http=_http .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:226: in __init__ _ClientProjectMixin.__init__(self, project=project) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery.client.Client object at 0x7f3c17f97c90> project = None def __init__(self, project=None): project = self._determine_default(project) if project is None: raise EnvironmentError( > "Project was not passed and could not be " "determined from the environment." ) E EnvironmentError: Project was not passed and could not be determined from the environment. .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:181: EnvironmentError ```
1.0
Unit tests should not rely on systest environ variables - ```bash $ env | grep GOOGLE && echo YES || echo NO NO $ nox -e unit-2.7 nox > Running session unit-2.7 nox > Creating virtual environment (virtualenv) using python2.7 in .nox/unit-2-7 nox > pip install mock pytest google-cloud-testutils pytest-cov freezegun nox > pip install grpcio nox > pip install -e .[all,fastparquet] nox > pip install ipython==5.5 nox > py.test --quiet --cov=google.cloud.bigquery --cov=tests.unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 5%] ........................................................................ [ 10%] ........................................................................ [ 15%] ..............F...................F..................................... [ 20%] ........................................................................ [ 25%] ........................................................................ [ 30%] .......................ss............................................... [ 35%] ........................................................................ [ 40%] ........................................................................ [ 45%] ........................................................................ [ 50%] ........................................................................ [ 55%] ........................................................................ [ 60%] ........................................................................ [ 65%] .......................................................F................ [ 70%] ........................................................................ [ 75%] ........................................................................ [ 80%] ....................ss.................................................. [ 85%] ........................................................................ [ 90%] ........................................................................ [ 95%] ..................................................................... [100%] =================================== FAILURES =================================== __________ TestClient.test__call_api_applying_custom_retry_on_timeout __________ self = <tests.unit.test_client.TestClient testMethod=test__call_api_applying_custom_retry_on_timeout> def test__call_api_applying_custom_retry_on_timeout(self): from concurrent.futures import TimeoutError from google.cloud.bigquery.retry import DEFAULT_RETRY > client = self._make_one() tests/unit/test_client.py:224: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/unit/test_client.py:108: in _make_one return self._get_target_class()(*args, **kw) google/cloud/bigquery/client.py:179: in __init__ project=project, credentials=credentials, _http=_http .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:226: in __init__ _ClientProjectMixin.__init__(self, project=project) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery.client.Client object at 0x7f3c1826b590> project = None def __init__(self, project=None): project = self._determine_default(project) if project is None: raise EnvironmentError( > "Project was not passed and could not be " "determined from the environment." ) E EnvironmentError: Project was not passed and could not be determined from the environment. 
.nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:181: EnvironmentError ------------------------------ Captured log call ------------------------------- WARNING google.auth._default:_default.py:334 No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable __________ TestClient.test_create_bqstorage_client_missing_dependency __________ self = <tests.unit.test_client.TestClient testMethod=test_create_bqstorage_client_missing_dependency> def test_create_bqstorage_client_missing_dependency(self): > client = self._make_one() tests/unit/test_client.py:677: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/unit/test_client.py:108: in _make_one return self._get_target_class()(*args, **kw) google/cloud/bigquery/client.py:179: in __init__ project=project, credentials=credentials, _http=_http .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:226: in __init__ _ClientProjectMixin.__init__(self, project=project) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery.client.Client object at 0x7f3c18278c50> project = None def __init__(self, project=None): project = self._determine_default(project) if project is None: raise EnvironmentError( > "Project was not passed and could not be " "determined from the environment." ) E EnvironmentError: Project was not passed and could not be determined from the environment. .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:181: EnvironmentError ------------------------------ Captured log call ------------------------------- WARNING google.auth._default:_default.py:334 No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable _____________________ test_bigquery_magic_w_missing_query ______________________ def test_bigquery_magic_w_missing_query(): ip = IPython.get_ipython() ip.extension_manager.load_extension("google.cloud.bigquery") magics.context._project = None cell_body = " \n \n \t\t \n " with io.capture_output() as captured_io: > ip.run_cell_magic("bigquery", "df", cell_body) tests/unit/test_magics.py:778: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .nox/unit-2-7/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2117: in run_cell_magic result = fn(magic_arg_s, cell) google/cloud/bigquery/magics.py:503: in _cell_magic client_info=client_info.ClientInfo(user_agent=IPYTHON_USER_AGENT), google/cloud/bigquery/client.py:179: in __init__ project=project, credentials=credentials, _http=_http .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:226: in __init__ _ClientProjectMixin.__init__(self, project=project) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery.client.Client object at 0x7f3c17f97c90> project = None def __init__(self, project=None): project = self._determine_default(project) if project is None: raise EnvironmentError( > "Project was not passed and could not be " "determined from the environment." ) E EnvironmentError: Project was not passed and could not be determined from the environment. .nox/unit-2-7/lib/python2.7/site-packages/google/cloud/client.py:181: EnvironmentError ```
process
unit tests should not rely on systest environ variables bash env grep google echo yes echo no no nox e unit nox running session unit nox creating virtual environment virtualenv using in nox unit nox pip install mock pytest google cloud testutils pytest cov freezegun nox pip install grpcio nox pip install e nox pip install ipython nox py test quiet cov google cloud bigquery cov tests unit cov append cov config coveragerc cov report cov fail under tests unit f f ss f ss failures testclient test call api applying custom retry on timeout self def test call api applying custom retry on timeout self from concurrent futures import timeouterror from google cloud bigquery retry import default retry client self make one tests unit test client py tests unit test client py in make one return self get target class args kw google cloud bigquery client py in init project project credentials credentials http http nox unit lib site packages google cloud client py in init clientprojectmixin init self project project self project none def init self project none project self determine default project if project is none raise environmenterror project was not passed and could not be determined from the environment e environmenterror project was not passed and could not be determined from the environment nox unit lib site packages google cloud client py environmenterror captured log call warning google auth default default py no project id could be determined consider running gcloud config set project or setting the google cloud project environment variable testclient test create bqstorage client missing dependency self def test create bqstorage client missing dependency self client self make one tests unit test client py tests unit test client py in make one return self get target class args kw google cloud bigquery client py in init project project credentials credentials http http nox unit lib site packages google cloud client py in init clientprojectmixin init self project project self project none def init self project none project self determine default project if project is none raise environmenterror project was not passed and could not be determined from the environment e environmenterror project was not passed and could not be determined from the environment nox unit lib site packages google cloud client py environmenterror captured log call warning google auth default default py no project id could be determined consider running gcloud config set project or setting the google cloud project environment variable test bigquery magic w missing query def test bigquery magic w missing query ip ipython get ipython ip extension manager load extension google cloud bigquery magics context project none cell body n n t t n with io capture output as captured io ip run cell magic bigquery df cell body tests unit test magics py nox unit lib site packages ipython core interactiveshell py in run cell magic result fn magic arg s cell google cloud bigquery magics py in cell magic client info client info clientinfo user agent ipython user agent google cloud bigquery client py in init project project credentials credentials http http nox unit lib site packages google cloud client py in init clientprojectmixin init self project project self project none def init self project none project self determine default project if project is none raise environmenterror project was not passed and could not be determined from the environment e environmenterror project was not passed and could not be determined from the environment nox 
unit lib site packages google cloud client py environmenterror
1
11,783
14,616,163,724
IssuesEvent
2020-12-22 12:48:21
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Mapped transactions should not be grouped together
bug/2-confirmed kind/bug process/candidate team/client topic: transaction
<!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description If you do something like: ```typescript await Promise.all( calendars.map(calendar => ctx.prisma.$transaction<any>([...]) )) ``` It will actually batch all the operations of the different transactions together. It doesn't seem to matter if I add an await on each transaction request. ## How to reproduce 1. Take any schema 2. Create an array of values to operation on 3. Map that array to a a transaction with one or more operations 4. Check prisma logs, you will see one `BEGIN` and one `COMMIT` ## Expected behavior Each transaction should be independant. ## Prisma information ``` @prisma/cli : 2.13.0 @prisma/client : 2.13.0 Current platform : darwin Query Engine : query-engine 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/query-engine-darwin) Migration Engine : migration-engine-cli 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/migration-engine-darwin) Introspection Engine : introspection-core 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/introspection-engine-darwin) Format Binary : prisma-fmt 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/prisma-fmt-darwin) Studio : 0.329.0 ```
1.0
Mapped transactions should not be grouped together - <!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description If you do something like: ```typescript await Promise.all( calendars.map(calendar => ctx.prisma.$transaction<any>([...]) )) ``` It will actually batch all the operations of the different transactions together. It doesn't seem to matter if I add an await on each transaction request. ## How to reproduce 1. Take any schema 2. Create an array of values to operation on 3. Map that array to a a transaction with one or more operations 4. Check prisma logs, you will see one `BEGIN` and one `COMMIT` ## Expected behavior Each transaction should be independant. ## Prisma information ``` @prisma/cli : 2.13.0 @prisma/client : 2.13.0 Current platform : darwin Query Engine : query-engine 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/query-engine-darwin) Migration Engine : migration-engine-cli 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/migration-engine-darwin) Introspection Engine : introspection-core 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/introspection-engine-darwin) Format Binary : prisma-fmt 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at ../node_modules/@prisma/engines/prisma-fmt-darwin) Studio : 0.329.0 ```
process
mapped transactions should not be grouped together thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description if you do something like typescript await promise all calendars map calendar ctx prisma transaction it will actually batch all the operations of the different transactions together it doesn t seem to matter if i add an await on each transaction request how to reproduce take any schema create an array of values to operation on map that array to a a transaction with one or more operations check prisma logs you will see one begin and one commit expected behavior each transaction should be independant prisma information prisma cli prisma client current platform darwin query engine query engine at node modules prisma engines query engine darwin migration engine migration engine cli at node modules prisma engines migration engine darwin introspection engine introspection core at node modules prisma engines introspection engine darwin format binary prisma fmt at node modules prisma engines prisma fmt darwin studio
1
19,412
25,556,453,984
IssuesEvent
2022-11-30 07:11:32
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Wed, 30 Nov 22
event camera white balance compression image signal processing image signal process raw raw image events camera color contrast AWBISP
## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWBISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Post-training Quantization on Diffusion Models - **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736 - **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. It prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of diffusion model (DM) via finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with time-step, making previous PTQ methods fail in DMs since they are designed for single-time step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM. ### Compressing Cross-Lingual Multi-Task Models at Qualtrics - **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak - **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15927 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15927 - **Abstract** Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover issues they care about, and find which actions need to be taken on data that are different in content and distribution from traditional NLP domains. In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. 
In order to scale up modern ML methods on experience data, we leverage cross lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to the level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBert architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management. ### Maximal Atomic irRedundant Sets: a Usage-based Dataflow Partitioning Algorithm - **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye - **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15933 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15933 - **Abstract** Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication. Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation. ### Trustless unknown-order groups - **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE) - **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128 - **Abstract** Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. 
First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that in practice, Jacobians of hyperelliptic curves are more efficient in practice than ideal class groups at the same security level -- both in the group operation and in the size of the element representation. ### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention - **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang - **Subjects:** Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16368 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16368 - **Abstract** Many studies have been conducted to improve the efficiency of Transformer from quadric to linear. Among them, the low-rank-based methods aim to learn the projection matrices to compress the sequence length. However, the projection matrices are fixed once they have been learned, which compress sequence length with dedicated coefficients for tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of hidden state dimension. To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines while maintaining less memory consumption with higher speed. ### Compressing Volumetric Radiance Fields to 1 MB - **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16386 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16386 - **Abstract** Approximating radiance fields with volumetric grids is one of promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. 
However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene. We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100$\times$ by reducing the overall model size to 1 MB with negligible loss on visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and well generalization across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance fields methods in real-world applications. Code Available at \url{https://github.com/AlgoHunt/VQRF} ## Keyword: RAW ### Learning Visual Planning Models from Partially Observed Images - **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666 - **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, \aType{Recplan}, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations. ### Deep Semi-supervised Learning with Double-Contrast of Features and Semantics - **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou - **Subjects:** Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15671 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15671 - **Abstract** In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, which is mainly due to the large amount of available annotation data. However, obtaining these annotated data has to afford expensive costs in reality. 
Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and the two-stage learning methods of decoupling feature extraction and classification have been proven effective. Nevertheless, representation learning only limited to semantic consistency regularization may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of the two-stage learning methods, the extracted features may not match the specific downstream tasks. In order to deal with the above drawbacks, this paper proposes an end-to-end deep semi-supervised learning double contrast of semantic and feature, which extracts effective tasks specific discriminative features by contrasting the semantics/features of positive and negative augmented samples pairs. Moreover, we leverage information theory to explain the rationality of double contrast of semantics and features and slack mutual information to contrastive loss in a simpler way. Finally, the effectiveness of our method is verified in benchmark datasets. ### Superpoint Transformer for 3D Scene Instance Segmentation - **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15766 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15766 - **Abstract** Most existing methods realize 3D instance segmentation by extending those models used for 3D object detection or 3D semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: 1) Imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework. 2) Existing method requires a time-consuming intermediate step of aggregation. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on Superpoint Transformer, named as SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that can capture the instance information through the superpoint cross-attention mechanism and generate the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can implement the network training without the intermediate aggregation step, which accelerates the network. Extensive experiments on ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds compared state-of-the-art methods by 4.3% on ScanNetv2 hidden test set in terms of mAP and keeps fast inference speed (247ms per frame) simultaneously. Code is available at https://github.com/sunjiahao1999/SPFormer. ### ClueWeb22: 10 Billion Web Documents with Rich Information - **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan - **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15848 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15848 - **Abstract** ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information. 
Its design was influenced by the need for a high quality, large scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher-quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry. Many of these signals have been widely used in industry but are available to the research community for the first time at this scale. ### Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss - **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar - **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16047 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16047 - **Abstract** We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts; and a low-level neural layer for extracting symbols required to generate the symbolic explanation. Real data is often imperfect meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols, each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive; and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature-values from raw data, using `abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. We show also that prior information about the features in the form of even imperfect pre-training can help correct this situation. These findings are replicated on the original problem considered by NEUROLOG, without the use of feature-delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by `feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback. 
### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning - **Authors:** Guoxi Zhang, Hisashi Kashima - **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16078 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16078 - **Abstract** Offline reinforcement learning (RL) have received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, neglecting data heterogeneity, existing approaches for behavior estimation suffers from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as behavior policy the policy that best describes a particular trajectory. This model provides with a agent fine-grained characterization for multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model and illustrates its practical usage via extending an existing offline RL algorithm. Lastly, with extensive evaluation this work confirms the existence of behavior misspecification and the efficacy of the proposed model. ### Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases - **Authors:** O. Mryglod, S. Nazarovets, S. Kozmenko - **Subjects:** Digital Libraries (cs.DL) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16124 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16124 - **Abstract** This paper presents the results of further exploration of Crossref data related to Ukrainian Economics research (the first part can be found in [Mryglod, O., Nazarovets, S. & Kozmenko, S. (2021) Scientometrics, 126, 8187]). Our purpose is to supplement the quantitative portrait of Ukrainian Economics discipline with the results of gender and author ordering analysis at the level of individual authors, special methods of working with bibliographic data with a predominant share of non-English authors are used. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, as well as the arrangements of names are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of Economics is constructed using Crossref metadata. The described stages for working with such specific data help to work at the level of authors and analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian Economics. The tendencies towards collaborative or solo-publications and gender mixing patterns are found to be dependent on the journal: the differences for publications indexed in Scopus and/or Web of Science databases are found. It has also been found that Ukrainian Economics research is characterized by rather a non-alphabetical order of authors. To our knowledge, this is the first large-scale quantitative study of Ukrainian Economic discipline. 
The results obtained are valuable not only at the national level, but also contribute to general knowledge about Economic research, gender issues and authors' names ordering. Here, for the first time, attention is drawn to the explicit use of the features of the Slavic authors' names. ### Trustless unknown-order groups - **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE) - **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128 - **Abstract** Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that in practice, Jacobians of hyperelliptic curves are more efficient in practice than ideal class groups at the same security level -- both in the group operation and in the size of the element representation. ### AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices - **Authors:** Sicong Liu (Northwestern Polytechnical University, China), Xiaochen Li (Northwestern Polytechnical University, China), Zimu Zhou (City University of Hong Kong, China), Bin Guo (Northwestern Polytechnical University, China), Meng Zhang (Northwestern Polytechnical University, China), Haochen Shen (Northwestern Polytechnical University, China), Zhiwen Yu (Northwestern Polytechnical University, China) - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16135 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16135 - **Abstract** The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. 
Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and and ignorance of system dynamics like energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform-imposed dynamic energy budgets. We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions. ### Few-shot Query-Focused Summarization with Prefix-Merging - **Authors:** Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li - **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16164 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16164 - **Abstract** Query-focused summarization has been considered as an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawn inspiration from prefix-tuning, we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works. ### DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model - **Authors:** Gwanghyun Kim, Se Young Chun - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16374 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16374 - **Abstract** Recent 3D generative models have achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance on converting the 2D generative model on one domain into the models on other domains with different styles by leveraging the CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback of them is that the sample diversity in the original generative model is not well-preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation will be even more challenging for 3D generative models not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality. 
Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain. Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline was able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high resolution, multi-view consistent images in text-guided targeted domains without additional data, outperforming the existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction to fully enjoy diversity in text. ### Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations - **Authors:** Marissa D'Alonzo, Rebecca Russell - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16381 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16381 - **Abstract** Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy. ### Abstract Visual Reasoning with Tangram Shapes - **Authors:** Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi - **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16492 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16492 - **Abstract** We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. 
KiloGram is available at https://lil.nlp.cornell.edu/kilogram.

## Keyword: raw image

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We show empirically that our approach is more effective than a state-of-the-art approach to learning visual planning models in environments with incomplete observations.
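The Recplan framework above is described only at a high level. Purely as an illustration of the general recipe of encoding raw observations into latent states and fitting a transition model over them, here is a minimal sketch; all layer sizes, module names, and the MSE training signal are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LatentTransitionModel(nn.Module):
    """Toy encoder + latent transition model for image traces (illustrative)."""

    def __init__(self, latent_dim: int = 32, n_actions: int = 4):
        super().__init__()
        # Encoder: 64x64 grayscale observation -> latent state z
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Transition: (z_t, one-hot action) -> predicted z_{t+1}
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + n_actions, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs_t, action_onehot, obs_next):
        z_t = self.encoder(obs_t)
        z_next = self.encoder(obs_next)
        z_pred = self.transition(torch.cat([z_t, action_onehot], dim=-1))
        # Train by matching the predicted successor latent to the encoded one
        return nn.functional.mse_loss(z_pred, z_next)
```

A planner can then search over latent states by repeatedly applying the transition network to candidate actions.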
2.0
New submissions for Wed, 30 Nov 22

## Keyword: event camera

There is no result

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWBISP

There is no result

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### Post-training Quantization on Diffusion Models
- **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736
- **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with the time step, making previous PTQ methods fail in DMs since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DMs in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.
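Since the entry above builds on generic post-training quantization, a minimal per-tensor affine quantizer calibrated from simple min/max statistics may help make the basic operation concrete. This is the textbook PTQ step only; the paper's DM-specific calibration dataset and metric are not reproduced, and the function names are hypothetical.

```python
import numpy as np

def quantize_tensor(w: np.ndarray, num_bits: int = 8):
    """Per-tensor affine post-training quantization (generic, not DM-specific)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard constant tensors
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_tensor(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s, z = quantize_tensor(w)
print("max abs error:", np.abs(dequantize_tensor(q, s, z) - w).max())
```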
### Compressing Cross-Lingual Multi-Task Models at Qualtrics
- **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak
- **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15927
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15927
- **Abstract** Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover the issues they care about, and find which actions need to be taken on data that are different in content and distribution from traditional NLP domains. In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross-lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to a level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBert architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or a 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management.
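The abstract mentions model distillation without spelling it out; the usual starting point is a soft-target objective like the hedged sketch below. The temperature and loss weighting are illustrative defaults, not the settings used at Qualtrics.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    """Generic knowledge distillation: KL between temperature-softened
    teacher and student distributions, mixed with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-loss magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```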
### Maximal Atomic irRedundant Sets: a Usage-based Dataflow Partitioning Algorithm
- **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye
- **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15933
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15933
- **Abstract** Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication. Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation.

### Trustless unknown-order groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract** Groups of unknown order are of major interest due to their applications, including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.

### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention
- **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16368
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16368
- **Abstract** Many studies have been conducted to improve the efficiency of the Transformer from quadratic to linear. Among them, the low-rank-based methods aim to learn the projection matrices to compress the sequence length. However, the projection matrices are fixed once they have been learned, which compress the sequence length with dedicated coefficients for tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of the hidden state dimension. To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines while maintaining less memory consumption with higher speed.
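DBA's input-sensitive projections are not detailed in the abstract. The sketch below instead shows the simpler fixed low-rank attention pattern that such methods build on (a Linformer-style projection of the length axis), which makes the k-dimensional bottleneck and the O(n*k) cost visible; treat it as background, not as DBA itself.

```python
import torch
import torch.nn as nn

class LowRankAttention(nn.Module):
    """Attention with the sequence length compressed from n to k.
    Here the projection E is a fixed learned matrix; DBA replaces it
    with dynamic, input-dependent projections."""

    def __init__(self, d_model: int, seq_len: int, k: int = 64):
        super().__init__()
        self.to_q = nn.Linear(d_model, d_model)
        self.to_k = nn.Linear(d_model, d_model)
        self.to_v = nn.Linear(d_model, d_model)
        self.E = nn.Linear(seq_len, k, bias=False)  # compress the length axis
        self.scale = d_model ** -0.5

    def forward(self, x):  # x: (batch, n, d)
        q = self.to_q(x)                                          # (b, n, d)
        k = self.E(self.to_k(x).transpose(1, 2)).transpose(1, 2)  # (b, k, d)
        v = self.E(self.to_v(x).transpose(1, 2)).transpose(1, 2)  # (b, k, d)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v                                           # (b, n, d)
```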
### Compressing Volumetric Radiance Fields to 1 MB
- **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16386
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16386
- **Abstract** Approximating radiance fields with volumetric grids is one of the promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene. We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100$\times$ by reducing the overall model size to 1 MB with negligible loss on visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and good generalization across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance field methods in real-world applications. Code available at \url{https://github.com/AlgoHunt/VQRF}
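Much of the compression in VQRF-style methods comes from replacing per-voxel feature vectors with indices into a small learned codebook. A minimal nearest-neighbour quantizer makes the storage saving concrete; the grid size, feature width, and codebook size below are hypothetical.

```python
import numpy as np

def vector_quantize(features: np.ndarray, codebook: np.ndarray):
    """Map each feature vector to its nearest codebook entry; storing the
    index instead of the vector is what yields the compression."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]  # indices to store, lookup reconstruction

rng = np.random.default_rng(0)
voxels = rng.normal(size=(10_000, 8)).astype(np.float32)  # toy grid features
codebook = rng.normal(size=(256, 8)).astype(np.float32)   # 256 entries -> 1 byte/voxel
idx, recon = vector_quantize(voxels, codebook)
print("bytes stored:", idx.astype(np.uint8).nbytes, "vs raw:", voxels.nbytes)
```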
## Keyword: RAW

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We show empirically that our approach is more effective than a state-of-the-art approach to learning visual planning models in environments with incomplete observations.

### Deep Semi-supervised Learning with Double-Contrast of Features and Semantics
- **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15671
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15671
- **Abstract** In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, which is mainly due to the large amount of available annotation data. However, obtaining these annotated data incurs expensive costs in reality. Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and the two-stage learning methods of decoupling feature extraction and classification have been proven effective. Nevertheless, representation learning limited only to semantic consistency regularization may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of the two-stage learning methods, the extracted features may not match the specific downstream tasks. In order to deal with the above drawbacks, this paper proposes an end-to-end deep semi-supervised learning method with a double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics/features of positive and negative augmented sample pairs. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features and relax mutual information to contrastive loss in a simpler way. Finally, the effectiveness of our method is verified on benchmark datasets.

### Superpoint Transformer for 3D Scene Instance Segmentation
- **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15766
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15766
- **Abstract** Most existing methods realize 3D instance segmentation by extending those models used for 3D object detection or 3D semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: 1) Imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework. 2) Existing methods require a time-consuming intermediate step of aggregation. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that can capture the instance information through the superpoint cross-attention mechanism and generate the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can implement the network training without the intermediate aggregation step, which accelerates the network. Extensive experiments on ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds competing state-of-the-art methods by 4.3% on the ScanNetv2 hidden test set in terms of mAP and keeps fast inference speed (247ms per frame) simultaneously. Code is available at https://github.com/sunjiahao1999/SPFormer.
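The bipartite matching that lets SPFormer train without an aggregation step can be illustrated with a generic Hungarian assignment between predicted and ground-truth masks. The cost matrix here is a stand-in (for example, 1 minus mask IoU plus a classification term); the paper's actual cost is not given in the abstract.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries_to_instances(cost: np.ndarray):
    """Optimal one-to-one assignment of predicted queries (rows) to
    ground-truth instances (columns) under a given cost matrix."""
    row_ind, col_ind = linear_sum_assignment(cost)
    return list(zip(row_ind.tolist(), col_ind.tolist()))

rng = np.random.default_rng(1)
cost = rng.random((4, 3))  # 4 predicted queries, 3 ground-truth instances
print(match_queries_to_instances(cost))  # each GT matched to one query
```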
### ClueWeb22: 10 Billion Web Documents with Rich Information
- **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
- **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15848
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15848
- **Abstract** ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry. Many of these signals have been widely used in industry but are available to the research community for the first time at this scale.

### Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss
- **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar
- **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16047
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16047
- **Abstract** We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts, and a low-level neural layer for extracting the symbols required to generate the symbolic explanation. Real data is often imperfect, meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive, and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature values from raw data, using 'abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. We show also that prior information about the features in the form of even imperfect pre-training can help correct this situation. These findings are replicated on the original problem considered by NEUROLOG, without the use of feature-delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by 'feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback.
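As a rough sketch of what a semantic loss of the kind discussed above computes: given per-symbol distributions from the neural layer and a set of abduced labelings that the symbolic theory accepts, the loss rewards probability mass placed on those labelings. This is a deliberately simplified form for illustration; NEUROLOG's exact formulation may differ.

```python
import torch

def semantic_loss(probs: torch.Tensor, labelings: torch.Tensor) -> torch.Tensor:
    """probs: (n_symbols, n_values) softmax outputs, one row per symbol slot.
    labelings: (m, n_symbols) integer assignments obtained by abduction.
    Returns -log of the total probability mass on theory-consistent labelings."""
    rows = torch.arange(probs.size(0)).unsqueeze(0).expand_as(labelings)
    per_symbol = probs[rows, labelings]   # (m, n_symbols)
    mass = per_symbol.prod(dim=1).sum()
    return -torch.log(mass + 1e-12)

# Example: 3 symbol slots with 2 values each, two abduced consistent labelings
probs = torch.softmax(torch.randn(3, 2), dim=-1)
labelings = torch.tensor([[0, 1, 1], [1, 0, 1]])
print(semantic_loss(probs, labelings))
```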
### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning
- **Authors:** Guoxi Zhang, Hisashi Kashima
- **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16078
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16078
- **Abstract** Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, by neglecting data heterogeneity, existing approaches to behavior estimation suffer from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as behavior policy the policy that best describes a particular trajectory. This model provides an agent with a fine-grained characterization of multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model and illustrates its practical usage by extending an existing offline RL algorithm. Lastly, through extensive evaluation, this work confirms the existence of behavior misspecification and the efficacy of the proposed model.
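The latent variable model is only named in the abstract; the core computation in any mixture-of-policies formulation is an E-step that assigns each trajectory a responsibility over candidate behavior policies. A toy version, with all shapes hypothetical:

```python
import numpy as np

def policy_responsibilities(traj_loglik: np.ndarray, mix: np.ndarray):
    """traj_loglik: (n_traj, K) log-likelihood of each trajectory under each
    of K candidate policies; mix: (K,) mixing weights.
    Returns (n_traj, K) posterior responsibilities, i.e. which policy best
    explains each trajectory."""
    log_post = traj_loglik + np.log(mix)
    log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)
```

An M-step would then refit each candidate policy on the trajectories it is responsible for, and an offline RL algorithm can use the highest-responsibility policy as the behavior policy for a given trajectory.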
### Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases
- **Authors:** O. Mryglod, S. Nazarovets, S. Kozmenko
- **Subjects:** Digital Libraries (cs.DL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16124
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16124
- **Abstract** This paper presents the results of further exploration of Crossref data related to Ukrainian Economics research (the first part can be found in [Mryglod, O., Nazarovets, S. & Kozmenko, S. (2021) Scientometrics, 126, 8187]). Our purpose is to supplement the quantitative portrait of the Ukrainian Economics discipline with the results of gender and author-ordering analysis at the level of individual authors; special methods are used for working with bibliographic data in which non-English author names predominate. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, as well as the arrangements of names are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of Economics is constructed using Crossref metadata. The described stages for working with such specific data make it possible to work at the level of individual authors and to analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian Economics. The tendencies towards collaborative or solo publications and the gender mixing patterns are found to be dependent on the journal: differences are found for publications indexed in Scopus and/or Web of Science. It has also been found that Ukrainian Economics research is characterized by a rather non-alphabetical order of authors. To our knowledge, this is the first large-scale quantitative study of the Ukrainian Economics discipline. The results obtained are valuable not only at the national level, but also contribute to general knowledge about Economics research, gender issues, and the ordering of authors' names. Here, for the first time, attention is drawn to the explicit use of the features of Slavic authors' names.

### Trustless unknown-order groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract** Groups of unknown order are of major interest due to their applications, including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.
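The representation-size claims in the entry above are easy to sanity-check with the abstract's own numbers:

```python
N_BITS = 3300  # minimum random group order suggested in the abstract

naive = 2 * N_BITS                # ~2*log2(N) bits per class-group element
bleichenbacher = 3 * N_BITS // 2  # (3/2)*log2(N) bits after compression
jacobian = N_BITS                 # log2(N) bits for hyperelliptic Jacobians

print(naive, bleichenbacher, jacobian)  # 6600 4950 3300
```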
### AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices
- **Authors:** Sicong Liu (Northwestern Polytechnical University, China), Xiaochen Li (Northwestern Polytechnical University, China), Zimu Zhou (City University of Hong Kong, China), Bin Guo (Northwestern Polytechnical University, China), Meng Zhang (Northwestern Polytechnical University, China), Haochen Shen (Northwestern Polytechnical University, China), Zhiwen Yu (Northwestern Polytechnical University, China)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16135
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16135
- **Abstract** The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive work on low-light video enhancement, existing solutions are unfit for deployment to mobile devices due to their complex models and their ignorance of system dynamics such as energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform-imposed dynamic energy budgets. We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions.

### Few-shot Query-Focused Summarization with Prefix-Merging
- **Authors:** Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li
- **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16164
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16164
- **Abstract** Query-focused summarization has been considered an important extension of text summarization. It aims to generate a concise highlight for a given query. Different from generic text summarization, query-focused summarization has long been plagued by the lack of high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small number of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation of how prefix-merging works.

### DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model
- **Authors:** Gwanghyun Kim, Se Young Chun
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16374
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16374
- **Abstract** Recent 3D generative models have achieved remarkable performance in synthesizing high-resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance at converting a 2D generative model trained on one domain into models for other domains with different styles by leveraging CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback is that the sample diversity of the original generative model is not well preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation is even more challenging for 3D generative models, not only because of catastrophic diversity loss but also because of inferior text-image correspondence and poor image quality.
Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain. Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline was able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high resolution, multi-view consistent images in text-guided targeted domains without additional data, outperforming the existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction to fully enjoy diversity in text.

### Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations
- **Authors:** Marissa D'Alonzo, Rebecca Russell
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16381
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16381
- **Abstract** Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy.

### Abstract Visual Reasoning with Tangram Shapes
- **Authors:** Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi
- **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16492
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16492
- **Abstract** We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs.
KiloGram is available at https://lil.nlp.cornell.edu/kilogram.

## Keyword: raw image

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We show empirically that our approach is more effective than a state-of-the-art approach to learning visual planning models in environments with incomplete observations.
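Complementing the transition-model sketch given after the first appearance of this abstract, the heuristic half of such a pipeline can be sketched as a network predicting a non-negative step count from a latent state to a latent goal; sizes and names are again illustrative only.

```python
import torch
import torch.nn as nn

class GoalDistance(nn.Module):
    """Toy heuristic model: predicted number of steps from state to goal."""

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # distances are non-negative
        )

    def forward(self, z_state, z_goal):
        return self.mlp(torch.cat([z_state, z_goal], dim=-1)).squeeze(-1)

# A greedy planner expands successor latents with the transition model and
# picks the action that minimizes this predicted distance to the goal.
```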
process
generator of the source domain to synthesize high resolution multi view consistent images in text guided targeted domains without additional data outperforming the existing text guided domain adaptation methods in diversity and text image correspondence furthermore we propose and demonstrate diverse image manipulations such as one shot instance selected adaptation and single view manipulated reconstruction to fully enjoy diversity in text symmetry detection in trajectory data for more meaningful reinforcement learning representations authors marissa d alonzo rebecca russell subjects machine learning cs lg artificial intelligence cs ai robotics cs ro arxiv link pdf link abstract knowledge of the symmetries of reinforcement learning rl systems can be used to create compressed and semantically meaningful representations of a low level state space we present a method of automatically detecting rl symmetries directly from raw trajectory data without requiring active control of the system our method generates candidate symmetries and trains a recurrent neural network rnn to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry the rnn discriminator s accuracy for each candidate reveals how symmetric the system is under that transformation this information can be used to create high level representations that are invariant to all symmetries on a dataset level and to communicate properties of the rl behavior to users we show in experiments on two simulated rl use cases a pusher robot and a uav flying in wind that our method can determine the symmetries underlying both the environment physics and the trained rl policy abstract visual reasoning with tangram shapes authors anya ji noriyuki kojima noah rush alane suhr wai keen vong robert d hawkins yoav artzi subjects computation and language cs cl artificial intelligence cs ai computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract we introduce kilogram a resource for studying abstract visual reasoning in humans and machines drawing on the history of tangram puzzles as stimuli in cognitive science we build a richly annotated dataset that with distinct stimuli is orders of magnitude larger and more diverse than prior resources it is both visually and linguistically richer moving beyond whole shape descriptions to include segmentation maps and part labels we use this resource to evaluate the abstract visual reasoning capacities of recent multi modal models we observe that pre trained weights demonstrate limited abstract reasoning which dramatically improves with fine tuning we also observe that explicitly describing parts aids abstract reasoning for both humans and models especially when jointly encoding the linguistic and visual inputs kilogram is available at keyword raw image learning visual planning models from partially observed images authors kebing jin zhanhao xiao hankui hankz zhuo hai wan jiaran cai subjects machine learning cs lg artificial intelligence cs ai computer vision and pattern recognition cs cv arxiv link pdf link abstract there has been increasing attention on planning model learning in classical planning most existing approaches however focus on learning planning models from structured data in symbolic representations it is often difficult to obtain such structured data in real world scenarios although a number of approaches have been developed for learning planning models from fully observed unstructured data e g images in many 
scenarios raw observations are often incomplete in this paper we provide a novel framework atype recplan for learning a transition model from partially observed raw image traces more specifically by considering the preceding and subsequent images in a trace we learn the latent state representations of raw observations and then build a transition model based on such representations additionally we propose a neural network based approach to learn a heuristic model that estimates the distance toward a given goal observation based on the learned transition model and heuristic model we implement a classical planner for images we exhibit empirically that our approach is more effective than a state of the art approach of learning visual planning models in the environment with incomplete observations
1
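An illustrative aside on the symmetry-detection abstract preserved in the preprocessed text above: the paper trains an RNN to discriminate original from transformed trajectories, with chance-level accuracy signalling a symmetry. The sketch below is a minimal stand-in under stated assumptions — synthetic 2-D step data instead of real trajectories, and a logistic-regression discriminator (scikit-learn assumed available) in place of the paper's RNN — so it demonstrates the accuracy-as-symmetry-score idea, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 2-D trajectory steps: drift along +x breaks the x -> -x
# symmetry, while the y distribution stays symmetric under y -> -y.
steps = rng.normal(size=(2000, 2))
steps[:, 0] += 0.5

def discriminator_accuracy(original, transformed):
    # Label originals 0 and transformed copies 1; accuracy near 0.5 means
    # the discriminator cannot tell them apart, i.e. the candidate
    # transformation behaves like a symmetry of the data.
    X = np.vstack([original, transformed])
    y = np.array([0] * len(original) + [1] * len(transformed))
    idx = rng.permutation(len(X))
    X, y = X[idx], y[idx]
    split = len(X) // 2
    clf = LogisticRegression().fit(X[:split], y[:split])
    return clf.score(X[split:], y[split:])

print("flip y:", discriminator_accuracy(steps, steps * np.array([1, -1])))  # ~0.5
print("flip x:", discriminator_accuracy(steps, steps * np.array([-1, 1])))  # clearly > 0.5
```

Flipping y leaves the synthetic distribution unchanged, so the score hovers near 0.5; flipping x fights the injected drift and is easily detected.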
462,275
13,243,991,216
IssuesEvent
2020-08-19 12:23:00
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
pantasandals.com - The hamburger menu is not working
browser-fenix engine-gecko priority-normal severity-important
<!-- @browser: Firefox Mobile 69.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://pantasandals.com/ **Browser / Version**: Firefox Mobile 69.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: Can't open Hamburger menu **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
pantasandals.com - The hamburger menu is not working - <!-- @browser: Firefox Mobile 69.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://pantasandals.com/ **Browser / Version**: Firefox Mobile 69.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: Can't open Hamburger menu **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
pantasandals com the hamburger menu is not working url browser version firefox mobile operating system android tested another browser no problem type site is not usable description can t open hamburger menu steps to reproduce browser configuration none from with ❤️
0
38,430
6,670,889,295
IssuesEvent
2017-10-04 03:03:59
nltk/nltk
https://api.github.com/repos/nltk/nltk
closed
Dependencies on PyPI need to be updated.
admin documentation pleaseverify
Most users install `nltk` through: ``` $ pip install -U nltk ``` or ``` $ conda install nltk ``` but somehow the PyPI requirements seem to be lagging far behind the ones in `pip-req.txt`.
1.0
Dependencies on PyPI need to be updated. - Most users install `nltk` through: ``` $ pip install -U nltk ``` or ``` $ conda install nltk ``` but somehow the PyPI requirements seem to be lagging far behind the ones in `pip-req.txt`.
non_process
dependencies on pypi need to be updated most users install nltk through pip install u nltk or conda install nltk but somehow the pypi requirements seem to be lagging far behind the ones in pip req txt
0
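As a hedged aside to the record above: one quick way to see the mismatch the reporter describes is to compare the dependency metadata that ships with the PyPI wheel against the repo's `pip-req.txt`. The sketch assumes `nltk` is installed and a `pip-req.txt` sits in the working directory; the requirement parsing is deliberately naive.

```python
import re
from importlib.metadata import requires

def dist_name(req: str) -> str:
    # Strip version specifiers and environment markers from a requirement
    # string, e.g. "regex >= 2021.8.3 ; python_version >= '3.7'" -> "regex".
    return re.split(r"[<>=!~;\s\[]", req, maxsplit=1)[0].lower()

# What `pip install nltk` actually resolves against: the wheel's metadata.
declared = {dist_name(r) for r in (requires("nltk") or [])}
print("declared on PyPI:", sorted(declared))

with open("pip-req.txt") as fh:  # the repo file named in the issue
    repo = {dist_name(ln.strip()) for ln in fh if ln.strip() and not ln.startswith("#")}

print("in pip-req.txt but missing from the wheel metadata:", sorted(repo - declared))
```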
245,892
26,569,482,514
IssuesEvent
2023-01-21 01:08:45
nidhi7598/linux-3.0.35_CVE-2022-45934
https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934
opened
CVE-2018-25020 (High) detected in linux-stable-rtv3.8.6
security vulnerability
## CVE-2018-25020 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/filter.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/filter.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The BPF subsystem in the Linux kernel before 4.17 mishandles situations with a long jump over an instruction sequence where inner instructions require substantial expansions into multiple BPF instructions, leading to an overflow. This affects kernel/bpf/core.c and net/core/filter.c. <p>Publish Date: 2021-12-08 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-25020>CVE-2018-25020</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25020">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25020</a></p> <p>Release Date: 2021-12-08</p> <p>Fix Resolution: v4.17</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-25020 (High) detected in linux-stable-rtv3.8.6 - ## CVE-2018-25020 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/filter.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/filter.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The BPF subsystem in the Linux kernel before 4.17 mishandles situations with a long jump over an instruction sequence where inner instructions require substantial expansions into multiple BPF instructions, leading to an overflow. This affects kernel/bpf/core.c and net/core/filter.c. <p>Publish Date: 2021-12-08 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-25020>CVE-2018-25020</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25020">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25020</a></p> <p>Release Date: 2021-12-08</p> <p>Fix Resolution: v4.17</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files net core filter c net core filter c vulnerability details the bpf subsystem in the linux kernel before mishandles situations with a long jump over an instruction sequence where inner instructions require substantial expansions into multiple bpf instructions leading to an overflow this affects kernel bpf core c and net core filter c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
18,044
24,054,137,026
IssuesEvent
2022-09-16 15:14:27
opensearch-project/data-prepper
https://api.github.com/repos/opensearch-project/data-prepper
closed
Provide a trace_peer_forwarder plugin
plugin - processor
**Is your feature request related to a problem? Please describe.** The Core Peer Forwarding (#700) feature will replace the existing `peer-forwarder`. However, an ideal trace pipeline should forward events prior to the `raw_trace` and `service_map_stateful` processors. **Describe the solution you'd like** Create a `trace_peer_forwarder` plugin. Thus, we can create a pipeline like the following. ``` entry-pipeline: delay: "100" source: otel_trace_source: processor: - trace_peer_forwarder: sink: - pipeline: name: "raw-pipeline" - pipeline: name: "service-map-pipeline" raw-pipeline: source: pipeline: name: "entry-pipeline" processor: - otel_trace_raw: sink: - opensearch: service-map-pipeline: delay: "100" source: pipeline: name: "entry-pipeline" processor: - service_map_stateful: sink: - opensearch: ``` It can forward based on the `traceId`. **Describe alternatives you've considered (Optional)** Rely exclusively on `service_map_stateful` and `otel_trace_raw`. But, then the events will need to be forwarded twice.
1.0
Provide a trace_peer_forwarder plugin - **Is your feature request related to a problem? Please describe.** The Core Peer Forwarding (#700) feature will replace the existing `peer-forwarder`. However, an ideal trace pipeline should forward events prior to the `raw_trace` and `service_map_stateful` processors. **Describe the solution you'd like** Create a `trace_peer_forwarder` plugin. Thus, we can create a pipeline like the following. ``` entry-pipeline: delay: "100" source: otel_trace_source: processor: - trace_peer_forwarder: sink: - pipeline: name: "raw-pipeline" - pipeline: name: "service-map-pipeline" raw-pipeline: source: pipeline: name: "entry-pipeline" processor: - otel_trace_raw: sink: - opensearch: service-map-pipeline: delay: "100" source: pipeline: name: "entry-pipeline" processor: - service_map_stateful: sink: - opensearch: ``` It can forward based on the `traceId`. **Describe alternatives you've considered (Optional)** Rely exclusively on `service_map_stateful` and `otel_trace_raw`. But, then the events will need to be forwarded twice.
process
provide a trace peer forwarder plugin is your feature request related to a problem please describe the core peer forwarding feature will replace the existing peer forwarder however an ideal trace pipeline should forward events prior to the raw trace and service map stateful processors describe the solution you d like create a trace peer forwarder plugin thus we can create a pipeline like the following entry pipeline delay source otel trace source processor trace peer forwarder sink pipeline name raw pipeline pipeline name service map pipeline raw pipeline source pipeline name entry pipeline processor otel trace raw sink opensearch service map pipeline delay source pipeline name entry pipeline processor service map stateful sink opensearch it can forward based on the traceid describe alternatives you ve considered optional rely exclusively on service map stateful and otel trace raw but then the events will need to be forwarded twice
1
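A brief aside on the record above: the proposal hinges on forwarding each span by its `traceId` so that a single node owns a whole trace. The sketch below shows one plausible selection rule — a stable hash modulo the peer list — with hypothetical peer endpoints; Data Prepper itself is Java, and this is not its actual implementation.

```python
import hashlib

PEERS = ["dp-node-0:4994", "dp-node-1:4994", "dp-node-2:4994"]  # hypothetical endpoints

def peer_for(trace_id: str, peers=PEERS) -> str:
    # A stable hash of the trace ID sends every span of one trace to the
    # same node, so otel_trace_raw/service_map_stateful can aggregate
    # locally without a second forwarding hop.
    digest = hashlib.sha256(trace_id.encode()).digest()
    return peers[int.from_bytes(digest[:8], "big") % len(peers)]

spans = [{"traceId": "4bf92f35", "name": "GET /"}, {"traceId": "4bf92f35", "name": "SELECT"}]
assert peer_for(spans[0]["traceId"]) == peer_for(spans[1]["traceId"])
print(peer_for("4bf92f35"))
```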
14,725
17,936,433,341
IssuesEvent
2021-09-10 15:53:48
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
opened
Process functions allow non-Data arguments to be passed as input
type/bug priority/important topic/engine topic/processes
MWE: ```python In [1]: from aiida import engine, orm In [2]: @engine.calcfunction ...: def test_kwargs(**kwargs): ...: for value in kwargs.values(): ...: assert isinstance(value, orm.Data) ...: ...: In [3]: test_kwargs(**{'a': orm.Int(1)}) Out[3]: {} In [4]: test_kwargs(**{'a': orm.Int(1), 'b': 1}) Report: [487|test_kwargs|on_except]: Traceback (most recent call last): File "/home/sph/.virtualenvs/aiida_dev/lib/python3.9/site-packages/plumpy/process_states.py", line 231, in execute result = self.run_fn(*self.args, **self.kwargs) File "/home/sph/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.py", line 395, in run result = self._func(*args, **kwargs) File "<ipython-input-2-5fb70fcb7652>", line 4, in test_kwargs assert isinstance(value, orm.Data) AssertionError ``` As you can see, the second invocation of the `test_kwargs` calcfunction passes a normal `int` as an argument, but the input validation does not catch it and simply executes the function where the kwargs contains the normal `int` value. In contrast, if we don't use `**kwargs` but explicitly define the arguments, the validation _does_ work: ```python In [5]: @engine.calcfunction ...: def test_explicit(a, b): ...: assert isinstance(a, orm.Data) ...: assert isinstance(b, orm.Data) In [6]: test_explicit(a=orm.Int(1), b=5) ValueError: Error occurred validating port 'inputs.b': value 'b' is not of the right type. Got '<class 'int'>', expected '(<class 'aiida.orm.nodes.data.data.Data'>,)' ```
1.0
Process functions allow non-Data arguments to be passed as input - MWE: ```python In [1]: from aiida import engine, orm In [2]: @engine.calcfunction ...: def test_kwargs(**kwargs): ...: for value in kwargs.values(): ...: assert isinstance(value, orm.Data) ...: ...: In [3]: test_kwargs(**{'a': orm.Int(1)}) Out[3]: {} In [4]: test_kwargs(**{'a': orm.Int(1), 'b': 1}) Report: [487|test_kwargs|on_except]: Traceback (most recent call last): File "/home/sph/.virtualenvs/aiida_dev/lib/python3.9/site-packages/plumpy/process_states.py", line 231, in execute result = self.run_fn(*self.args, **self.kwargs) File "/home/sph/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.py", line 395, in run result = self._func(*args, **kwargs) File "<ipython-input-2-5fb70fcb7652>", line 4, in test_kwargs assert isinstance(value, orm.Data) AssertionError ``` As you can see, the second invocation of the `test_kwargs` calcfunction passes a normal `int` as an argument, but the input validation does not catch it and simply executes the function where the kwargs contains the normal `int` value. In contrast, if we don't use `**kwargs` but explicitly define the arguments, the validation _does_ work: ```python In [5]: @engine.calcfunction ...: def test_explicit(a, b): ...: assert isinstance(a, orm.Data) ...: assert isinstance(b, orm.Data) In [6]: test_explicit(a=orm.Int(1), b=5) ValueError: Error occurred validating port 'inputs.b': value 'b' is not of the right type. Got '<class 'int'>', expected '(<class 'aiida.orm.nodes.data.data.Data'>,)' ```
process
process functions allow non data arguments to be passed as input mwe python in from aiida import engine orm in engine calcfunction def test kwargs kwargs for value in kwargs values assert isinstance value orm data in test kwargs a orm int out in test kwargs a orm int b report traceback most recent call last file home sph virtualenvs aiida dev lib site packages plumpy process states py line in execute result self run fn self args self kwargs file home sph code aiida env dev aiida core aiida engine processes functions py line in run result self func args kwargs file line in test kwargs assert isinstance value orm data assertionerror as you can see the second invocation of the test kwargs calcfunction passes a normal int as an argument but the input validation does not catch it and simply executes the function where the kwargs contains the normal int value in contrast if we don t use kwargs but explicitly define the arguments the validation does work python in engine calcfunction def test explicit a b assert isinstance a orm data assert isinstance b orm data in test explicit a orm int b valueerror error occurred validating port inputs b value b is not of the right type got expected
1
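An illustrative aside on the bug above: arguments swallowed by `**kwargs` bypass the declared input ports, so nothing type-checks them. Below is a minimal sketch of the kind of dynamic check that would close the gap — the `Data` class and `validate_data_inputs` decorator are hypothetical stand-ins, not AiiDA's real fix.

```python
import functools
import inspect

class Data:  # stand-in for aiida.orm.Data, just for the sketch
    pass

def validate_data_inputs(func):
    # Validate every bound argument, including those swallowed by **kwargs,
    # before the wrapped function body runs.
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if isinstance(value, dict):      # **kwargs bucket
                values = value.values()
            elif isinstance(value, tuple):   # *args bucket
                values = value
            else:
                values = [value]
            for v in values:
                if not isinstance(v, Data):
                    raise ValueError(f"input {name!r} is not a Data node: {v!r}")
        return func(*args, **kwargs)

    return wrapper

@validate_data_inputs
def test_kwargs(**kwargs):
    return kwargs

test_kwargs(a=Data())  # accepted
try:
    test_kwargs(a=Data(), b=1)  # rejected, unlike the buggy behaviour above
except ValueError as exc:
    print(exc)
```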
21,777
30,289,583,270
IssuesEvent
2023-07-09 05:20:07
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
can't get k8sattributesprocessor to add k8s metadata to metrics
bug Stale processor/k8sattributes closed as inactive
### Component(s) processor/k8sattributes ### What happened? ## Description Apologise in advance if this is not a bug, but some misconfiguration... I'm trying to configure otel collector to automatically add k8s metadata (namespace, etc) to exported metrics. Problem happens on any cluster I tested (local kind cluster, production like one, etc). Here I'll describe the way to reproduce using local kind cluster. ## Steps to Reproduce 1. create a ```values.yaml``` to configure ```opentelemetry-collector-0.43.4``` helm chart. Configuration includes: - collector in daemonset mode - enable kuberentesAttributes processor - enabling metrics pipeline with prometheus exporter ```yaml mode: daemonset presets: kubernetesAttributes: enabled: true config: exporters: prometheus: endpoint: "0.0.0.0:8889" namespace: datacenter service: pipelines: logs: null metrics: exporters: - prometheus traces: null ports: jaeger-compact: enabled: false jaeger-thrift: enabled: false jaeger-grpc: enabled: false zipkin: enabled: false prmts-exporter: enabled: true containerPort: 8889 servicePort: 8889 protocol: TCP ``` 2. install opentelemetry-collector helm in local cluster using above configuration ```bash kubectl install otel opentelemetry-collector-0.43.4.tgz -f values.yaml ``` 3. install some application that exports metrics to otel-collector. for this I use a java application, auto instrumented with opentelementry-javaagent 4. hit opentelemetry-collector pod scraping endpoint and observe the exported metrics ```bash kubectl port-forward otel-opentelemetry-collector-agent-tp95m 8889:8889 curl 127.0.0.1:8889/metrics ``` ## Actual Result I get a list with all metrics as expected but none of them includes the k8s attributes like k8s.namespace.name, etc. for example, I get a bunch of jvm related metrics but none of them is tagged with k8s attributes: ``` # HELP datacenter_jvm_buffer_count An estimate of the number of buffers in the pool # TYPE datacenter_jvm_buffer_count gauge datacenter_jvm_buffer_count{id="direct",job="dc-kvstore"} 17 datacenter_jvm_buffer_count{id="mapped",job="dc-kvstore"} 0 datacenter_jvm_buffer_count{id="mapped - 'non-volatile memory'",job="dc-kvstore"} 0 # HELP datacenter_jvm_buffer_memory_used An estimate of the memory that the Java virtual machine is using for this buffer pool # TYPE datacenter_jvm_buffer_memory_used gauge datacenter_jvm_buffer_memory_used{id="direct",job="dc-kvstore"} 117228 datacenter_jvm_buffer_memory_used{id="mapped",job="dc-kvstore"} 0 datacenter_jvm_buffer_memory_used{id="mapped - 'non-volatile memory'",job="dc-kvstore"} 0 # HELP datacenter_jvm_buffer_total_capacity An estimate of the total capacity of the buffers in this pool # TYPE datacenter_jvm_buffer_total_capacity gauge datacenter_jvm_buffer_total_capacity{id="direct",job="dc-kvstore"} 117228 datacenter_jvm_buffer_total_capacity{id="mapped",job="dc-kvstore"} 0 datacenter_jvm_buffer_total_capacity{id="mapped - 'non-volatile memory'",job="dc-kvstore"} 0 # HELP datacenter_jvm_classes_loaded The number of classes that are currently loaded in the Java virtual machine # TYPE datacenter_jvm_classes_loaded gauge datacenter_jvm_classes_loaded{job="dc-kvstore"} 14892 ``` In addition, otel-collector seems to be configured correctly (see below actual configmap), I can also see in its logs a line telling the k8sattributesprocess is running. ``` service/pipelines.go:96 Processor started. 
{"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"} ``` ## Expected Result I would expect that all metrics will be tagged with k8s namespace. i.e. ``` # HELP datacenter_jvm_buffer_count An estimate of the number of buffers in the pool # TYPE datacenter_jvm_buffer_count gauge datacenter_jvm_buffer_count{id="direct",job="dc-kvstore", k8s.namespace.name="default"} 17 Thanks. ``` ### Collector version opentelemetry-collector-contrib:0.67.0 ### Environment information ## Environment OS: "Ubuntu 20.04" k8s cluster: kind 0.17.0, k8s 1.23 ### OpenTelemetry Collector configuration ```yaml exporters: logging: {} prometheus: endpoint: 0.0.0.0:8889 namespace: datacenter extensions: health_check: {} memory_ballast: size_in_percentage: 40 processors: batch: {} k8sattributes: extract: metadata: - k8s.namespace.name - k8s.deployment.name - k8s.statefulset.name - k8s.daemonset.name - k8s.cronjob.name - k8s.job.name passthrough: false pod_association: - sources: - from: resource_attribute name: k8s.pod.ip - sources: - from: resource_attribute name: k8s.pod.uid - sources: - from: connection memory_limiter: check_interval: 5s limit_percentage: 80 spike_limit_percentage: 25 receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 thrift_compact: endpoint: 0.0.0.0:6831 thrift_http: endpoint: 0.0.0.0:14268 otlp: protocols: grpc: endpoint: 0.0.0.0:4317 http: endpoint: 0.0.0.0:4318 prometheus: config: scrape_configs: - job_name: opentelemetry-collector scrape_interval: 10s static_configs: - targets: - ${MY_POD_IP}:8888 zipkin: endpoint: 0.0.0.0:9411 service: extensions: - health_check - memory_ballast pipelines: metrics: exporters: - prometheus processors: - k8sattributes - memory_limiter - batch receivers: - otlp - prometheus telemetry: metrics: address: 0.0.0.0:8888 ``` ### Log output ```shell 2023-01-17T19:47:29.875Z info service/telemetry.go:111 Setting up own telemetry... 2023-01-17T19:47:29.875Z info service/telemetry.go:141 Serving Prometheus metrics {"address": "0.0.0.0:8888", "level": "Basic"} 2023-01-17T19:47:29.876Z info memorylimiterprocessor@v0.67.0/memorylimiter.go:148 Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "total_memory_mib": 512, "limit_percentage": 80, "spike_limit_percentage": 25} 2023-01-17T19:47:29.876Z info memorylimiterprocessor@v0.67.0/memorylimiter.go:112 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "limit_mib": 409, "spike_limit_mib": 128, "check_interval": 5} 2023-01-17T19:47:29.877Z info kube/client.go:101 k8s filtering {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "labelSelector": "", "fieldSelector": ""} 2023-01-17T19:47:29.878Z info service/service.go:88 Starting otelcol-contrib... {"Version": "0.67.0", "NumCPU": 8} 2023-01-17T19:47:29.878Z info extensions/extensions.go:42 Starting extensions... 2023-01-17T19:47:29.878Z info extensions/extensions.go:45 Extension is starting... 
{"kind": "extension", "name": "health_check"} 2023-01-17T19:47:29.878Z info healthcheckextension@v0.67.0/healthcheckextension.go:45 Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Config":null,"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"Path":"/","CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}} 2023-01-17T19:47:29.878Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "extension", "name": "health_check", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.878Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "health_check"} 2023-01-17T19:47:29.878Z info extensions/extensions.go:45 Extension is starting... {"kind": "extension", "name": "memory_ballast"} 2023-01-17T19:47:29.967Z info ballastextension@v0.67.0/memory_ballast.go:52 Setting memory ballast {"kind": "extension", "name": "memory_ballast", "MiBs": 204} 2023-01-17T19:47:29.967Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "memory_ballast"} 2023-01-17T19:47:29.967Z info service/pipelines.go:76 Starting exporters... 2023-01-17T19:47:29.967Z info service/pipelines.go:80 Exporter is starting... {"kind": "exporter", "data_type": "metrics", "name": "prometheus"} 2023-01-17T19:47:29.968Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.968Z info service/pipelines.go:84 Exporter started. {"kind": "exporter", "data_type": "metrics", "name": "prometheus"} 2023-01-17T19:47:29.968Z info service/pipelines.go:88 Starting processors... 2023-01-17T19:47:29.968Z info service/pipelines.go:92 Processor is starting... {"kind": "processor", "name": "batch", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:96 Processor started. {"kind": "processor", "name": "batch", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:92 Processor is starting... {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:96 Processor started. {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:92 Processor is starting... {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:96 Processor started. {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:100 Starting receivers... 2023-01-17T19:47:29.968Z info service/pipelines.go:104 Receiver is starting... 
{"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info prometheusreceiver@v0.67.0/metrics_receiver.go:255 Starting discovery manager {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info prometheusreceiver@v0.67.0/metrics_receiver.go:243 Scrape job added {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "jobName": "opentelemetry-collector"} 2023-01-17T19:47:29.969Z info service/pipelines.go:108 Receiver started. {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info service/pipelines.go:104 Receiver is starting... {"kind": "receiver", "name": "otlp", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.969Z info prometheusreceiver@v0.67.0/metrics_receiver.go:288 Starting scrape manager {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info otlpreceiver@v0.67.0/otlp.go:72 Starting GRPC server {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "endpoint": "0.0.0.0:4317"} 2023-01-17T19:47:29.970Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.970Z info otlpreceiver@v0.67.0/otlp.go:90 Starting HTTP server {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "endpoint": "0.0.0.0:4318"} 2023-01-17T19:47:29.970Z info service/pipelines.go:108 Receiver started. {"kind": "receiver", "name": "otlp", "pipeline": "metrics"} 2023-01-17T19:47:29.970Z info healthcheck/handler.go:129 Health Check state change {"kind": "extension", "name": "health_check", "status": "ready"} 2023-01-17T19:47:29.970Z info service/service.go:105 Everything is ready. Begin running and processing data. ``` ### Additional context _No response_
1.0
can't get k8sattributesprocessor to add k8s metadata to metrics - ### Component(s) processor/k8sattributes ### What happened? ## Description Apologise in advance if this is not a bug, but some misconfiguration... I'm trying to configure otel collector to automatically add k8s metadata (namespace, etc) to exported metrics. Problem happens on any cluster I tested (local kind cluster, production like one, etc). Here I'll describe the way to reproduce using local kind cluster. ## Steps to Reproduce 1. create a ```values.yaml``` to configure ```opentelemetry-collector-0.43.4``` helm chart. Configuration includes: - collector in daemonset mode - enable kuberentesAttributes processor - enabling metrics pipeline with prometheus exporter ```yaml mode: daemonset presets: kubernetesAttributes: enabled: true config: exporters: prometheus: endpoint: "0.0.0.0:8889" namespace: datacenter service: pipelines: logs: null metrics: exporters: - prometheus traces: null ports: jaeger-compact: enabled: false jaeger-thrift: enabled: false jaeger-grpc: enabled: false zipkin: enabled: false prmts-exporter: enabled: true containerPort: 8889 servicePort: 8889 protocol: TCP ``` 2. install opentelemetry-collector helm in local cluster using above configuration ```bash kubectl install otel opentelemetry-collector-0.43.4.tgz -f values.yaml ``` 3. install some application that exports metrics to otel-collector. for this I use a java application, auto instrumented with opentelementry-javaagent 4. hit opentelemetry-collector pod scraping endpoint and observe the exported metrics ```bash kubectl port-forward otel-opentelemetry-collector-agent-tp95m 8889:8889 curl 127.0.0.1:8889/metrics ``` ## Actual Result I get a list with all metrics as expected but none of them includes the k8s attributes like k8s.namespace.name, etc. for example, I get a bunch of jvm related metrics but none of them is tagged with k8s attributes: ``` # HELP datacenter_jvm_buffer_count An estimate of the number of buffers in the pool # TYPE datacenter_jvm_buffer_count gauge datacenter_jvm_buffer_count{id="direct",job="dc-kvstore"} 17 datacenter_jvm_buffer_count{id="mapped",job="dc-kvstore"} 0 datacenter_jvm_buffer_count{id="mapped - 'non-volatile memory'",job="dc-kvstore"} 0 # HELP datacenter_jvm_buffer_memory_used An estimate of the memory that the Java virtual machine is using for this buffer pool # TYPE datacenter_jvm_buffer_memory_used gauge datacenter_jvm_buffer_memory_used{id="direct",job="dc-kvstore"} 117228 datacenter_jvm_buffer_memory_used{id="mapped",job="dc-kvstore"} 0 datacenter_jvm_buffer_memory_used{id="mapped - 'non-volatile memory'",job="dc-kvstore"} 0 # HELP datacenter_jvm_buffer_total_capacity An estimate of the total capacity of the buffers in this pool # TYPE datacenter_jvm_buffer_total_capacity gauge datacenter_jvm_buffer_total_capacity{id="direct",job="dc-kvstore"} 117228 datacenter_jvm_buffer_total_capacity{id="mapped",job="dc-kvstore"} 0 datacenter_jvm_buffer_total_capacity{id="mapped - 'non-volatile memory'",job="dc-kvstore"} 0 # HELP datacenter_jvm_classes_loaded The number of classes that are currently loaded in the Java virtual machine # TYPE datacenter_jvm_classes_loaded gauge datacenter_jvm_classes_loaded{job="dc-kvstore"} 14892 ``` In addition, otel-collector seems to be configured correctly (see below actual configmap), I can also see in its logs a line telling the k8sattributesprocess is running. ``` service/pipelines.go:96 Processor started. 
{"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"} ``` ## Expected Result I would expect that all metrics will be tagged with k8s namespace. i.e. ``` # HELP datacenter_jvm_buffer_count An estimate of the number of buffers in the pool # TYPE datacenter_jvm_buffer_count gauge datacenter_jvm_buffer_count{id="direct",job="dc-kvstore", k8s.namespace.name="default"} 17 Thanks. ``` ### Collector version opentelemetry-collector-contrib:0.67.0 ### Environment information ## Environment OS: "Ubuntu 20.04" k8s cluster: kind 0.17.0, k8s 1.23 ### OpenTelemetry Collector configuration ```yaml exporters: logging: {} prometheus: endpoint: 0.0.0.0:8889 namespace: datacenter extensions: health_check: {} memory_ballast: size_in_percentage: 40 processors: batch: {} k8sattributes: extract: metadata: - k8s.namespace.name - k8s.deployment.name - k8s.statefulset.name - k8s.daemonset.name - k8s.cronjob.name - k8s.job.name passthrough: false pod_association: - sources: - from: resource_attribute name: k8s.pod.ip - sources: - from: resource_attribute name: k8s.pod.uid - sources: - from: connection memory_limiter: check_interval: 5s limit_percentage: 80 spike_limit_percentage: 25 receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 thrift_compact: endpoint: 0.0.0.0:6831 thrift_http: endpoint: 0.0.0.0:14268 otlp: protocols: grpc: endpoint: 0.0.0.0:4317 http: endpoint: 0.0.0.0:4318 prometheus: config: scrape_configs: - job_name: opentelemetry-collector scrape_interval: 10s static_configs: - targets: - ${MY_POD_IP}:8888 zipkin: endpoint: 0.0.0.0:9411 service: extensions: - health_check - memory_ballast pipelines: metrics: exporters: - prometheus processors: - k8sattributes - memory_limiter - batch receivers: - otlp - prometheus telemetry: metrics: address: 0.0.0.0:8888 ``` ### Log output ```shell 2023-01-17T19:47:29.875Z info service/telemetry.go:111 Setting up own telemetry... 2023-01-17T19:47:29.875Z info service/telemetry.go:141 Serving Prometheus metrics {"address": "0.0.0.0:8888", "level": "Basic"} 2023-01-17T19:47:29.876Z info memorylimiterprocessor@v0.67.0/memorylimiter.go:148 Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "total_memory_mib": 512, "limit_percentage": 80, "spike_limit_percentage": 25} 2023-01-17T19:47:29.876Z info memorylimiterprocessor@v0.67.0/memorylimiter.go:112 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "limit_mib": 409, "spike_limit_mib": 128, "check_interval": 5} 2023-01-17T19:47:29.877Z info kube/client.go:101 k8s filtering {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "labelSelector": "", "fieldSelector": ""} 2023-01-17T19:47:29.878Z info service/service.go:88 Starting otelcol-contrib... {"Version": "0.67.0", "NumCPU": 8} 2023-01-17T19:47:29.878Z info extensions/extensions.go:42 Starting extensions... 2023-01-17T19:47:29.878Z info extensions/extensions.go:45 Extension is starting... 
{"kind": "extension", "name": "health_check"} 2023-01-17T19:47:29.878Z info healthcheckextension@v0.67.0/healthcheckextension.go:45 Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Config":null,"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"Path":"/","CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}} 2023-01-17T19:47:29.878Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "extension", "name": "health_check", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.878Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "health_check"} 2023-01-17T19:47:29.878Z info extensions/extensions.go:45 Extension is starting... {"kind": "extension", "name": "memory_ballast"} 2023-01-17T19:47:29.967Z info ballastextension@v0.67.0/memory_ballast.go:52 Setting memory ballast {"kind": "extension", "name": "memory_ballast", "MiBs": 204} 2023-01-17T19:47:29.967Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "memory_ballast"} 2023-01-17T19:47:29.967Z info service/pipelines.go:76 Starting exporters... 2023-01-17T19:47:29.967Z info service/pipelines.go:80 Exporter is starting... {"kind": "exporter", "data_type": "metrics", "name": "prometheus"} 2023-01-17T19:47:29.968Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.968Z info service/pipelines.go:84 Exporter started. {"kind": "exporter", "data_type": "metrics", "name": "prometheus"} 2023-01-17T19:47:29.968Z info service/pipelines.go:88 Starting processors... 2023-01-17T19:47:29.968Z info service/pipelines.go:92 Processor is starting... {"kind": "processor", "name": "batch", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:96 Processor started. {"kind": "processor", "name": "batch", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:92 Processor is starting... {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:96 Processor started. {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:92 Processor is starting... {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:96 Processor started. {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"} 2023-01-17T19:47:29.968Z info service/pipelines.go:100 Starting receivers... 2023-01-17T19:47:29.968Z info service/pipelines.go:104 Receiver is starting... 
{"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info prometheusreceiver@v0.67.0/metrics_receiver.go:255 Starting discovery manager {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info prometheusreceiver@v0.67.0/metrics_receiver.go:243 Scrape job added {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "jobName": "opentelemetry-collector"} 2023-01-17T19:47:29.969Z info service/pipelines.go:108 Receiver started. {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info service/pipelines.go:104 Receiver is starting... {"kind": "receiver", "name": "otlp", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.969Z info prometheusreceiver@v0.67.0/metrics_receiver.go:288 Starting scrape manager {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"} 2023-01-17T19:47:29.969Z info otlpreceiver@v0.67.0/otlp.go:72 Starting GRPC server {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "endpoint": "0.0.0.0:4317"} 2023-01-17T19:47:29.970Z warn internal/warning.go:51 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} 2023-01-17T19:47:29.970Z info otlpreceiver@v0.67.0/otlp.go:90 Starting HTTP server {"kind": "receiver", "name": "otlp", "pipeline": "metrics", "endpoint": "0.0.0.0:4318"} 2023-01-17T19:47:29.970Z info service/pipelines.go:108 Receiver started. {"kind": "receiver", "name": "otlp", "pipeline": "metrics"} 2023-01-17T19:47:29.970Z info healthcheck/handler.go:129 Health Check state change {"kind": "extension", "name": "health_check", "status": "ready"} 2023-01-17T19:47:29.970Z info service/service.go:105 Everything is ready. Begin running and processing data. ``` ### Additional context _No response_
process
can t get to add metadata to metrics component s processor what happened description apologise in advance if this is not a bug but some misconfiguration i m trying to configure otel collector to automatically add metadata namespace etc to exported metrics problem happens on any cluster i tested local kind cluster production like one etc here i ll describe the way to reproduce using local kind cluster steps to reproduce create a values yaml to configure opentelemetry collector helm chart configuration includes collector in daemonset mode enable kuberentesattributes processor enabling metrics pipeline with prometheus exporter yaml mode daemonset presets kubernetesattributes enabled true config exporters prometheus endpoint namespace datacenter service pipelines logs null metrics exporters prometheus traces null ports jaeger compact enabled false jaeger thrift enabled false jaeger grpc enabled false zipkin enabled false prmts exporter enabled true containerport serviceport protocol tcp install opentelemetry collector helm in local cluster using above configuration bash kubectl install otel opentelemetry collector tgz f values yaml install some application that exports metrics to otel collector for this i use a java application auto instrumented with opentelementry javaagent hit opentelemetry collector pod scraping endpoint and observe the exported metrics bash kubectl port forward otel opentelemetry collector agent curl metrics actual result i get a list with all metrics as expected but none of them includes the attributes like namespace name etc for example i get a bunch of jvm related metrics but none of them is tagged with attributes help datacenter jvm buffer count an estimate of the number of buffers in the pool type datacenter jvm buffer count gauge datacenter jvm buffer count id direct job dc kvstore datacenter jvm buffer count id mapped job dc kvstore datacenter jvm buffer count id mapped non volatile memory job dc kvstore help datacenter jvm buffer memory used an estimate of the memory that the java virtual machine is using for this buffer pool type datacenter jvm buffer memory used gauge datacenter jvm buffer memory used id direct job dc kvstore datacenter jvm buffer memory used id mapped job dc kvstore datacenter jvm buffer memory used id mapped non volatile memory job dc kvstore help datacenter jvm buffer total capacity an estimate of the total capacity of the buffers in this pool type datacenter jvm buffer total capacity gauge datacenter jvm buffer total capacity id direct job dc kvstore datacenter jvm buffer total capacity id mapped job dc kvstore datacenter jvm buffer total capacity id mapped non volatile memory job dc kvstore help datacenter jvm classes loaded the number of classes that are currently loaded in the java virtual machine type datacenter jvm classes loaded gauge datacenter jvm classes loaded job dc kvstore in addition otel collector seems to be configured correctly see below actual configmap i can also see in its logs a line telling the is running service pipelines go processor started kind processor name pipeline metrics expected result i would expect that all metrics will be tagged with namespace i e help datacenter jvm buffer count an estimate of the number of buffers in the pool type datacenter jvm buffer count gauge datacenter jvm buffer count id direct job dc kvstore namespace name default thanks collector version opentelemetry collector contrib environment information environment os ubuntu cluster kind opentelemetry collector configuration yaml exporters 
logging prometheus endpoint namespace datacenter extensions health check memory ballast size in percentage processors batch extract metadata namespace name deployment name statefulset name daemonset name cronjob name job name passthrough false pod association sources from resource attribute name pod ip sources from resource attribute name pod uid sources from connection memory limiter check interval limit percentage spike limit percentage receivers jaeger protocols grpc endpoint thrift compact endpoint thrift http endpoint otlp protocols grpc endpoint http endpoint prometheus config scrape configs job name opentelemetry collector scrape interval static configs targets my pod ip zipkin endpoint service extensions health check memory ballast pipelines metrics exporters prometheus processors memory limiter batch receivers otlp prometheus telemetry metrics address log output shell info service telemetry go setting up own telemetry info service telemetry go serving prometheus metrics address level basic info memorylimiterprocessor memorylimiter go using percentage memory limiter kind processor name memory limiter pipeline metrics total memory mib limit percentage spike limit percentage info memorylimiterprocessor memorylimiter go memory limiter configured kind processor name memory limiter pipeline metrics limit mib spike limit mib check interval info kube client go filtering kind processor name pipeline metrics labelselector fieldselector info service service go starting otelcol contrib version numcpu info extensions extensions go starting extensions info extensions extensions go extension is starting kind extension name health check info healthcheckextension healthcheckextension go starting health check extension kind extension name health check config config null endpoint tlssetting null cors null auth null maxrequestbodysize includemetadata false path checkcollectorpipeline enabled false interval exporterfailurethreshold warn internal warning go using the address exposes this server to every network interface which may facilitate denial of service attacks kind extension name health check documentation info extensions extensions go extension started kind extension name health check info extensions extensions go extension is starting kind extension name memory ballast info ballastextension memory ballast go setting memory ballast kind extension name memory ballast mibs info extensions extensions go extension started kind extension name memory ballast info service pipelines go starting exporters info service pipelines go exporter is starting kind exporter data type metrics name prometheus warn internal warning go using the address exposes this server to every network interface which may facilitate denial of service attacks kind exporter data type metrics name prometheus documentation info service pipelines go exporter started kind exporter data type metrics name prometheus info service pipelines go starting processors info service pipelines go processor is starting kind processor name batch pipeline metrics info service pipelines go processor started kind processor name batch pipeline metrics info service pipelines go processor is starting kind processor name memory limiter pipeline metrics info service pipelines go processor started kind processor name memory limiter pipeline metrics info service pipelines go processor is starting kind processor name pipeline metrics info service pipelines go processor started kind processor name pipeline metrics info service pipelines go starting receivers 
info service pipelines go receiver is starting kind receiver name prometheus pipeline metrics info prometheusreceiver metrics receiver go starting discovery manager kind receiver name prometheus pipeline metrics info prometheusreceiver metrics receiver go scrape job added kind receiver name prometheus pipeline metrics jobname opentelemetry collector info service pipelines go receiver started kind receiver name prometheus pipeline metrics info service pipelines go receiver is starting kind receiver name otlp pipeline metrics warn internal warning go using the address exposes this server to every network interface which may facilitate denial of service attacks kind receiver name otlp pipeline metrics documentation info prometheusreceiver metrics receiver go starting scrape manager kind receiver name prometheus pipeline metrics info otlpreceiver otlp go starting grpc server kind receiver name otlp pipeline metrics endpoint warn internal warning go using the address exposes this server to every network interface which may facilitate denial of service attacks kind receiver name otlp pipeline metrics documentation info otlpreceiver otlp go starting http server kind receiver name otlp pipeline metrics endpoint info service pipelines go receiver started kind receiver name otlp pipeline metrics info healthcheck handler go health check state change kind extension name health check status ready info service service go everything is ready begin running and processing data additional context no response
1
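A likely explanation for the record above: the collector's prometheus exporter drops resource attributes — which is where the k8sattributes processor puts namespace, pod name, etc. — unless resource-to-telemetry conversion is enabled on the exporter. A minimal sketch, assuming PyYAML is installed, of the exporter fragment that would surface those attributes as metric labels; `resource_to_telemetry_conversion` is a real option of the contrib prometheus exporter, but whether it resolves this particular report is an assumption, and the endpoint below is a placeholder.

```python
import yaml  # PyYAML; assumed available

# Hypothetical exporter fragment: resource attributes added upstream by
# the k8sattributes processor only become Prometheus labels when
# resource_to_telemetry_conversion is enabled on the exporter.
config = {
    "exporters": {
        "prometheus": {
            "endpoint": "0.0.0.0:9464",  # placeholder, not from the report
            "namespace": "datacenter",
            "resource_to_telemetry_conversion": {"enabled": True},
        }
    }
}
print(yaml.safe_dump(config, sort_keys=False))
```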
297,504
9,171,391,809
IssuesEvent
2019-03-04 01:30:38
qlcchain/go-qlc
https://api.github.com/repos/qlcchain/go-qlc
closed
refactor block struct
Priority: High Type: Enhancement
### Description of the issue - [ ] only keep the serialize function for the block interface - [ ] split the state block and smart contract block structs - [ ] add Sender and Receiver to the state block ### Issue-Type - [ ] bug report - [x] feature request - [ ] Documentation improvement
1.0
refactor block struct - ### Description of the issue - [ ] only keep the serialize function for the block interface - [ ] split the state block and smart contract block structs - [ ] add Sender and Receiver to the state block ### Issue-Type - [ ] bug report - [x] feature request - [ ] Documentation improvement
non_process
refactor block struct description of the issue only keep serialize function for block interface split state block and smartcontract block struct add sender and receiver to state block issue type bug report feature request documentation improvement
0
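The checklist in the record above amounts to a small type split. The project is Go, but as a language-neutral sketch the intended shape might look like the following in Python; every name and field here is illustrative, not taken from the go-qlc codebase.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

class Block(ABC):
    """Interface keeps only serialization, per the first checkbox."""
    @abstractmethod
    def serialize(self) -> bytes: ...

@dataclass
class StateBlock(Block):
    sender: str    # new field requested by the issue
    receiver: str  # new field requested by the issue
    def serialize(self) -> bytes:
        return f"state:{self.sender}->{self.receiver}".encode()

@dataclass
class SmartContractBlock(Block):
    code_hash: str  # hypothetical field, for illustration only
    def serialize(self) -> bytes:
        return f"contract:{self.code_hash}".encode()

print(StateBlock("alice", "bob").serialize())
```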
12,943
15,307,786,346
IssuesEvent
2021-02-24 21:25:34
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
URI is not hierarchical exception with email reference (DITA OT 3.6)
bug preprocess priority/medium
I publish this DITA Map to HTML5 using DITA OT 3.6: <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd"> <map> <title>Test</title> <topicref href="mailto:support@oxygenxml.com" scope="peer" format="html"> <topicmeta> <navtitle>Contact support</navtitle> </topicmeta> </topicref> </map> the publishing breaks with: /Users/../dita-ot/plugins/org.dita.base/build_preprocess.xml:260: java.lang.IllegalArgumentException: URI is not hierarchical at java.base/java.io.File.<init>(File.java:421) at org.dita.dost.store.StreamStore.exists(StreamStore.java:300) at org.dita.dost.reader.ChunkMapReader.processTopicref(ChunkMapReader.java:324) at org.dita.dost.reader.ChunkMapReader.process(ChunkMapReader.java:136) at org.dita.dost.writer.AbstractDomFilter.read(AbstractDomFilter.java:41) at org.dita.dost.reader.ChunkMapReader.read(ChunkMapReader.java:112) at org.dita.dost.module.ChunkModule.execute(ChunkModule.java:80) This used to work with DITA OT 3.5.4.
1.0
URI is not hierarchical exception with email reference (DITA OT 3.6) - I publish this DITA Map to HTML5 using DITA OT 3.6: <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd"> <map> <title>Test</title> <topicref href="mailto:support@oxygenxml.com" scope="peer" format="html"> <topicmeta> <navtitle>Contact support</navtitle> </topicmeta> </topicref> </map> the publishing breaks with: /Users/../dita-ot/plugins/org.dita.base/build_preprocess.xml:260: java.lang.IllegalArgumentException: URI is not hierarchical at java.base/java.io.File.<init>(File.java:421) at org.dita.dost.store.StreamStore.exists(StreamStore.java:300) at org.dita.dost.reader.ChunkMapReader.processTopicref(ChunkMapReader.java:324) at org.dita.dost.reader.ChunkMapReader.process(ChunkMapReader.java:136) at org.dita.dost.writer.AbstractDomFilter.read(AbstractDomFilter.java:41) at org.dita.dost.reader.ChunkMapReader.read(ChunkMapReader.java:112) at org.dita.dost.module.ChunkModule.execute(ChunkModule.java:80) This used to work with DITA OT 3.5.4.
process
uri is not hierarchical exception with email reference dita ot i publish this dita map to using dita ot test contact support the publishing breaks with users dita ot plugins org dita base build preprocess xml java lang illegalargumentexception uri is not hierarchical at java base java io file file java at org dita dost store streamstore exists streamstore java at org dita dost reader chunkmapreader processtopicref chunkmapreader java at org dita dost reader chunkmapreader process chunkmapreader java at org dita dost writer abstractdomfilter read abstractdomfilter java at org dita dost reader chunkmapreader read chunkmapreader java at org dita dost module chunkmodule execute chunkmodule java this used to work with dita ot
1
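The stack trace in the record above comes from handing a `mailto:` URI to a file-path constructor. A small sketch of why that fails and what a guard could check; `urllib.parse.urlparse` stands in for Java's `java.net.URI`, and the guard condition is an assumption about where a fix might go, not the actual DITA-OT patch.

```python
from urllib.parse import urlparse

for ref in ("mailto:support@oxygenxml.com", "file:///tmp/map.ditamap"):
    parts = urlparse(ref)
    # mailto: URIs are opaque: there is no hierarchical path to hand to a
    # filesystem, which is what "URI is not hierarchical" means here.
    is_file_like = parts.scheme in ("", "file")
    print(ref, "->", "treat as file" if is_file_like else "skip existence check")
```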
43,763
2,892,422,943
IssuesEvent
2015-06-15 12:58:38
jonathf/matlab2cpp
https://api.github.com/repos/jonathf/matlab2cpp
closed
Fieldnames
medium priority
The field names of structure variables can be retrieved through fieldnames, which is not recognized by the translation. Example: a = struct; a(1).b = "c"; d = fieldnames(a)
1.0
Fieldnames - The field names of structure variables can be retrieved through fieldnames, which is not recognized by the translation. Example: a = struct; a(1).b = "c"; d = fieldnames(a)
non_process
fieldnames structure variables can be retrieved through fieldnames not recognized by translation example a struct a b c d filednames a
0
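For readers unfamiliar with MATLAB, `fieldnames` lists a struct's field names. A rough Python analogy of the behavior a translator would have to model; the dict-based stand-in below is an assumption for illustration, not matlab2cpp output.

```python
# MATLAB:  a = struct; a(1).b = "c"; d = fieldnames(a)
a = [{"b": "c"}]        # struct array modeled as a list of dicts
d = list(a[0].keys())   # fieldnames(a) -> names of the fields
print(d)                # ['b']
```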
20,327
26,968,528,879
IssuesEvent
2023-02-09 01:26:28
MikaylaFischler/cc-mek-scada
https://api.github.com/repos/MikaylaFischler/cc-mek-scada
closed
Change Precision to Handle Steady State Error
enhancement plc supervisor process control
Using more precision than the tenths place of the burn rate may improve the responsiveness of the closed-loop controllers.
1.0
Change Precision to Handle Steady State Error - Using more precision than the tenths place of the burn rate may improve the responsiveness of the closed-loop controllers.
process
change precision to handle steady state error possibly using more than place of burn rate may improve responsiveness of closed loop controllers
1
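The record above is about actuator quantization causing steady-state error. A toy proportional loop, with the commanded burn rate rounded to a fixed resolution, shows the effect; the gains, setpoint, and integrating-plant model are illustrative assumptions, not the mod's actual control law.

```python
def p_command(setpoint: float, actual: float, kp: float = 0.5, res: float = 0.1) -> float:
    """Proportional command quantized to the actuator resolution `res`."""
    raw = kp * (setpoint - actual)
    return round(raw / res) * res

for res in (0.1, 0.01):  # tenths vs hundredths of burn rate
    actual = 0.0
    for _ in range(100):
        actual += p_command(10.0, actual, res=res)  # toy integrating plant
    # Coarser resolution stalls once the command rounds to zero,
    # leaving a larger residual (steady-state) error.
    print(f"res={res}: settled at {actual:.3f} (error {10.0 - actual:.3f})")
```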
4,401
7,296,252,958
IssuesEvent
2018-02-26 10:08:46
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Call of bind function from postMessage and eval should be instrumented
AREA: client AREA: server SYSTEM: resource processing TYPE: bug
A client script may contain code like the following: ```js var w = a.postMessage ? a.postMessage.bind(a) : function() {} ``` And currently it is processed as: ```js var w = __get$PostMessage(a) ? a.postMessage.bind(a) : function() {} ```
1.0
Call of bind function from postMessage and eval should be instrumented - A client script may contain code like the following: ```js var w = a.postMessage ? a.postMessage.bind(a) : function() {} ``` And currently it is processed as: ```js var w = __get$PostMessage(a) ? a.postMessage.bind(a) : function() {} ```
process
call of bind function from postmessage and eval should be instrumented client script may contains code as below js var w a postmessage a postmessage bind a function and corently it processed js var w get postmessage a a postmessage bind a function
1
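To make the desired transform concrete: the `.bind` receiver should be routed through the getter as well, not just the ternary condition. A naive regex sketch in Python of the intended rewrite; real hammerhead instruments an AST rather than strings, so this is illustration only.

```python
import re

src = "var w = a.postMessage ? a.postMessage.bind(a) : function() {}"

# Rewrite every `<ident>.postMessage` access through the getter, which
# also covers the `.bind(a)` call site the current processing misses.
out = re.sub(r"(\w+)\.postMessage", r"__get$PostMessage(\1)", src)
print(out)
# var w = __get$PostMessage(a) ? __get$PostMessage(a).bind(a) : function() {}
```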
687,162
23,515,782,898
IssuesEvent
2022-08-18 21:11:51
ezolenko/rollup-plugin-typescript2
https://api.github.com/repos/ezolenko/rollup-plugin-typescript2
closed
some type-only TS files are ignored when using `tsconfig` `files`?
kind: bug solution: duplicate problem: removed issue template priority: in progress problem: plugin order topic: type-only / emit-less imports
## What happens and why it is incorrect there are many files ending in `*.ts` in the src, but only some have been transpiled 1. Does `tsc` have the same output? If so, please explain why this is incorrect behavior No, `tsc` runs as expected. ![image](https://user-images.githubusercontent.com/52886395/185072389-587c5bfa-0817-4c06-a4c9-3a5833ba1457.png) here's the file structure: ![image](https://user-images.githubusercontent.com/52886395/185073553-3a473e39-d5ae-41ea-b5dc-f32a5e10a4eb.png) but rpt2 seems to ignore the file named `component.ts`: <img width="1241" alt="image" src="https://user-images.githubusercontent.com/52886395/185072856-019e3e3e-9598-4a5e-8dfb-10cb9f7276cc.png"> ## Environment ### Versions ```text System: OS: macOS 11.4 CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz Memory: 59.11 MB / 32.00 GB Shell: 5.8 - /bin/zsh Binaries: Node: 14.17.4 - ~/.nvm/versions/node/v14.17.4/bin/node Yarn: 1.22.11 - ~/.nvm/versions/node/v14.17.4/bin/yarn npm: 6.14.14 - ~/.nvm/versions/node/v14.17.4/bin/npm npmPackages: rollup: ^2.45.2 => 2.45.2 rollup-plugin-typescript2: ^0.30.0 => 0.30.0 typescript: ^4.3.5 => 4.3.5 npmGlobalPackages: typescript: 4.3.5 ``` <!--- paste your rollup config below if relevant ---> <details> <summary><h4><code>rollup.config.js</code></h4>: </summary> <!--- INSERT rollup.config.ts IN THE CODE SNIPPET BELOW ---> ```js import { nodeResolve } from '@rollup/plugin-node-resolve'; import replace from '@rollup/plugin-replace'; import postcss from 'rollup-plugin-postcss'; import image from '@rollup/plugin-image'; import ts from 'rollup-plugin-typescript2'; import dts from 'rollup-plugin-dts'; import json from '@rollup/plugin-json'; import alias from '@rollup/plugin-alias'; import autoExternal from 'rollup-plugin-auto-external'; import path from 'path'; import importCss from './rollup-plugin-import-css'; const resolve = (...dirs) => path.resolve(__dirname, '../', ...dirs); function toUpperCase(match) { return match.replace('-', '').toUpperCase(); } export default [ { input: resolve('src/lib/index.tsx'), output: [ { file: resolve('dist/index.esm.js'), format: 'es', sourcemap: true, }, ], external: id => { try { const idSourcePath = require.resolve(id, { paths: [resolve()] }); return idSourcePath && idSourcePath.includes('node_modules'); } catch (error) { return false; } }, plugins: [ ts({ check: false, tsconfig: resolve('tsconfig.json'), tsconfigOverride: { compilerOptions: { declaration: true, declarationDir: resolve('dist/type') }, }, verbosity: 2, useTsconfigDeclarationDir: true, include: ['*.ts+(|x)', '**/*.ts+(|x)', '*.js+(|x)', '**/*.js+(|x)'], }), image(), postcss({ extensions: ['.css', '.scss', '.less'], autoModules: true, extract: 'index.css', namedExports(name) { let reg = /-[a-z]/g; const temp = name.replace(reg, toUpperCase); return temp; }, }), json(), alias({ entries: [ { find: 'src', replacement: resolve('src'), }, ], }), nodeResolve({ extensions: ['.js', '.jsx', '.ts', '.tsx'], // some package.json files have a "browser" field which specifies // alternative files to load for people bundling for the browser. 
If // that's you, either use this option or add "browser" to the // "mainfields" option, otherwise pkg.browser will be ignored browser: true, preferBuiltins: true, mainFields: ['browser', 'jsnext', 'module', 'main'], }), replace({ 'process.env.NODE_ENV': JSON.stringify('development'), 'process.env.SEMI_ICON_LAZY_LOAD': true, preventAssignment: true, }), autoExternal({ packagePath: resolve(), }), importCss(), ], }, { input: resolve('dist/type/src/lib/index.d.ts'), external: id => { try { const idSourcePath = require.resolve(id, { paths: [resolve()] }); return idSourcePath && idSourcePath.includes('node_modules'); } catch (error) { return false; } }, output: [{ file: resolve('dist/index.d.ts'), format: 'es' }], plugins: [dts()], }, ]; ``` </details> <!--- paste your tsconfig.json below if relevant ---> <details> <summary><h4><code>tsconfig.json</code></h4>: </summary> <!--- INSERT tsconfig.json IN THE CODE SNIPPET BELOW ---> ```json5 { "compilerOptions": { "target": "es6", "module": "esnext", "lib": ["es7", "dom"], "sourceMap": true, "allowJs": true, "jsx": "react", "moduleResolution": "node", "experimentalDecorators": true, "rootDir": "./", "baseUrl": "./src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": false, "noImplicitAny": false, "importHelpers": true, "strictNullChecks": false, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "noEmit": true, "allowSyntheticDefaultImports": true, "esModuleInterop": false, "paths": { "@ies/kefu-editor": ["src/lib/*"] }, "plugins": [ { "transform": "ts-optchain/transform" } ] }, "typeRoots": ["node", "node_modules/@types", "./src/typings"], "exclude": ["node_modules"] } ``` </details> <!--- paste your package.json below if relevant ---> <details> <summary><h4><code>package.json</code></h4>: </summary> <!--- INSERT package.json IN THE CODE SNIPPET BELOW ---> ```json ``` </details> <!--- add verbosity verbosity: 3 to plugin options and attach output if relevant (censor out anything sensitive) ---> <details> <summary><h4>plugin output with verbosity 3</h4>: </summary> <!--- INSERT plugin output IN THE CODE SNIPPET BELOW or attach ---> ```text /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx → dist/index.esm.js... 
rpt2: built-in options overrides: { "noEmitHelpers": false, "importHelpers": true, "noResolve": false, "noEmit": false, "inlineSourceMap": false, "outDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/placeholder", "moduleResolution": 2, "allowNonTsExtensions": true } rpt2: parsed tsconfig: { "options": { "isolatedModules": false, "declaration": true, "target": 2, "module": 99, "lib": [ "lib.es2016.d.ts", "lib.dom.d.ts" ], "sourceMap": true, "allowJs": true, "jsx": 2, "moduleResolution": 2, "experimentalDecorators": true, "rootDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable", "baseUrl": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": false, "noImplicitAny": false, "importHelpers": true, "strictNullChecks": false, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "allowSyntheticDefaultImports": true, "esModuleInterop": false, "paths": { "@ies/kefu-tag-table": [ "src/lib/*" ] }, "plugins": [ { "transform": "ts-optchain/transform" } ], "declarationDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type", "configFilePath": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/tsconfig.json", "pathsBasePath": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable", "noEmitHelpers": false, "noResolve": false, "noEmit": false, "inlineSourceMap": false, "outDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/placeholder", "allowNonTsExtensions": true }, "fileNames": [ "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx" ], "typeAcquisition": { "enable": false, "include": [], "exclude": [] }, "raw": { "files": [ "src/lib/index.tsx" ], "compilerOptions": { "isolatedModules": false, "declaration": true, "target": "es6", "module": "esnext", "lib": [ "es7", "dom" ], "sourceMap": true, "allowJs": true, "jsx": "react", "moduleResolution": "node", "experimentalDecorators": true, "rootDir": "./", "baseUrl": "./src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": false, "noImplicitAny": false, "importHelpers": true, "strictNullChecks": false, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "allowSyntheticDefaultImports": true, "esModuleInterop": false, "paths": { "@ies/kefu-tag-table": [ "src/lib/*" ] }, "plugins": [ { "transform": "ts-optchain/transform" } ], "declarationDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type" }, "typeRoots": [ "node", "node_modules/@types", "./src/typings" ], "exclude": [ "node_modules" ], "compileOnSave": false }, "errors": [], "wildcardDirectories": {}, "compileOnSave": false } rpt2: typescript version: 4.3.5 rpt2: tslib version: 2.1.0 rpt2: rollup version: 2.45.2 rpt2: rollup-plugin-typescript2 version: 0.30.0 rpt2: plugin options: { "check": false, "tsconfig": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/tsconfig.json", "tsconfigOverride": { "compilerOptions": { "declaration": true, "declarationDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type" } }, "verbosity": 3, "useTsconfigDeclarationDir": true, "include": [ "*.ts+(|x)", "**/*.ts+(|x)", "*.js+(|x)", "**/*.js+(|x)" ], "clean": false, "cacheRoot": 
"/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2", "exclude": [ "*.d.ts", "**/*.d.ts" ], "abortOnError": true, "rollupCommonJSResolveHack": false, "transformers": [], "tsconfigDefaults": {}, "objectHashIgnoreUnknownHack": false, "cwd": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable", "typescript": "version 4.3.5" } rpt2: rollup config: { "input": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx", "plugins": [ { "name": "rpt2" }, { "name": "image" }, { "name": "postcss" }, { "name": "json" }, { "name": "alias" }, { "name": "node-resolve" }, { "name": "replace" }, { "name": "auto-external" }, {}, { "name": "stdin" } ], "output": [ { "file": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/index.esm.js", "format": "es", "plugins": [], "sourcemap": true } ] } rpt2: tsconfig path: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/tsconfig.json rpt2: included: [ "*.ts+(|x)", "**/*.ts+(|x)", "*.js+(|x)", "**/*.js+(|x)" ] rpt2: excluded: [ "*.d.ts", "**/*.d.ts" ] rpt2: Ambient types: rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__core/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__generator/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__template/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__traverse/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/color-name/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/css-modules-loader-core/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/estree/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/graceful-fs/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/hast/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/istanbul-lib-coverage/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/istanbul-lib-report/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/istanbul-reports/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/jest/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/json-schema/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/mdast/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/node/ts3.7/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/normalize-package-data/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/parse-json/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/parse5/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/prettier/index.d.ts rpt2: 
/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/prop-types/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/q/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/react/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/react-dom/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/resolve/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/stack-utils/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/unist/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/yargs/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/yargs-parser/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/json-schema/index.d.ts rpt2: /Users/bytedance/node_modules/@types/json5/index.d.ts rpt2: /Users/bytedance/node_modules/@types/minimist/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/node/ts3.7/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/normalize-package-data/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/parse-json/index.d.ts rpt2: ambient types changed, redoing all semantic diagnostics rpt2: transpiling '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: cache: '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/rpt2_9e69c65b2d74fee0ad388d42568ea4ce7ba1b27d/code/cache/7cdbf1c7db69c2fd53815824bca49513a129c642' rpt2: cache miss rpt2: generated declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: dependency '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: resolving './utils' imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: dependency '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: resolving './mock' imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: transpiling '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: cache: '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/rpt2_9e69c65b2d74fee0ad388d42568ea4ce7ba1b27d/code/cache/a3be5c51ebb3fef03fc9a43c539d188ad1577216' rpt2: cache hit rpt2: generated declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: transpiling 
'/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: cache: '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/rpt2_9e69c65b2d74fee0ad388d42568ea4ce7ba1b27d/code/cache/d1d6e9acec5e9e0b4392fcbc5bf623dab081aa31' rpt2: cache hit rpt2: generated declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: generating target 1 rpt2: rolling caches rpt2: emitting declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type/src/lib/index.d.ts' rpt2: emitting declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type/src/lib/mock.d.ts' rpt2: emitting declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type/src/lib/utils.d.ts' (!) Broken sourcemap https://rollupjs.org/guide/en/#warning-sourcemap-is-likely-to-be-incorrect Plugins that transform code (such as 'at position 9') should generate accompanying sourcemaps created dist/index.esm.js in 2.7s User/dist/type/src/lib/index.d.ts → dist/index.d.ts... [!] Error: Could not resolve './component' from dist/type/src/lib/index.d.ts Error: Could not resolve './component' from dist/type/src/lib/index.d.ts ``` </details>
1.0
some type-only TS files are ignored when using `tsconfig` `files`? - ## What happens and why it is incorrect there are many files ending in `*.ts` in the src, but only some have been transpiled 1. Does `tsc` have the same output? If so, please explain why this is incorrect behavior No, `tsc` runs as expected. ![image](https://user-images.githubusercontent.com/52886395/185072389-587c5bfa-0817-4c06-a4c9-3a5833ba1457.png) here's the file structure: ![image](https://user-images.githubusercontent.com/52886395/185073553-3a473e39-d5ae-41ea-b5dc-f32a5e10a4eb.png) but rpt2 seems to ignore the file named `component.ts`: <img width="1241" alt="image" src="https://user-images.githubusercontent.com/52886395/185072856-019e3e3e-9598-4a5e-8dfb-10cb9f7276cc.png"> ## Environment ### Versions ```text System: OS: macOS 11.4 CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz Memory: 59.11 MB / 32.00 GB Shell: 5.8 - /bin/zsh Binaries: Node: 14.17.4 - ~/.nvm/versions/node/v14.17.4/bin/node Yarn: 1.22.11 - ~/.nvm/versions/node/v14.17.4/bin/yarn npm: 6.14.14 - ~/.nvm/versions/node/v14.17.4/bin/npm npmPackages: rollup: ^2.45.2 => 2.45.2 rollup-plugin-typescript2: ^0.30.0 => 0.30.0 typescript: ^4.3.5 => 4.3.5 npmGlobalPackages: typescript: 4.3.5 ``` <!--- paste your rollup config below if relevant ---> <details> <summary><h4><code>rollup.config.js</code></h4>: </summary> <!--- INSERT rollup.config.ts IN THE CODE SNIPPET BELOW ---> ```js import { nodeResolve } from '@rollup/plugin-node-resolve'; import replace from '@rollup/plugin-replace'; import postcss from 'rollup-plugin-postcss'; import image from '@rollup/plugin-image'; import ts from 'rollup-plugin-typescript2'; import dts from 'rollup-plugin-dts'; import json from '@rollup/plugin-json'; import alias from '@rollup/plugin-alias'; import autoExternal from 'rollup-plugin-auto-external'; import path from 'path'; import importCss from './rollup-plugin-import-css'; const resolve = (...dirs) => path.resolve(__dirname, '../', ...dirs); function toUpperCase(match) { return match.replace('-', '').toUpperCase(); } export default [ { input: resolve('src/lib/index.tsx'), output: [ { file: resolve('dist/index.esm.js'), format: 'es', sourcemap: true, }, ], external: id => { try { const idSourcePath = require.resolve(id, { paths: [resolve()] }); return idSourcePath && idSourcePath.includes('node_modules'); } catch (error) { return false; } }, plugins: [ ts({ check: false, tsconfig: resolve('tsconfig.json'), tsconfigOverride: { compilerOptions: { declaration: true, declarationDir: resolve('dist/type') }, }, verbosity: 2, useTsconfigDeclarationDir: true, include: ['*.ts+(|x)', '**/*.ts+(|x)', '*.js+(|x)', '**/*.js+(|x)'], }), image(), postcss({ extensions: ['.css', '.scss', '.less'], autoModules: true, extract: 'index.css', namedExports(name) { let reg = /-[a-z]/g; const temp = name.replace(reg, toUpperCase); return temp; }, }), json(), alias({ entries: [ { find: 'src', replacement: resolve('src'), }, ], }), nodeResolve({ extensions: ['.js', '.jsx', '.ts', '.tsx'], // some package.json files have a "browser" field which specifies // alternative files to load for people bundling for the browser. 
If // that's you, either use this option or add "browser" to the // "mainfields" option, otherwise pkg.browser will be ignored browser: true, preferBuiltins: true, mainFields: ['browser', 'jsnext', 'module', 'main'], }), replace({ 'process.env.NODE_ENV': JSON.stringify('development'), 'process.env.SEMI_ICON_LAZY_LOAD': true, preventAssignment: true, }), autoExternal({ packagePath: resolve(), }), importCss(), ], }, { input: resolve('dist/type/src/lib/index.d.ts'), external: id => { try { const idSourcePath = require.resolve(id, { paths: [resolve()] }); return idSourcePath && idSourcePath.includes('node_modules'); } catch (error) { return false; } }, output: [{ file: resolve('dist/index.d.ts'), format: 'es' }], plugins: [dts()], }, ]; ``` </details> <!--- paste your tsconfig.json below if relevant ---> <details> <summary><h4><code>tsconfig.json</code></h4>: </summary> <!--- INSERT tsconfig.json IN THE CODE SNIPPET BELOW ---> ```json5 { "compilerOptions": { "target": "es6", "module": "esnext", "lib": ["es7", "dom"], "sourceMap": true, "allowJs": true, "jsx": "react", "moduleResolution": "node", "experimentalDecorators": true, "rootDir": "./", "baseUrl": "./src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": false, "noImplicitAny": false, "importHelpers": true, "strictNullChecks": false, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "noEmit": true, "allowSyntheticDefaultImports": true, "esModuleInterop": false, "paths": { "@ies/kefu-editor": ["src/lib/*"] }, "plugins": [ { "transform": "ts-optchain/transform" } ] }, "typeRoots": ["node", "node_modules/@types", "./src/typings"], "exclude": ["node_modules"] } ``` </details> <!--- paste your package.json below if relevant ---> <details> <summary><h4><code>package.json</code></h4>: </summary> <!--- INSERT package.json IN THE CODE SNIPPET BELOW ---> ```json ``` </details> <!--- add verbosity verbosity: 3 to plugin options and attach output if relevant (censor out anything sensitive) ---> <details> <summary><h4>plugin output with verbosity 3</h4>: </summary> <!--- INSERT plugin output IN THE CODE SNIPPET BELOW or attach ---> ```text /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx → dist/index.esm.js... 
rpt2: built-in options overrides: { "noEmitHelpers": false, "importHelpers": true, "noResolve": false, "noEmit": false, "inlineSourceMap": false, "outDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/placeholder", "moduleResolution": 2, "allowNonTsExtensions": true } rpt2: parsed tsconfig: { "options": { "isolatedModules": false, "declaration": true, "target": 2, "module": 99, "lib": [ "lib.es2016.d.ts", "lib.dom.d.ts" ], "sourceMap": true, "allowJs": true, "jsx": 2, "moduleResolution": 2, "experimentalDecorators": true, "rootDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable", "baseUrl": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": false, "noImplicitAny": false, "importHelpers": true, "strictNullChecks": false, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "allowSyntheticDefaultImports": true, "esModuleInterop": false, "paths": { "@ies/kefu-tag-table": [ "src/lib/*" ] }, "plugins": [ { "transform": "ts-optchain/transform" } ], "declarationDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type", "configFilePath": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/tsconfig.json", "pathsBasePath": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable", "noEmitHelpers": false, "noResolve": false, "noEmit": false, "inlineSourceMap": false, "outDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/placeholder", "allowNonTsExtensions": true }, "fileNames": [ "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx" ], "typeAcquisition": { "enable": false, "include": [], "exclude": [] }, "raw": { "files": [ "src/lib/index.tsx" ], "compilerOptions": { "isolatedModules": false, "declaration": true, "target": "es6", "module": "esnext", "lib": [ "es7", "dom" ], "sourceMap": true, "allowJs": true, "jsx": "react", "moduleResolution": "node", "experimentalDecorators": true, "rootDir": "./", "baseUrl": "./src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": false, "noImplicitAny": false, "importHelpers": true, "strictNullChecks": false, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "allowSyntheticDefaultImports": true, "esModuleInterop": false, "paths": { "@ies/kefu-tag-table": [ "src/lib/*" ] }, "plugins": [ { "transform": "ts-optchain/transform" } ], "declarationDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type" }, "typeRoots": [ "node", "node_modules/@types", "./src/typings" ], "exclude": [ "node_modules" ], "compileOnSave": false }, "errors": [], "wildcardDirectories": {}, "compileOnSave": false } rpt2: typescript version: 4.3.5 rpt2: tslib version: 2.1.0 rpt2: rollup version: 2.45.2 rpt2: rollup-plugin-typescript2 version: 0.30.0 rpt2: plugin options: { "check": false, "tsconfig": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/tsconfig.json", "tsconfigOverride": { "compilerOptions": { "declaration": true, "declarationDir": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type" } }, "verbosity": 3, "useTsconfigDeclarationDir": true, "include": [ "*.ts+(|x)", "**/*.ts+(|x)", "*.js+(|x)", "**/*.js+(|x)" ], "clean": false, "cacheRoot": 
"/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2", "exclude": [ "*.d.ts", "**/*.d.ts" ], "abortOnError": true, "rollupCommonJSResolveHack": false, "transformers": [], "tsconfigDefaults": {}, "objectHashIgnoreUnknownHack": false, "cwd": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable", "typescript": "version 4.3.5" } rpt2: rollup config: { "input": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx", "plugins": [ { "name": "rpt2" }, { "name": "image" }, { "name": "postcss" }, { "name": "json" }, { "name": "alias" }, { "name": "node-resolve" }, { "name": "replace" }, { "name": "auto-external" }, {}, { "name": "stdin" } ], "output": [ { "file": "/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/index.esm.js", "format": "es", "plugins": [], "sourcemap": true } ] } rpt2: tsconfig path: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/tsconfig.json rpt2: included: [ "*.ts+(|x)", "**/*.ts+(|x)", "*.js+(|x)", "**/*.js+(|x)" ] rpt2: excluded: [ "*.d.ts", "**/*.d.ts" ] rpt2: Ambient types: rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__core/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__generator/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__template/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/babel__traverse/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/color-name/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/css-modules-loader-core/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/estree/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/graceful-fs/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/hast/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/istanbul-lib-coverage/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/istanbul-lib-report/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/istanbul-reports/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/jest/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/json-schema/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/mdast/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/node/ts3.7/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/normalize-package-data/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/parse-json/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/parse5/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/prettier/index.d.ts rpt2: 
/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/prop-types/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/q/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/react/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/react-dom/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/resolve/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/stack-utils/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/unist/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/yargs/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/yargs-parser/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/json-schema/index.d.ts rpt2: /Users/bytedance/node_modules/@types/json5/index.d.ts rpt2: /Users/bytedance/node_modules/@types/minimist/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/node/ts3.7/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/normalize-package-data/index.d.ts rpt2: /Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/@types/parse-json/index.d.ts rpt2: ambient types changed, redoing all semantic diagnostics rpt2: transpiling '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: cache: '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/rpt2_9e69c65b2d74fee0ad388d42568ea4ce7ba1b27d/code/cache/7cdbf1c7db69c2fd53815824bca49513a129c642' rpt2: cache miss rpt2: generated declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: dependency '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: resolving './utils' imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: dependency '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: resolving './mock' imported by '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' rpt2: to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: transpiling '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: cache: '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/rpt2_9e69c65b2d74fee0ad388d42568ea4ce7ba1b27d/code/cache/a3be5c51ebb3fef03fc9a43c539d188ad1577216' rpt2: cache hit rpt2: generated declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' rpt2: transpiling 
'/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: cache: '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/node_modules/.cache/rollup-plugin-typescript2/rpt2_9e69c65b2d74fee0ad388d42568ea4ce7ba1b27d/code/cache/d1d6e9acec5e9e0b4392fcbc5bf623dab081aa31' rpt2: cache hit rpt2: generated declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' rpt2: generating target 1 rpt2: rolling caches rpt2: emitting declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/index.tsx' to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type/src/lib/index.d.ts' rpt2: emitting declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/mock.ts' to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type/src/lib/mock.d.ts' rpt2: emitting declarations for '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/src/lib/utils.ts' to '/Users/bytedance/Public/helpdesk-semi-ui/packages/KefuCascaderTable/dist/type/src/lib/utils.d.ts' (!) Broken sourcemap https://rollupjs.org/guide/en/#warning-sourcemap-is-likely-to-be-incorrect Plugins that transform code (such as 'at position 9') should generate accompanying sourcemaps created dist/index.esm.js in 2.7s User/dist/type/src/lib/index.d.ts → dist/index.d.ts... [!] Error: Could not resolve './component' from dist/type/src/lib/index.d.ts Error: Could not resolve './component' from dist/type/src/lib/index.d.ts ``` </details>
non_process
some type only ts files are ignored when using tsconfig files what happens and why it is incorrect there are many files ending in ts in the src but only some have been transpiled does tsc have the same output if so please explain why this is incorrect behavior no, tsc run as expectly here s file structure but seems to ignore the file named component ts img width alt image src environment versions text system os macos cpu intel r core tm cpu memory mb gb shell bin zsh binaries node nvm versions node bin node yarn nvm versions node bin yarn npm nvm versions node bin npm npmpackages rollup rollup plugin typescript npmglobalpackages typescript rollup config js js import noderesolve from rollup plugin node resolve import replace from rollup plugin replace import postcss from rollup plugin postcss import image from rollup plugin image import ts from rollup plugin import dts from rollup plugin dts import json from rollup plugin json import alias from rollup plugin alias import autoexternal from rollup plugin auto external import path from path import importcss from rollup plugin import css const resolve dirs path resolve dirname dirs function touppercase match return match replace touppercase export default input resolve src lib index tsx output file resolve dist index esm js format es sourcemap true external id try const idsourcepath require resolve id paths return idsourcepath idsourcepath includes node modules catch error return false plugins ts check false tsconfig resolve tsconfig json tsconfigoverride compileroptions declaration true declarationdir resolve dist type verbosity usetsconfigdeclarationdir true include image postcss extensions automodules true extract index css namedexports name let reg g const temp name replace reg touppercase return temp json alias entries find src replacement resolve src noderesolve extensions some package json files have a browser field which specifies alternative files to load for people bundling for the browser if that s you either use this option or add browser to the mainfields option otherwise pkg browser will be ignored browser true preferbuiltins true mainfields replace process env node env json stringify development process env semi icon lazy load true preventassignment true autoexternal packagepath resolve importcss input resolve dist type src lib index d ts external id try const idsourcepath require resolve id paths return idsourcepath idsourcepath includes node modules catch error return false output plugins tsconfig json compileroptions target module esnext lib sourcemap true allowjs true jsx react moduleresolution node experimentaldecorators true rootdir baseurl src forceconsistentcasinginfilenames true noimplicitreturns true noimplicitthis false noimplicitany false importhelpers true strictnullchecks false suppressimplicitanyindexerrors true nounusedlocals true noemit true allowsyntheticdefaultimports true esmoduleinterop false paths ies kefu editor plugins transform ts optchain transform typeroots exclude package json json plugin output with verbosity text users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx → dist index esm js built in options overrides noemithelpers false importhelpers true noresolve false noemit false inlinesourcemap false outdir users bytedance public helpdesk semi ui packages kefucascadertable node modules cache rollup plugin placeholder moduleresolution allownontsextensions true parsed tsconfig options isolatedmodules false declaration true target module lib lib d ts lib dom d ts sourcemap 
true allowjs true jsx moduleresolution experimentaldecorators true rootdir users bytedance public helpdesk semi ui packages kefucascadertable baseurl users bytedance public helpdesk semi ui packages kefucascadertable src forceconsistentcasinginfilenames true noimplicitreturns true noimplicitthis false noimplicitany false importhelpers true strictnullchecks false suppressimplicitanyindexerrors true nounusedlocals true allowsyntheticdefaultimports true esmoduleinterop false paths ies kefu tag table src lib plugins transform ts optchain transform declarationdir users bytedance public helpdesk semi ui packages kefucascadertable dist type configfilepath users bytedance public helpdesk semi ui packages kefucascadertable tsconfig json pathsbasepath users bytedance public helpdesk semi ui packages kefucascadertable noemithelpers false noresolve false noemit false inlinesourcemap false outdir users bytedance public helpdesk semi ui packages kefucascadertable node modules cache rollup plugin placeholder allownontsextensions true filenames users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx typeacquisition enable false include exclude raw files src lib index tsx compileroptions isolatedmodules false declaration true target module esnext lib dom sourcemap true allowjs true jsx react moduleresolution node experimentaldecorators true rootdir baseurl src forceconsistentcasinginfilenames true noimplicitreturns true noimplicitthis false noimplicitany false importhelpers true strictnullchecks false suppressimplicitanyindexerrors true nounusedlocals true allowsyntheticdefaultimports true esmoduleinterop false paths ies kefu tag table src lib plugins transform ts optchain transform declarationdir users bytedance public helpdesk semi ui packages kefucascadertable dist type typeroots node node modules types src typings exclude node modules compileonsave false errors wildcarddirectories compileonsave false typescript version tslib version rollup version rollup plugin version plugin options check false tsconfig users bytedance public helpdesk semi ui packages kefucascadertable tsconfig json tsconfigoverride compileroptions declaration true declarationdir users bytedance public helpdesk semi ui packages kefucascadertable dist type verbosity usetsconfigdeclarationdir true include ts x ts x js x js x clean false cacheroot users bytedance public helpdesk semi ui packages kefucascadertable node modules cache rollup plugin exclude d ts d ts abortonerror true rollupcommonjsresolvehack false transformers tsconfigdefaults objecthashignoreunknownhack false cwd users bytedance public helpdesk semi ui packages kefucascadertable typescript version rollup config input users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx plugins name name image name postcss name json name alias name node resolve name replace name auto external name stdin output file users bytedance public helpdesk semi ui packages kefucascadertable dist index esm js format es plugins sourcemap true tsconfig path users bytedance public helpdesk semi ui packages kefucascadertable tsconfig json included ts x ts x js x js x excluded d ts d ts ambient types users bytedance public helpdesk semi ui packages kefucascadertable node modules types babel core index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types babel generator index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types babel template index d ts users bytedance 
public helpdesk semi ui packages kefucascadertable node modules types babel traverse index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types color name index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types css modules loader core index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types estree index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types graceful fs index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types hast index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types istanbul lib coverage index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types istanbul lib report index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types istanbul reports index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types jest index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types json schema index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types mdast index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types node index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types normalize package data index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types parse json index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types prettier index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types prop types index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types q index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types react index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types react dom index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types resolve index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types stack utils index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types unist index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types yargs index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types yargs parser index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types json schema index d ts users bytedance node modules types index d ts users bytedance node modules types minimist index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types node index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types normalize package data index d ts users bytedance public helpdesk semi ui packages kefucascadertable node modules types parse json index d ts ambient types changed redoing all semantic diagnostics transpiling users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx cache users bytedance public helpdesk semi ui packages kefucascadertable node 
modules cache rollup plugin code cache cache miss generated declarations for users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx dependency users bytedance public helpdesk semi ui packages kefucascadertable src lib utils ts imported by users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx resolving utils imported by users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx to users bytedance public helpdesk semi ui packages kefucascadertable src lib utils ts dependency users bytedance public helpdesk semi ui packages kefucascadertable src lib mock ts imported by users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx resolving mock imported by users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx to users bytedance public helpdesk semi ui packages kefucascadertable src lib mock ts transpiling users bytedance public helpdesk semi ui packages kefucascadertable src lib mock ts cache users bytedance public helpdesk semi ui packages kefucascadertable node modules cache rollup plugin code cache cache hit generated declarations for users bytedance public helpdesk semi ui packages kefucascadertable src lib mock ts transpiling users bytedance public helpdesk semi ui packages kefucascadertable src lib utils ts cache users bytedance public helpdesk semi ui packages kefucascadertable node modules cache rollup plugin code cache cache hit generated declarations for users bytedance public helpdesk semi ui packages kefucascadertable src lib utils ts generating target rolling caches emitting declarations for users bytedance public helpdesk semi ui packages kefucascadertable src lib index tsx to users bytedance public helpdesk semi ui packages kefucascadertable dist type src lib index d ts emitting declarations for users bytedance public helpdesk semi ui packages kefucascadertable src lib mock ts to users bytedance public helpdesk semi ui packages kefucascadertable dist type src lib mock d ts emitting declarations for users bytedance public helpdesk semi ui packages kefucascadertable src lib utils ts to users bytedance public helpdesk semi ui packages kefucascadertable dist type src lib utils d ts broken sourcemap plugins that transform code such as at position should generate accompanying sourcemaps created dist index esm js in user dist type src lib index d ts → dist index d ts error could not resolve component from dist type src lib index d ts error could not resolve component from dist type src lib index d ts
0
17,404
23,219,835,412
IssuesEvent
2022-08-02 17:05:45
NVIDIA/aistore
https://api.github.com/repos/NVIDIA/aistore
closed
Query on aistore.pytorch.Dataset
in process
Hi, I'm following this doc to train the Imagenet Dataset: `https://github.com/NVIDIA/aistore/blob/cc6e029721ef159f3df516ec9f8e3065ef6ac54d/docs/_posts/2021-10-22-ais-etl-2.md` I have a query specifically related to this part. `train_loader = torch.utils.data.DataLoader( aistore.pytorch.Dataset( "http://aistore-sample-proxy:51080", # AIS IP address or hostname Bck("imagenet"), prefix="train/", transform_id="my-first-etl", transform_filter=lambda object_name: object_name.endswith('.jpg'), ), batch_size=args.batch_size, shuffle=True, num_workers=args.workers, pin_memory=True)` I see a type error when I try to use it as-is in the training code. `pydantic.main.BaseModel.__init__ TypeError: __init__() takes exactly 1 positional argument (2 given)` Do you have any insights on how to mitigate this error? Also, I found that the implementation of `aistore.pytorch.Dataset` is present in one of the development branches (post-3).
1.0
Query on aistore.pytorch.Dataset - Hi, I'm following this doc to train on the ImageNet dataset: `https://github.com/NVIDIA/aistore/blob/cc6e029721ef159f3df516ec9f8e3065ef6ac54d/docs/_posts/2021-10-22-ais-etl-2.md` I have a query specifically related to this part. `train_loader = torch.utils.data.DataLoader( aistore.pytorch.Dataset( "http://aistore-sample-proxy:51080", # AIS IP address or hostname Bck("imagenet"), prefix="train/", transform_id="my-first-etl", transform_filter=lambda object_name: object_name.endswith('.jpg'), ), batch_size=args.batch_size, shuffle=True, num_workers=args.workers, pin_memory=True)` I see a type error when I try to use it as is in the training code: `pydantic.main.BaseModel.__init__ TypeError: __init__() takes exactly 1 positional argument (2 given)` Do you have any insights on how to mitigate this error? Also, I found that the implementation of `aistore.pytorch.Dataset` is present in one of the development branches (post-3).
process
query on aistore pytorch dataset hi i m following this doc to train on the imagenet dataset i have a query specifically related to this part train loader torch utils data dataloader aistore pytorch dataset ais ip address or hostname bck imagenet prefix train transform id my first etl transform filter lambda object name object name endswith jpg batch size args batch size shuffle true num workers args workers pin memory true i see a type error when i try to use it as is in the training code pydantic main basemodel init typeerror init takes exactly positional argument given do you have any insights on how to mitigate this error also i found that the implementation of aistore pytorch dataset is present in one of the development branches post
1
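A note on the TypeError quoted in the record above: it is characteristic of pydantic models, whose `BaseModel.__init__` accepts keyword arguments only, so `Bck("imagenet")` fails while `Bck(name="imagenet")` works. Below is a minimal sketch of the likely cause, assuming `Bck` is a pydantic `BaseModel` as in aistore's Python SDK at the time; the class defined here is a hypothetical stand-in, and the real model's field names may differ.

```python
from pydantic import BaseModel

class Bck(BaseModel):
    # Hypothetical stand-in for aistore's Bck model; the real class may
    # define additional fields such as a provider.
    name: str

# Bck("imagenet")           # raises: __init__() takes exactly 1 positional argument (2 given)
bck = Bck(name="imagenet")  # pydantic models must be constructed with keyword arguments
print(bck.name)             # -> imagenet
```

If that diagnosis holds, passing `Bck(name="imagenet")` (or whichever keyword the SDK actually defines) to `aistore.pytorch.Dataset` should avoid the error.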
312,766
9,552,864,093
IssuesEvent
2019-05-02 17:45:13
WoWManiaUK/Blackwing-Lair
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
closed
[Quest] Aiding the Outrunners - missing prerequisite
Confirmed Fixed Confirmed Fixed in Dev Legacy (wotlk) Priority zone 1-20
In Sunstrider Isle, the starting zone of the Blood Elves, the NPC named [Lanthan Perilon](https://www.wowhead.com/npc=15281/lanthan-perilon) should give 3 quests in total. The first one is [Aggression](https://www.wowhead.com/quest=8334/aggression) and the second is [Aiding the Outrunners](https://www.wowhead.com/quest=8347/aiding-the-outrunners); both work just fine, but the third quest that he should give, named [Felendren the Banished](https://www.wowhead.com/quest=8335/felendren-the-banished), doesn't exist.
1.0
[Quest] Aiding the Outrunners - missing prerequisite - In Sunstrider Isle, the starting zone of the Blood Elves, the NPC named [Lanthan Perilon](https://www.wowhead.com/npc=15281/lanthan-perilon) should give 3 quests in total. The first one is [Aggression](https://www.wowhead.com/quest=8334/aggression) and the second is [Aiding the Outrunners](https://www.wowhead.com/quest=8347/aiding-the-outrunners); both work just fine, but the third quest that he should give, named [Felendren the Banished](https://www.wowhead.com/quest=8335/felendren-the-banished), doesn't exist.
non_process
aiding the outrunners missing prerequisite in sunstrider isle the starting zone of the blood elves the npc named should give quests in total the first one is and the second is both work just fine but the third quest that he should give named doesn t exist
0
9,233
12,261,914,820
IssuesEvent
2020-05-06 20:59:54
googleapis/go-genproto
https://api.github.com/repos/googleapis/go-genproto
opened
recommendationengine: reenable generation of v1beta1
api: recommendationengine type: process
There is currently an issue with this library including the following import: `_ "google/cloud/recommendationengine/v1beta1"`. Blocking generation of this client until issues are resolved so that other clients can re-gen.
1.0
recommendationengine: reenable generation of v1beta1 - There is currently an issue with this library including the following import: `_ "google/cloud/recommendationengine/v1beta1"`. Blocking generation of this client until issues are resolved so that other clients can re-gen.
process
recommendationengine reenable generation of there is currently an issue with this library including the following import google cloud recommendationengine blocking generation of this client until issues are resolved so that other clients can re gen
1
2,808
5,738,518,363
IssuesEvent
2017-04-23 05:06:38
SIMEXP/niak
https://api.github.com/repos/SIMEXP/niak
closed
contrast of the T1 in the fMRI preproc QC
enhancement preprocessing quality control
The contrast on the anatomical image always starts out too dark and I adjust it every time. It would be cool to have it start out a bit brighter. From @illdopejake
1.0
contrast of the T1 in the fMRI preproc QC - The contrast on the anatomical image always starts out too dark and I adjust it every time. It would be cool to have it start out a bit brighter. From @illdopejake
process
contrast of the in the fmri preproc qc the contrast on the anatomical image always starts out too dark and i adjust it every time it would be cool to have it start out a bit brighter from illdopejake
1