| Column | Dtype | Summary |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
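
A minimal sketch of loading and sanity-checking a table with this schema, assuming the rows are available as a CSV file; the file name `issues_events.csv` is a placeholder, and only the column names are taken from the schema above.

```python
import pandas as pd

# Load the flattened issue-event records. The file name is hypothetical;
# the column list below is copied from the schema table.
df = pd.read_csv("issues_events.csv")

cols = [
    "Unnamed: 0", "id", "type", "created_at", "repo", "repo_url", "action",
    "title", "labels", "body", "index", "text_combine", "label", "text",
    "binary_label",
]
df = df[cols]

# Quick checks that mirror the schema summary: dtypes, the single event
# type, the three action values, and the binary label distribution.
print(df.dtypes)
print(df["type"].unique())            # expected: ['IssuesEvent']
print(df["action"].value_counts())    # e.g. opened / closed / reopened
print(df["binary_label"].value_counts())
```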
75,921
15,495,711,666
IssuesEvent
2021-03-11 01:21:20
nicktombeur/naturalcolors
https://api.github.com/repos/nicktombeur/naturalcolors
opened
CVE-2020-36181 (High) detected in jackson-databind-2.9.9.jar
security vulnerability
## CVE-2020-36181 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: naturalcolors/back-end/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library) - spring-boot-starter-json-2.1.6.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-36181 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-36181 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: naturalcolors/back-end/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library) - spring-boot-starter-json-2.1.6.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file naturalcolors back end pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
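
The record above, like every record that follows, stores derived copies of its raw fields: `text_combine` is the title and body joined with " - ", `text` is a lowercased copy with URLs, digits, and most punctuation stripped, and `binary_label` is 1 when `index` is "process" and 0 when it is "non_process". A rough sketch of that derivation follows; the original cleaning code is not part of this dump, so the functions below only approximate the visible output.

```python
import re

def combine(title: str, body: str) -> str:
    # text_combine in the records is simply "<title> - <body>".
    return f"{title} - {body}"

def clean(text_combine: str) -> str:
    # Approximation of the `text` column: drop URLs, keep only letters and
    # whitespace, lower-case, and collapse runs of whitespace. The exact
    # pipeline used to build the dataset is not shown in this dump.
    no_urls = re.sub(r"https?://\S+", " ", text_combine)
    letters = "".join(ch if ch.isalpha() or ch.isspace() else " " for ch in no_urls)
    return re.sub(r"\s+", " ", letters).strip().lower()

def to_binary(index_value: str) -> int:
    # In every row shown, binary_label is 1 for "process" and 0 for "non_process".
    return 1 if index_value == "process" else 0
```

For the record above, `to_binary("non_process")` returns 0, which matches its stored `binary_label`.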
11,032
13,838,643,370
IssuesEvent
2020-10-14 06:38:19
amor71/LiuAlgoTrader
https://api.github.com/repos/amor71/LiuAlgoTrader
closed
Expose additional configuration options from config to System Variable or TOML File parameters
in-process
Please expose the following. Right now I see some of these available in config file. would be good to have a full control over these parameter by strategy. 1. Risk 2. Set Portfolio value 3. Number of minutes before market close for closing positions 4. Stoploss
1.0
Expose additional configuration options from config to System Variable or TOML File parameters - Please expose the following. Right now I see some of these available in config file. would be good to have a full control over these parameter by strategy. 1. Risk 2. Set Portfolio value 3. Number of minutes before market close for closing positions 4. Stoploss
process
expose additional configuration options from config to system variable or toml file parameters please expose the following right now i see some of these available in config file would be good to have a full control over these parameter by strategy risk set portfolio value number of minutes before market close for closing positions stoploss
1
320,455
23,811,217,304
IssuesEvent
2022-09-04 19:45:30
MudBlazor/MudBlazor
https://api.github.com/repos/MudBlazor/MudBlazor
closed
Menu: press goes through to component below on mobile
question won't fix has workaround answered needs documentation
### Bug type Component ### Component name MudMenu ### What happened? with the link open dev tools (f12) and select mobile view - select ipad air open menu press menu 1 the sort item from the table below will open -this only happen on touch devices ### Expected behavior sort menu will not open ### Reproduction link https://try.mudblazor.com/snippet/GkQckiQTUMXTsQkX ### Reproduction steps 1.with the link open dev tools (f12) and select mobile view - select ipad air open menu press menu 1 the sort item from the table below will open ### Relevant log output _No response_ ### Version (bug) 6.0.14 ### Version (working) _No response_ ### What browsers are you seeing the problem on? Chrome ### On what operating system are you experiencing the issue? Windows ### Pull Request - [ ] I would like to do a Pull Request ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
1.0
Menu: press goes through to component below on mobile - ### Bug type Component ### Component name MudMenu ### What happened? with the link open dev tools (f12) and select mobile view - select ipad air open menu press menu 1 the sort item from the table below will open -this only happen on touch devices ### Expected behavior sort menu will not open ### Reproduction link https://try.mudblazor.com/snippet/GkQckiQTUMXTsQkX ### Reproduction steps 1.with the link open dev tools (f12) and select mobile view - select ipad air open menu press menu 1 the sort item from the table below will open ### Relevant log output _No response_ ### Version (bug) 6.0.14 ### Version (working) _No response_ ### What browsers are you seeing the problem on? Chrome ### On what operating system are you experiencing the issue? Windows ### Pull Request - [ ] I would like to do a Pull Request ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
non_process
menu press goes through to component below on mobile bug type component component name mudmenu what happened with the link open dev tools and select mobile view select ipad air open menu press menu the sort item from the table below will open this only happen on touch devices expected behavior sort menu will not open reproduction link reproduction steps with the link open dev tools and select mobile view select ipad air open menu press menu the sort item from the table below will open relevant log output no response version bug version working no response what browsers are you seeing the problem on chrome on what operating system are you experiencing the issue windows pull request i would like to do a pull request code of conduct i agree to follow this project s code of conduct
0
279,437
21,160,379,524
IssuesEvent
2022-04-07 08:49:56
bounswe/bounswe2022group6
https://api.github.com/repos/bounswe/bounswe2022group6
closed
Class Diagram: Creating a Class for Account
Type: Documentation Priority: High State: In Progess
Fields and methods of the Account class need to be specified on the LucidChart diagram. @iremmer is the one responsible for this task. @berfinsimsekk is the reviewer of this task. Deadline: 04.04.2022 Sunday 23.59
1.0
Class Diagram: Creating a Class for Account - Fields and methods of the Account class need to be specified on the LucidChart diagram. @iremmer is the one responsible for this task. @berfinsimsekk is the reviewer of this task. Deadline: 04.04.2022 Sunday 23.59
non_process
class diagram creating a class for account fields and methods of the account class need to be specified on the lucidchart diagram iremmer is the one responsible for this task berfinsimsekk is the reviewer of this task deadline sunday
0
35,410
7,734,484,043
IssuesEvent
2018-05-27 01:53:59
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
StartsWith method does not compare strings correctly
defect premium
Check sample: ### Steps To Reproduce https://deck.net/3c0a66eae87385cde73a43f5cc5a433b https://dotnetfiddle.net/Lg7R3U ```csharp public class Program { public static void Main() { Console.WriteLine("V".StartsWith("v", StringComparison.InvariantCultureIgnoreCase)); } } ``` ### Expected Result ``` True ``` ### Actual Result ``` False ```
1.0
StartsWith method does not compare strings correctly - Check sample: ### Steps To Reproduce https://deck.net/3c0a66eae87385cde73a43f5cc5a433b https://dotnetfiddle.net/Lg7R3U ```csharp public class Program { public static void Main() { Console.WriteLine("V".StartsWith("v", StringComparison.InvariantCultureIgnoreCase)); } } ``` ### Expected Result ``` True ``` ### Actual Result ``` False ```
non_process
startswith method does not compare strings correctly check sample steps to reproduce csharp public class program public static void main console writeline v startswith v stringcomparison invariantcultureignorecase expected result true actual result false
0
8,108
20,968,779,721
IssuesEvent
2022-03-28 09:23:00
MicrosoftDocs/architecture-center
https://api.github.com/repos/MicrosoftDocs/architecture-center
closed
Error regarding blob name
cxp triaged product-question architecture-center/svc Pri2 azure-guide/subsvc
[I want to read a csv file located in the blob storage. However, I am getting error that specified blob does not exist. I am confused what should be the blob name. I gave LOCALFILENAME as my csv file name seen in the blob container] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 647ed505-3e6e-3db7-63ee-025edf6e63f4 * Version Independent ID: d910ba75-6c86-6172-64be-e45ba2bf5a3d * Content: [Explore data in Azure Blob storage with pandas - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/data-science-process/explore-data-blob) * Content Source: [docs/data-science-process/explore-data-blob.md](https://github.com/microsoftdocs/architecture-center/blob/main/docs/data-science-process/explore-data-blob.md) * Service: **architecture-center** * Sub-service: **azure-guide** * GitHub Login: @marktab * Microsoft Alias: **tdsp**
1.0
Error regarding blob name - [I want to read a csv file located in the blob storage. However, I am getting error that specified blob does not exist. I am confused what should be the blob name. I gave LOCALFILENAME as my csv file name seen in the blob container] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 647ed505-3e6e-3db7-63ee-025edf6e63f4 * Version Independent ID: d910ba75-6c86-6172-64be-e45ba2bf5a3d * Content: [Explore data in Azure Blob storage with pandas - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/data-science-process/explore-data-blob) * Content Source: [docs/data-science-process/explore-data-blob.md](https://github.com/microsoftdocs/architecture-center/blob/main/docs/data-science-process/explore-data-blob.md) * Service: **architecture-center** * Sub-service: **azure-guide** * GitHub Login: @marktab * Microsoft Alias: **tdsp**
non_process
error regarding blob name document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service azure guide github login marktab microsoft alias tdsp
0
19,187
25,309,047,625
IssuesEvent
2022-11-17 16:05:33
TUM-Dev/NavigaTUM
https://api.github.com/repos/TUM-Dev/NavigaTUM
closed
[General] Hochschule für Politik
webform delete-after-processing general
Bitte auch die Hochschule für Politik direkt bei der Übersicht zum Stammgelände anzeigen. Sonst sieht es echt toll aus, danke!
1.0
[General] Hochschule für Politik - Bitte auch die Hochschule für Politik direkt bei der Übersicht zum Stammgelände anzeigen. Sonst sieht es echt toll aus, danke!
process
hochschule für politik bitte auch die hochschule für politik direkt bei der übersicht zum stammgelände anzeigen sonst sieht es echt toll aus danke
1
1,647
4,270,987,583
IssuesEvent
2016-07-13 09:24:38
withanage/mpt
https://api.github.com/repos/withanage/mpt
closed
post processer script
post process
Sollte in post-processor eingebaut werden, damit die interviews speaker geschnitten werden (\<p>)(\s*)(\w+)(:)
1.0
post processer script - Sollte in post-processor eingebaut werden, damit die interviews speaker geschnitten werden (\<p>)(\s*)(\w+)(:)
process
post processer script sollte in post processor eingebaut werden damit die interviews speaker geschnitten werden s w
1
685
3,172,211,051
IssuesEvent
2015-09-23 06:18:10
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
На главном дашборде скрывать для Киева два элемента расширенного фильтра "Область и Город"
active hi priority In process of testing question test version
https://test.kiev.igov.org.ua а потом сделать видимым, для остальных т.к. сейчас это было временно скрыто
1.0
На главном дашборде скрывать для Киева два элемента расширенного фильтра "Область и Город" - https://test.kiev.igov.org.ua а потом сделать видимым, для остальных т.к. сейчас это было временно скрыто
process
на главном дашборде скрывать для киева два элемента расширенного фильтра область и город а потом сделать видимым для остальных т к сейчас это было временно скрыто
1
9,473
12,467,568,638
IssuesEvent
2020-05-28 17:17:31
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
The latest resource checks aren't documented
Pri1 devops-cicd-process/tech devops/prod
Currently only the manual approval and evaluate artifact checks are mentioned in the documentation. There isn't any information on the others that are available now. It looks like more are in the works, which is awesome! But I'm looking forward to finding out more information on the existing ones in the meantime. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b067a175-f640-7503-9c1e-f0130c6dbeda * Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36 * Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass) * Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/approvals.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @azooinmyluggage * Microsoft Alias: **shashban**
1.0
The latest resource checks aren't documented - Currently only the manual approval and evaluate artifact checks are mentioned in the documentation. There isn't any information on the others that are available now. It looks like more are in the works, which is awesome! But I'm looking forward to finding out more information on the existing ones in the meantime. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b067a175-f640-7503-9c1e-f0130c6dbeda * Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36 * Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass) * Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/approvals.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @azooinmyluggage * Microsoft Alias: **shashban**
process
the latest resource checks aren t documented currently only the manual approval and evaluate artifact checks are mentioned in the documentation there isn t any information on the others that are available now it looks like more are in the works which is awesome but i m looking forward to finding out more information on the existing ones in the meantime document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login azooinmyluggage microsoft alias shashban
1
13,391
15,865,917,077
IssuesEvent
2021-04-08 15:12:25
COPIM/open-book-collective
https://api.github.com/repos/COPIM/open-book-collective
opened
Alerts about new publications
membership management (pillar 4) not needed organisational process userstory
As a librarian or institutional decision-maker visiting an open access initiative’s profile page ... ...I want, for OABPs, alerts about new publications by specific publishers, or in specific topic areas ... ... so that I can inform faculty members of new content As a scholar... I want to receive alerts about new titles in a selected area ... so that I can keep abreast of new OA publications in my field.
1.0
Alerts about new publications - As a librarian or institutional decision-maker visiting an open access initiative’s profile page ... ...I want, for OABPs, alerts about new publications by specific publishers, or in specific topic areas ... ... so that I can inform faculty members of new content As a scholar... I want to receive alerts about new titles in a selected area ... so that I can keep abreast of new OA publications in my field.
process
alerts about new publications as a librarian or institutional decision maker visiting an open access initiative’s profile page i want for oabps alerts about new publications by specific publishers or in specific topic areas so that i can inform faculty members of new content as a scholar i want to receive alerts about new titles in a selected area so that i can keep abreast of new oa publications in my field
1
250,288
27,066,433,877
IssuesEvent
2023-02-14 01:03:22
DevOps-PM-PGDip-2022-2023/easybuggy4django.old
https://api.github.com/repos/DevOps-PM-PGDip-2022-2023/easybuggy4django.old
opened
CVE-2021-33430 (Medium) detected in numpy-1.14.2.zip
security vulnerability
## CVE-2021-33430 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.14.2.zip</b></p></summary> <p>NumPy is the fundamental package for array computing with Python.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/0b/66/86185402ee2d55865c675c06a5cfef742e39f4635a4ce1b1aefd20711c13/numpy-1.14.2.zip">https://files.pythonhosted.org/packages/0b/66/86185402ee2d55865c675c06a5cfef742e39f4635a4ce1b1aefd20711c13/numpy-1.14.2.zip</a></p> <p> Dependency Hierarchy: - :x: **numpy-1.14.2.zip** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ** DISPUTED ** A Buffer Overflow vulnerability exists in NumPy 1.9.x in the PyArray_NewFromDescr_int function of ctors.c when specifying arrays of large dimensions (over 32) from Python code, which could let a malicious user cause a Denial of Service. NOTE: The vendor does not agree this is a vulneraility; In (very limited) circumstances a user may be able provoke the buffer overflow, the user is most likely already privileged to at least provoke denial of service by exhausting memory. Triggering this further requires the use of uncommon API (complicated structured dtypes), which is very unlikely to be available to an unprivileged user. Mend Note: After conducting further research, Mend has determined that numpy versions before 1.21.0 are vulnerable to CVE-2021-33430 <p>Publish Date: 2021-12-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33430>CVE-2021-33430</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-33430">https://nvd.nist.gov/vuln/detail/CVE-2021-33430</a></p> <p>Release Date: 2021-12-17</p> <p>Fix Resolution: 1.21.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-33430 (Medium) detected in numpy-1.14.2.zip - ## CVE-2021-33430 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.14.2.zip</b></p></summary> <p>NumPy is the fundamental package for array computing with Python.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/0b/66/86185402ee2d55865c675c06a5cfef742e39f4635a4ce1b1aefd20711c13/numpy-1.14.2.zip">https://files.pythonhosted.org/packages/0b/66/86185402ee2d55865c675c06a5cfef742e39f4635a4ce1b1aefd20711c13/numpy-1.14.2.zip</a></p> <p> Dependency Hierarchy: - :x: **numpy-1.14.2.zip** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ** DISPUTED ** A Buffer Overflow vulnerability exists in NumPy 1.9.x in the PyArray_NewFromDescr_int function of ctors.c when specifying arrays of large dimensions (over 32) from Python code, which could let a malicious user cause a Denial of Service. NOTE: The vendor does not agree this is a vulneraility; In (very limited) circumstances a user may be able provoke the buffer overflow, the user is most likely already privileged to at least provoke denial of service by exhausting memory. Triggering this further requires the use of uncommon API (complicated structured dtypes), which is very unlikely to be available to an unprivileged user. Mend Note: After conducting further research, Mend has determined that numpy versions before 1.21.0 are vulnerable to CVE-2021-33430 <p>Publish Date: 2021-12-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33430>CVE-2021-33430</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-33430">https://nvd.nist.gov/vuln/detail/CVE-2021-33430</a></p> <p>Release Date: 2021-12-17</p> <p>Fix Resolution: 1.21.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in numpy zip cve medium severity vulnerability vulnerable library numpy zip numpy is the fundamental package for array computing with python library home page a href dependency hierarchy x numpy zip vulnerable library found in base branch master vulnerability details disputed a buffer overflow vulnerability exists in numpy x in the pyarray newfromdescr int function of ctors c when specifying arrays of large dimensions over from python code which could let a malicious user cause a denial of service note the vendor does not agree this is a vulneraility in very limited circumstances a user may be able provoke the buffer overflow the user is most likely already privileged to at least provoke denial of service by exhausting memory triggering this further requires the use of uncommon api complicated structured dtypes which is very unlikely to be available to an unprivileged user mend note after conducting further research mend has determined that numpy versions before are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
39,366
5,231,995,116
IssuesEvent
2017-01-30 07:05:24
c2corg/v6_api
https://api.github.com/repos/c2corg/v6_api
closed
Associated images sorting
fixed and ready for testing Images
As for now, it seems that the images listed in the a document's ``associations`` attribute are id-sorted. It would make more sense to **have them chronologically sorted using the EXIF date** info if available. This is especially useful for outings to make sure associated images are shown in the order they have been taken. Which is not the case currently if several people upload there images or if a contributor uploads their image in whatever order (for instance when an upload has failed).
1.0
Associated images sorting - As for now, it seems that the images listed in the a document's ``associations`` attribute are id-sorted. It would make more sense to **have them chronologically sorted using the EXIF date** info if available. This is especially useful for outings to make sure associated images are shown in the order they have been taken. Which is not the case currently if several people upload there images or if a contributor uploads their image in whatever order (for instance when an upload has failed).
non_process
associated images sorting as for now it seems that the images listed in the a document s associations attribute are id sorted it would make more sense to have them chronologically sorted using the exif date info if available this is especially useful for outings to make sure associated images are shown in the order they have been taken which is not the case currently if several people upload there images or if a contributor uploads their image in whatever order for instance when an upload has failed
0
518,670
15,032,193,082
IssuesEvent
2021-02-02 09:52:51
eventespresso/barista
https://api.github.com/repos/eventespresso/barista
closed
Flatten Apollo Cache
C: data systems 🗑 D: Packages 📦 P3: med priority 😐 S:1 new 👶🏻 T: enhancement ✨
Please forgive me for any misconceptions in the following description. Currently our entities are stored in the Apollo cache using the query keys as accessors. As the complexity of our entity relations increases we need to add additional cache stores because of the differing query keys. This creates a lot of complications because the cache source needs to be included in our data flow in order to locate an entity. For example, in order to mutate a ticket entity, one would need to include what cache store to access to find that entity. By simplifying the query keys used we should be able to "flatten" the cache by moving all entities of the same type into the same data store. ### steps - use this issue to discuss any further problems and/or clarify the changes that need to be done - create a new `FET` branch for isolating these changes and coordinating their deployment with any corresponding changes in core or add-ons - create a proof of concept (POC), using the simplest most straightforward implementation, in order to gauge its effectiveness and identify any potential blockers - assuming the POC is successful, then implement changes via multiple PRs in a granular fashion until everything is complete - verify that unit tests exist for every section of our data system prior to implementing changes. create new tests for any mutations/queries that don't already have any
1.0
Flatten Apollo Cache - Please forgive me for any misconceptions in the following description. Currently our entities are stored in the Apollo cache using the query keys as accessors. As the complexity of our entity relations increases we need to add additional cache stores because of the differing query keys. This creates a lot of complications because the cache source needs to be included in our data flow in order to locate an entity. For example, in order to mutate a ticket entity, one would need to include what cache store to access to find that entity. By simplifying the query keys used we should be able to "flatten" the cache by moving all entities of the same type into the same data store. ### steps - use this issue to discuss any further problems and/or clarify the changes that need to be done - create a new `FET` branch for isolating these changes and coordinating their deployment with any corresponding changes in core or add-ons - create a proof of concept (POC), using the simplest most straightforward implementation, in order to gauge its effectiveness and identify any potential blockers - assuming the POC is successful, then implement changes via multiple PRs in a granular fashion until everything is complete - verify that unit tests exist for every section of our data system prior to implementing changes. create new tests for any mutations/queries that don't already have any
non_process
flatten apollo cache please forgive me for any misconceptions in the following description currently our entities are stored in the apollo cache using the query keys as accessors as the complexity of our entity relations increases we need to add additional cache stores because of the differing query keys this creates a lot of complications because the cache source needs to be included in our data flow in order to locate an entity for example in order to mutate a ticket entity one would need to include what cache store to access to find that entity by simplifying the query keys used we should be able to flatten the cache by moving all entities of the same type into the same data store steps use this issue to discuss any further problems and or clarify the changes that need to be done create a new fet branch for isolating these changes and coordinating their deployment with any corresponding changes in core or add ons create a proof of concept poc using the simplest most straightforward implementation in order to gauge its effectiveness and identify any potential blockers assuming the poc is successful then implement changes via multiple prs in a granular fashion until everything is complete verify that unit tests exist for every section of our data system prior to implementing changes create new tests for any mutations queries that don t already have any
0
4,817
7,704,323,019
IssuesEvent
2018-05-21 11:49:25
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Childprocess behaving differently with exec than spawn
child_process
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: 8.11.2 LTS Platform: Windows 10 64BIT Subsystem: childprocess.js If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: 8.11.2 LTS * **Platform**: Windows 10 64BIT * **Subsystem**: childprocess.js <!-- Enter your issue details below this comment. --> Im having lots of trouble trying to spawn a .exe file with some parameters. When I use exec it works beautifully but when i use spawn it launches the exe file as if it had read no parameters. I cant use exec or shell: true because i need to be able to get the PID of the exe This is what works as it should (reads the parameters): ```js child = exec("unturned -nographics -batchmode +lanserver/server1") ``` This is what doesnt work (doesnt read the parameters): ```js child = spawn("unturned", ["-nographics", "-batchmode", "+lanserver/server1"]) ``` At first i thought the trouble was the directory unturned was in, but that wasn't the case because i made the original directory (C:\Program Files (x86)\Steam\steamapps\common\Unturned\Unturned.exe) a system path variable and got the same results. Unturned is a game available for free download on steam incase anyone wants to try debugging it directly: https://store.steampowered.com/app/304930/Unturned/ EDIT(@ryzokuken): Added backticks.
1.0
Childprocess behaving differently with exec than spawn - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: 8.11.2 LTS Platform: Windows 10 64BIT Subsystem: childprocess.js If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: 8.11.2 LTS * **Platform**: Windows 10 64BIT * **Subsystem**: childprocess.js <!-- Enter your issue details below this comment. --> Im having lots of trouble trying to spawn a .exe file with some parameters. When I use exec it works beautifully but when i use spawn it launches the exe file as if it had read no parameters. I cant use exec or shell: true because i need to be able to get the PID of the exe This is what works as it should (reads the parameters): ```js child = exec("unturned -nographics -batchmode +lanserver/server1") ``` This is what doesnt work (doesnt read the parameters): ```js child = spawn("unturned", ["-nographics", "-batchmode", "+lanserver/server1"]) ``` At first i thought the trouble was the directory unturned was in, but that wasn't the case because i made the original directory (C:\Program Files (x86)\Steam\steamapps\common\Unturned\Unturned.exe) a system path variable and got the same results. Unturned is a game available for free download on steam incase anyone wants to try debugging it directly: https://store.steampowered.com/app/304930/Unturned/ EDIT(@ryzokuken): Added backticks.
process
childprocess behaving differently with exec than spawn thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version lts platform windows subsystem childprocess js if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version lts platform windows subsystem childprocess js im having lots of trouble trying to spawn a exe file with some parameters when i use exec it works beautifully but when i use spawn it launches the exe file as if it had read no parameters i cant use exec or shell true because i need to be able to get the pid of the exe this is what works as it should reads the parameters js child exec unturned nographics batchmode lanserver this is what doesnt work doesnt read the parameters js child spawn unturned at first i thought the trouble was the directory unturned was in but that wasn t the case because i made the original directory c program files steam steamapps common unturned unturned exe a system path variable and got the same results unturned is a game available for free download on steam incase anyone wants to try debugging it directly edit ryzokuken added backticks
1
10,144
13,044,162,528
IssuesEvent
2020-07-29 03:47:33
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `Lock` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `Lock` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `Lock` from TiDB - ## Description Port the scalar function `Lock` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function lock from tidb description port the scalar function lock from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
17,183
22,764,702,522
IssuesEvent
2022-07-08 02:16:53
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
System.ServiceProcess: Backport MS Docs documentation to triple slash
documentation area-System.ServiceProcess
We are working on a [new documentation process plan](https://github.com/dotnet/runtime/issues/44969#issuecomment-788536998), in which the main objective is to make triple slash comments the source of truth for documentation, instead of MS Docs: We want developers/maintainers to have an easier time maintaining the documentation for their APIs. You can use the [DocsPortingTool](https://github.com/carlossanlop/DocsPortingTool) to automate the backport process: Run the tool targeting the assembly, then submit a PR with the changes. You can find detailed instructions for the backporting process [here](https://github.com/carlossanlop/DocsPortingTool/blob/master/BackportInstructions.md). Area owners are free to decide if they want to address this in 6.0 or in Future.
1.0
System.ServiceProcess: Backport MS Docs documentation to triple slash - We are working on a [new documentation process plan](https://github.com/dotnet/runtime/issues/44969#issuecomment-788536998), in which the main objective is to make triple slash comments the source of truth for documentation, instead of MS Docs: We want developers/maintainers to have an easier time maintaining the documentation for their APIs. You can use the [DocsPortingTool](https://github.com/carlossanlop/DocsPortingTool) to automate the backport process: Run the tool targeting the assembly, then submit a PR with the changes. You can find detailed instructions for the backporting process [here](https://github.com/carlossanlop/DocsPortingTool/blob/master/BackportInstructions.md). Area owners are free to decide if they want to address this in 6.0 or in Future.
process
system serviceprocess backport ms docs documentation to triple slash we are working on a in which the main objective is to make triple slash comments the source of truth for documentation instead of ms docs we want developers maintainers to have an easier time maintaining the documentation for their apis you can use the to automate the backport process run the tool targeting the assembly then submit a pr with the changes you can find detailed instructions for the backporting process area owners are free to decide if they want to address this in or in future
1
18,322
24,440,356,620
IssuesEvent
2022-10-06 14:13:33
nextflow-io/nextflow
https://api.github.com/repos/nextflow-io/nextflow
opened
Processes as operator closures
lang/operators lang/processes
Based on this discussion: https://github.com/nextflow-io/nextflow/discussions/2521#discussioncomment-2082904 Processes and operators are functionally equivalent, in that they transform input channels into output channels. The different is that operators execute native Groovy code while processes execute bash scripts. Ultimately a process is just syntax sugar for an operator that executes a shell command, and it makes it much easier to work with files. Some operators take a closure argument, for example, `reduce`. But, what if the closure (i.e. reduction kernel) was a process? That would make it much easier to perform reductions over files, whereas currently you have to use the experimental recursion feature. If we could simply make processes interchangeable with operator closures, it would make operators a lot more powerful without the need for an entirely separate feature like recursion (although explicit recursion might still have its use).
1.0
Processes as operator closures - Based on this discussion: https://github.com/nextflow-io/nextflow/discussions/2521#discussioncomment-2082904 Processes and operators are functionally equivalent, in that they transform input channels into output channels. The different is that operators execute native Groovy code while processes execute bash scripts. Ultimately a process is just syntax sugar for an operator that executes a shell command, and it makes it much easier to work with files. Some operators take a closure argument, for example, `reduce`. But, what if the closure (i.e. reduction kernel) was a process? That would make it much easier to perform reductions over files, whereas currently you have to use the experimental recursion feature. If we could simply make processes interchangeable with operator closures, it would make operators a lot more powerful without the need for an entirely separate feature like recursion (although explicit recursion might still have its use).
process
processes as operator closures based on this discussion processes and operators are functionally equivalent in that they transform input channels into output channels the different is that operators execute native groovy code while processes execute bash scripts ultimately a process is just syntax sugar for an operator that executes a shell command and it makes it much easier to work with files some operators take a closure argument for example reduce but what if the closure i e reduction kernel was a process that would make it much easier to perform reductions over files whereas currently you have to use the experimental recursion feature if we could simply make processes interchangeable with operator closures it would make operators a lot more powerful without the need for an entirely separate feature like recursion although explicit recursion might still have its use
1
158,827
24,902,232,278
IssuesEvent
2022-10-28 22:32:44
Office-of-Digital-Services/California-State-Web-Template-Website
https://api.github.com/repos/Office-of-Digital-Services/California-State-Web-Template-Website
closed
Create new Visual Design landing page that will combine content from Typography, Color Schemes, and Icons pages
eng design P1
Combine content from typography, color schemes and icons pages into one Visual Design page. For icons section make some intro text and the link to separate icons page. Add "On this page" page navigation with anchors to Typography, Color Schemes and Icons sections
1.0
Create new Visual Design landing page that will combine content from Typography, Color Schemes, and Icons pages - Combine content from typography, color schemes and icons pages into one Visual Design page. For icons section make some intro text and the link to separate icons page. Add "On this page" page navigation with anchors to Typography, Color Schemes and Icons sections
non_process
create new visual design landing page that will combine content from typography color schemes and icons pages combine content from typography color schemes and icons pages into one visual design page for icons section make some intro text and the link to separate icons page add on this page page navigation with anchors to typography color schemes and icons sections
0
316,575
27,167,215,591
IssuesEvent
2023-02-17 16:14:27
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix creation_ops.test_torch_tensor
PyTorch Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py::test_torch_tensor[cpu-ivy.functional.backends.jax-False-False]</summary> 2023-01-26T07:57:43.5260826Z E AssertionError: -128 != 127 2023-01-26T07:57:43.5261228Z E Falsifying example: test_torch_tensor( 2023-01-26T07:57:43.5261765Z E dtype_and_x=(['float16'], [array(-129., dtype=float16)]), 2023-01-26T07:57:43.5262291Z E dtype=['int8'], 2023-01-26T07:57:43.5262804Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. , 2023-01-26T07:57:43.5263467Z E fn_tree='ivy.functional.frontends.torch.tensor', 2023-01-26T07:57:43.5263939Z E on_device='cpu', 2023-01-26T07:57:43.5264332Z E frontend='torch', 2023-01-26T07:57:43.5264657Z E ) 2023-01-26T07:57:43.5264946Z E 2023-01-26T07:57:43.5265908Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2IAAkYGCGiEMAAD2ACG') as a decorator on your test case </details>
1.0
Fix creation_ops.test_torch_tensor - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4013142181/jobs/6892176242" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py::test_torch_tensor[cpu-ivy.functional.backends.jax-False-False]</summary> 2023-01-26T07:57:43.5260826Z E AssertionError: -128 != 127 2023-01-26T07:57:43.5261228Z E Falsifying example: test_torch_tensor( 2023-01-26T07:57:43.5261765Z E dtype_and_x=(['float16'], [array(-129., dtype=float16)]), 2023-01-26T07:57:43.5262291Z E dtype=['int8'], 2023-01-26T07:57:43.5262804Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. , 2023-01-26T07:57:43.5263467Z E fn_tree='ivy.functional.frontends.torch.tensor', 2023-01-26T07:57:43.5263939Z E on_device='cpu', 2023-01-26T07:57:43.5264332Z E frontend='torch', 2023-01-26T07:57:43.5264657Z E ) 2023-01-26T07:57:43.5264946Z E 2023-01-26T07:57:43.5265908Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2IAAkYGCGiEMAAD2ACG') as a decorator on your test case </details>
non_process
fix creation ops test torch tensor tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test torch test creation ops py test torch tensor e assertionerror e falsifying example test torch tensor e dtype and x e dtype e test flags num positional args with out false inplace false native arrays as variable e fn tree ivy functional frontends torch tensor e on device cpu e frontend torch e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
0
60,527
17,023,448,599
IssuesEvent
2021-07-03 02:05:09
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Coastline Error checker unavailable
Component: utils Priority: major Resolution: invalid Type: defect
**[Submitted to the original trac issue database at 9.35am, Monday, 27th July 2009]** The coastline error checker[1] has been offline since June 2009. Not only does this hinder fixing of coastline errors, but since it generates the shapefiles used for the Mapnik caostline data it means that the Mapnik layer is outdated. [1] http://wiki.openstreetmap.org/wiki/Coastline_error_checker
1.0
Coastline Error checker unavailable - **[Submitted to the original trac issue database at 9.35am, Monday, 27th July 2009]** The coastline error checker[1] has been offline since June 2009. Not only does this hinder fixing of coastline errors, but since it generates the shapefiles used for the Mapnik caostline data it means that the Mapnik layer is outdated. [1] http://wiki.openstreetmap.org/wiki/Coastline_error_checker
non_process
coastline error checker unavailable the coastline error checker has been offline since june not only does this hinder fixing of coastline errors but since it generates the shapefiles used for the mapnik caostline data it means that the mapnik layer is outdated
0
16,943
4,107,649,385
IssuesEvent
2016-06-06 13:46:25
apinf/api-umbrella-dashboard
https://api.github.com/repos/apinf/api-umbrella-dashboard
closed
'Edit API backend' view in 'Suomi' language with partly translated text strings
API Backend bug Documentation Viewer in progress
Steps: 1. Visit https://nightly.apinf.io 2. Change language to 'Suomi' 3. Login 4. Go to dashboard 5. Add API Backend 6. Go to Manage API Backends 7. Click on 'Edit' 8. Go to 'Documentation' Findings: - Title text strings for input fields shown in English language - Title text for 'Browse' and 'Create API documentation' buttons in 'Suomi' view shown in English language - User information text strings are shown in English language Screenshots: ![suomi_view_documentation](https://cloud.githubusercontent.com/assets/13309358/13177236/7354bb48-d720-11e5-8279-776fd01ecdaa.png) ![suomi_view_global_req_settings](https://cloud.githubusercontent.com/assets/13309358/13177247/7d1574e2-d720-11e5-90e2-da61b6594ba9.png) ![suomi_view_advanced_req_rewriting](https://cloud.githubusercontent.com/assets/13309358/13177253/8612f2e0-d720-11e5-8037-f8cfb0a7c97f.png) ![suomi_view_advanced_settings](https://cloud.githubusercontent.com/assets/13309358/13177260/90a57692-d720-11e5-8a01-ddee18048374.png) ![edit_suomi_view_info_text_english1](https://cloud.githubusercontent.com/assets/13309358/13217158/282457fa-d969-11e5-88b2-1e8fd53b631f.png) ![edit_suomi_page_info_english_text2](https://cloud.githubusercontent.com/assets/13309358/13217156/1f0752ee-d969-11e5-9185-92528c7b025a.png)
1.0
'Edit API backend' view in 'Suomi' language with partly translated text strings - Steps: 1. Visit https://nightly.apinf.io 2. Change language to 'Suomi' 3. Login 4. Go to dashboard 5. Add API Backend 6. Go to Manage API Backends 7. Click on 'Edit' 8. Go to 'Documentation' Findings: - Title text strings for input fields shown in English language - Title text for 'Browse' and 'Create API documentation' buttons in 'Suomi' view shown in English language - User information text strings are shown in English language Screenshots: ![suomi_view_documentation](https://cloud.githubusercontent.com/assets/13309358/13177236/7354bb48-d720-11e5-8279-776fd01ecdaa.png) ![suomi_view_global_req_settings](https://cloud.githubusercontent.com/assets/13309358/13177247/7d1574e2-d720-11e5-90e2-da61b6594ba9.png) ![suomi_view_advanced_req_rewriting](https://cloud.githubusercontent.com/assets/13309358/13177253/8612f2e0-d720-11e5-8037-f8cfb0a7c97f.png) ![suomi_view_advanced_settings](https://cloud.githubusercontent.com/assets/13309358/13177260/90a57692-d720-11e5-8a01-ddee18048374.png) ![edit_suomi_view_info_text_english1](https://cloud.githubusercontent.com/assets/13309358/13217158/282457fa-d969-11e5-88b2-1e8fd53b631f.png) ![edit_suomi_page_info_english_text2](https://cloud.githubusercontent.com/assets/13309358/13217156/1f0752ee-d969-11e5-9185-92528c7b025a.png)
non_process
edit api backend view in suomi language with partly translated text strings steps visit change language to suomi login go to dashboard add api backend go to manage api backends click on edit go to documentation findings title text strings for input fields shown in english language title text for browse and create api documentation buttons in suomi view shown in english language user information text strings are shown in english language screenshots
0
5,671
8,556,325,892
IssuesEvent
2018-11-08 12:50:14
kiwicom/orbit-components
https://api.github.com/repos/kiwicom/orbit-components
closed
<Radio /> pass JSX to the label
Enhancement Processing
**Is your feature request related to a problem? Please describe.** I would like to be able to pass JSX to the label of the Radio component so that I can nest another component inside the label. See the screenshot below for an example use case. **Describe the solution you'd like** To be able to pass JSX as a prop to the label. **Describe alternatives you've considered** Cannot think of any. **Additional context** ![image](https://user-images.githubusercontent.com/16268406/44906791-32533600-ad16-11e8-8bb3-7fccd6d0911d.png)
1.0
<Radio /> pass JSX to the label - **Is your feature request related to a problem? Please describe.** I would like to be able to pass JSX to the label of the Radio component so that I can nest another component inside the label. See the screenshot below for an example use case. **Describe the solution you'd like** To be able to pass JSX as a prop to the label. **Describe alternatives you've considered** Cannot think of any. **Additional context** ![image](https://user-images.githubusercontent.com/16268406/44906791-32533600-ad16-11e8-8bb3-7fccd6d0911d.png)
process
pass jsx to the label is your feature request related to a problem please describe i would like to be able to pass jsx to the label of the radio component so that i can nest another component inside the label see the screenshot below for an example use case describe the solution you d like to be able to pass jsx as a prop to the label describe alternatives you ve considered cannot think of any additional context
1
15,956
20,173,302,597
IssuesEvent
2022-02-10 12:24:48
ooi-data/CE01ISSM-MFD35-04-ADCPTM000-recovered_inst-adcpt_m_wvs_recovered
https://api.github.com/repos/ooi-data/CE01ISSM-MFD35-04-ADCPTM000-recovered_inst-adcpt_m_wvs_recovered
opened
🛑 Processing failed: ZeroDivisionError
process
## Overview `ZeroDivisionError` found in `processing_task` task during run ended on 2022-02-10T12:24:47.545345. ## Details Flow name: `CE01ISSM-MFD35-04-ADCPTM000-recovered_inst-adcpt_m_wvs_recovered` Task name: `processing_task` Error type: `ZeroDivisionError` Error message: division by zero <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing process_dataset( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 135, in process_dataset mod_ds, enc = chunk_ds(ds, max_chunk='100MB') File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 206, in chunk_ds chunks = _calc_chunks(v, max_chunk=max_chunk) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 194, in _calc_chunks max_chunk_size / prod(dim_shape.values()) / variable.dtype.itemsize ZeroDivisionError: division by zero ``` </details>
1.0
🛑 Processing failed: ZeroDivisionError - ## Overview `ZeroDivisionError` found in `processing_task` task during run ended on 2022-02-10T12:24:47.545345. ## Details Flow name: `CE01ISSM-MFD35-04-ADCPTM000-recovered_inst-adcpt_m_wvs_recovered` Task name: `processing_task` Error type: `ZeroDivisionError` Error message: division by zero <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing process_dataset( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 135, in process_dataset mod_ds, enc = chunk_ds(ds, max_chunk='100MB') File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 206, in chunk_ds chunks = _calc_chunks(v, max_chunk=max_chunk) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 194, in _calc_chunks max_chunk_size / prod(dim_shape.values()) / variable.dtype.itemsize ZeroDivisionError: division by zero ``` </details>
process
🛑 processing failed zerodivisionerror overview zerodivisionerror found in processing task task during run ended on details flow name recovered inst adcpt m wvs recovered task name processing task error type zerodivisionerror error message division by zero traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing process dataset file srv conda envs notebook lib site packages ooi harvester processor init py line in process dataset mod ds enc chunk ds ds max chunk file srv conda envs notebook lib site packages ooi harvester processor init py line in chunk ds chunks calc chunks v max chunk max chunk file srv conda envs notebook lib site packages ooi harvester processor init py line in calc chunks max chunk size prod dim shape values variable dtype itemsize zerodivisionerror division by zero
1
6,013
8,822,090,178
IssuesEvent
2019-01-02 07:33:49
linnovate/root
https://api.github.com/repos/linnovate/root
closed
multiple selection my tasks bug
2.0.7 Fixed Process bug
click on ICU click on watched tasks twice click on a multiple selection option on a task it selects every task on the list and you cant un-select it after ![image](https://user-images.githubusercontent.com/38312178/50212172-0c17db80-0383-11e9-8c99-305da04fbe89.png)
1.0
multiple selection my tasks bug - click on ICU click on watched tasks twice click on a multiple selection option on a task it selects every task on the list and you cant un-select it after ![image](https://user-images.githubusercontent.com/38312178/50212172-0c17db80-0383-11e9-8c99-305da04fbe89.png)
process
multiple selection my tasks bug click on icu click on watched tasks twice click on a multiple selection option on a task it selects every task on the list and you cant un select it after
1
34,979
7,497,166,764
IssuesEvent
2018-04-08 17:00:56
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
[Bug?] Example of Bland's Rule for optimize.linprog (simplex) cycling/preventing termination
defect scipy.optimize
Forcing Bland's Rule with the "bland":True options seems to prevent the code example below from terminating, while the purpose of Bland's Rule is to make sure it terminates. The problem is solved quickly when not forcing Bland's Rule, giving an optimal value of -6.044533469014448. By "not terminating" i mean that it's still running after running for a couple of hours on a MacBook Pro, while my own implementation of Bland's Rule finds it in seconds. ### Reproducing code example: ``` import numpy as np import scipy.optimize as opt import sys np.random.seed(4) m = int(np.round(10 * np.exp(np.log(50) * np.random.rand()))) n = int(np.round(10 * np.exp(np.log(50) * np.random.rand()))) c, A, b = np.round(10*np.random.randn(n)),np.round(10*np.random.randn(m,n)),np.round(10*np.abs(np.random.randn(m))) opt.linprog(-1*c, A, b, options={"maxiter": sys.maxsize, "bland": True}) ``` ### Scipy/Numpy/Python version information: 1.0.0 1.14.0 sys.version_info(major=3, minor=6, micro=4, releaselevel='final', serial=0)
1.0
[Bug?] Example of Bland's Rule for optimize.linprog (simplex) cycling/preventing termination - Forcing Bland's Rule with the "bland":True options seems to prevent the code example below from terminating, while the purpose of Bland's Rule is to make sure it terminates. The problem is solved quickly when not forcing Bland's Rule, giving an optimal value of -6.044533469014448. By "not terminating" i mean that it's still running after running for a couple of hours on a MacBook Pro, while my own implementation of Bland's Rule finds it in seconds. ### Reproducing code example: ``` import numpy as np import scipy.optimize as opt import sys np.random.seed(4) m = int(np.round(10 * np.exp(np.log(50) * np.random.rand()))) n = int(np.round(10 * np.exp(np.log(50) * np.random.rand()))) c, A, b = np.round(10*np.random.randn(n)),np.round(10*np.random.randn(m,n)),np.round(10*np.abs(np.random.randn(m))) opt.linprog(-1*c, A, b, options={"maxiter": sys.maxsize, "bland": True}) ``` ### Scipy/Numpy/Python version information: 1.0.0 1.14.0 sys.version_info(major=3, minor=6, micro=4, releaselevel='final', serial=0)
non_process
example of bland s rule for optimize linprog simplex cycling preventing termination forcing bland s rule with the bland true options seems to prevent the code example below from terminating while the purpose of bland s rule is to make sure it terminates the problem is solved quickly when not forcing bland s rule giving an optimal value of by not terminating i mean that it s still running after running for a couple of hours on a macbook pro while my own implementation of bland s rule finds it in seconds reproducing code example import numpy as np import scipy optimize as opt import sys np random seed m int np round np exp np log np random rand n int np round np exp np log np random rand c a b np round np random randn n np round np random randn m n np round np abs np random randn m opt linprog c a b options maxiter sys maxsize bland true scipy numpy python version information sys version info major minor micro releaselevel final serial
0
15,815
20,014,061,371
IssuesEvent
2022-02-01 10:12:09
gradle/gradle
https://api.github.com/repos/gradle/gradle
closed
Wrong code gets generated when using MapStruct decorators with incremental compilation in Gradle 7.3
@execution in:annotation-processing
I've run into a problem with using MapStruct's annotation processor with incremental compilation after upgrading to Gradle 7.3, I have also created a MapStruct issue here: https://github.com/mapstruct/mapstruct/issues/2682 ### Expected Behavior Expected the generated `PersonaMapperImpl.java` after incremental compilation to be: ```java @Generated( value = "org.mapstruct.ap.MappingProcessor", date = "2021-12-08T14:16:55+1300", comments = "version: 1.5.0.Beta1, compiler: IncrementalProcessingEnvironment from gradle-language-java-7.3.1.jar, environment: Java 17 (Private Build)" ) @Component @Primary class PersonMapperImpl extends PersonMapperDecorator { } ``` ### Current Behavior Actual content of `PersonaMapperImpl.java`: ```java @Generated( value = "org.mapstruct.ap.MappingProcessor", date = "2021-12-08T14:14:11+1300", comments = "version: 1.5.0.Beta1, compiler: IncrementalProcessingEnvironment from gradle-language-java-7.3.1.jar, environment: Java 17 (Private Build)" ) @Component class PersonMapperImpl implements PersonMapper { @Override public PersonDto personToPersonDto(Person person) { if ( person == null ) { return null; } PersonDto personDto = new PersonDto(); personDto.name = person.name; return personDto; } } ``` ### Context Used to work as expected in Gradle 7.2 but no longer works in 7.3, I have to force a full compilation to get the expected code to be generated, which is not practical for a large project. ### Steps to Reproduce 1. Check out the demo project for this issue: https://github.com/laech/mapstruct-spring-gradle-incremental-issue 2. `./gradlew compileJava` (`PersonaMapperImpl.java` has the correct content at this point in time) 3. Then make a change in `PersonMapperDecorator.java`, such as adding a random comment. 4. `./gradlew compileJava` (this will run an incremental build, `PersonaMapperImpl.java` now has the wrong content) ### Your Environment Ubuntu 21.10 JDK 17
1.0
Wrong code gets generated when using MapStruct decorators with incremental compilation in Gradle 7.3 - I've run into a problem with using MapStruct's annotation processor with incremental compilation after upgrading to Gradle 7.3, I have also created a MapStruct issue here: https://github.com/mapstruct/mapstruct/issues/2682 ### Expected Behavior Expected the generated `PersonaMapperImpl.java` after incremental compilation to be: ```java @Generated( value = "org.mapstruct.ap.MappingProcessor", date = "2021-12-08T14:16:55+1300", comments = "version: 1.5.0.Beta1, compiler: IncrementalProcessingEnvironment from gradle-language-java-7.3.1.jar, environment: Java 17 (Private Build)" ) @Component @Primary class PersonMapperImpl extends PersonMapperDecorator { } ``` ### Current Behavior Actual content of `PersonaMapperImpl.java`: ```java @Generated( value = "org.mapstruct.ap.MappingProcessor", date = "2021-12-08T14:14:11+1300", comments = "version: 1.5.0.Beta1, compiler: IncrementalProcessingEnvironment from gradle-language-java-7.3.1.jar, environment: Java 17 (Private Build)" ) @Component class PersonMapperImpl implements PersonMapper { @Override public PersonDto personToPersonDto(Person person) { if ( person == null ) { return null; } PersonDto personDto = new PersonDto(); personDto.name = person.name; return personDto; } } ``` ### Context Used to work as expected in Gradle 7.2 but no longer works in 7.3, I have to force a full compilation to get the expected code to be generated, which is not practical for a large project. ### Steps to Reproduce 1. Check out the demo project for this issue: https://github.com/laech/mapstruct-spring-gradle-incremental-issue 2. `./gradlew compileJava` (`PersonaMapperImpl.java` has the correct content at this point in time) 3. Then make a change in `PersonMapperDecorator.java`, such as adding a random comment. 4. `./gradlew compileJava` (this will run an incremental build, `PersonaMapperImpl.java` now has the wrong content) ### Your Environment Ubuntu 21.10 JDK 17
process
wrong code gets generated when using mapstruct decorators with incremental compilation in gradle i ve run into a problem with using mapstruct s annotation processor with incremental compilation after upgrading to gradle i have also created a mapstruct issue here expected behavior expected the generated personamapperimpl java after incremental compilation to be java generated value org mapstruct ap mappingprocessor date comments version compiler incrementalprocessingenvironment from gradle language java jar environment java private build component primary class personmapperimpl extends personmapperdecorator current behavior actual content of personamapperimpl java java generated value org mapstruct ap mappingprocessor date comments version compiler incrementalprocessingenvironment from gradle language java jar environment java private build component class personmapperimpl implements personmapper override public persondto persontopersondto person person if person null return null persondto persondto new persondto persondto name person name return persondto context used to work as expected in gradle but no longer works in i have to force a full compilation to get the expected code to be generated which is not practical for a large project steps to reproduce check out the demo project for this issue gradlew compilejava personamapperimpl java has the correct content at this point in time then make a change in personmapperdecorator java such as adding a random comment gradlew compilejava this will run an incremental build personamapperimpl java now has the wrong content your environment ubuntu jdk
1
10,347
13,172,726,999
IssuesEvent
2020-08-11 18:58:06
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Kubernetes: Continuous Deployment from master
P2 enhancement process
**Problem** We should continuously deploy master to a development environment Kubernetes cluster. **Solution** - Implement #625 - Use FluxCD **Alternatives** **Additional Context**
1.0
Kubernetes: Continuous Deployment from master - **Problem** We should continuously deploy master to a development environment Kubernetes cluster. **Solution** - Implement #625 - Use FluxCD **Alternatives** **Additional Context**
process
kubernetes continuous deployment from master problem we should continuously deploy master to a development environment kubernetes cluster solution implement use fluxcd alternatives additional context
1
22,126
30,671,535,739
IssuesEvent
2023-07-25 23:08:46
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
opilot 1.0.0.dev0 has 2 GuardDog issues
guarddog exec-base64 silent-process-execution
https://pypi.org/project/opilot https://inspector.pypi.io/project/opilot ```{ "dependency": "opilot", "version": "1.0.0.dev0", "result": { "issues": 2, "errors": {}, "results": { "silent-process-execution": [ { "location": "opilot-1.0.0.dev0/sky/skylet/log_lib.py:219", "code": " subprocess.Popen(\n daemon_cmd,\n start_new_session=True,\n # Suppress output\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n # Disa... )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ], "exec-base64": [ { "location": "opilot-1.0.0.dev0/sky/cloud_stores.py:113", "code": " p = subprocess.run(command,\n stdout=subprocess.PIPE,\n shell=True,\n check=True,\n executable='/bin/bash')", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ] }, "path": "/tmp/tmpysbzk0ef/opilot" } }```
1.0
opilot 1.0.0.dev0 has 2 GuardDog issues - https://pypi.org/project/opilot https://inspector.pypi.io/project/opilot ```{ "dependency": "opilot", "version": "1.0.0.dev0", "result": { "issues": 2, "errors": {}, "results": { "silent-process-execution": [ { "location": "opilot-1.0.0.dev0/sky/skylet/log_lib.py:219", "code": " subprocess.Popen(\n daemon_cmd,\n start_new_session=True,\n # Suppress output\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n # Disa... )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ], "exec-base64": [ { "location": "opilot-1.0.0.dev0/sky/cloud_stores.py:113", "code": " p = subprocess.run(command,\n stdout=subprocess.PIPE,\n shell=True,\n check=True,\n executable='/bin/bash')", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ] }, "path": "/tmp/tmpysbzk0ef/opilot" } }```
process
opilot has guarddog issues dependency opilot version result issues errors results silent process execution location opilot sky skylet log lib py code subprocess popen n daemon cmd n start new session true n suppress output n stdout subprocess devnull n stderr subprocess devnull n disa message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null exec location opilot sky cloud stores py code p subprocess run command n stdout subprocess pipe n shell true n check true n executable bin bash message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n path tmp opilot
1
22,613
31,837,252,653
IssuesEvent
2023-09-14 14:12:48
h4sh5/npm-auto-scanner
https://api.github.com/repos/h4sh5/npm-auto-scanner
opened
@plone/volto 16.24.0 has 29 guarddog issues
npm-install-script shady-links npm-silent-process-execution
```{"npm-install-script":[{"code":" \"postinstall\": \"make patches\",","location":"package/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run setup-burntsushi-toml-suite \u0026\u0026 npm run setup-iarna-toml-suite\"","location":"package/src/addons/volto-test-addon/node_modules/@iarna/toml/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/@sindresorhus/is/package.json:22","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/@szmarczak/http-timer/package.json:13","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main \u0026\u0026 npm run build:bin\"","location":"package/src/addons/volto-test-addon/node_modules/acorn/package.json:32","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run lint \u0026\u0026 npm run flow \u0026\u0026 npm run test \u0026\u0026 npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/axobject-query/package.json:11","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/cacheable-request/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/ci-info/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"node -e \\\"try{require('./postinstall')}catch(e){}\\\"\"","location":"package/src/addons/volto-test-addon/node_modules/core-js-pure/package.json:71","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/defer-to-connect/package.json:14","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run cleanup \u0026\u0026 npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/form-data-encoder/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/got/package.json:20","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"rollup -c\"","location":"package/src/addons/volto-test-addon/node_modules/is-plain-object/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"yarn build\",","location":"package/src/addons/volto-test-addon/node_modules/keyv/package.json:8","message":"The package.json 
has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/node-fetch/package.json:19","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/package-json/node_modules/got/package.json:18","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"del-cli dist \u0026\u0026 BABEL_ENV=publish babel src --out-dir dist --ignore /__tests__/\",","location":"package/src/addons/volto-test-addon/node_modules/postcss-selector-parser/package.json:37","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/src/addons/volto-test-addon/node_modules/proxy-agent/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/src/addons/volto-test-addon/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/schema-utils/package.json:37","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/stylelint-config-sass-guidelines/node_modules/postcss-sorting/package.json:52","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/stylelint-config-sass-guidelines/node_modules/stylelint-order/package.json:56","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/stylelint-scss/package.json:66","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/web-streams-polyfill/package.json:27","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/src/addons/volto-test-addon/node_modules/yargs-parser/package.json:34","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.#options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore',\n\t\t}).unref();","location":"package/src/addons/volto-test-addon/node_modules/update-notifier/update-notifier.js:111","message":"This package is silently executing another executable"}],"shady-links":[{"code":"// https://tc39.es/ecma262/#sec-string.prototype.link","location":"package/src/addons/volto-test-addon/node_modules/core-js-pure/modules/es.string.link.js:7","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" // collaborator checks. 
Ref: https://bit.ly/2vsyRzu","location":"package/src/addons/volto-test-addon/node_modules/release-it/lib/plugin/github/GitHub.js:67","message":"This package contains an URL to a domain with a suspicious extension"}]}```
1.0
@plone/volto 16.24.0 has 29 guarddog issues - ```{"npm-install-script":[{"code":" \"postinstall\": \"make patches\",","location":"package/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run setup-burntsushi-toml-suite \u0026\u0026 npm run setup-iarna-toml-suite\"","location":"package/src/addons/volto-test-addon/node_modules/@iarna/toml/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/@sindresorhus/is/package.json:22","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/@szmarczak/http-timer/package.json:13","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main \u0026\u0026 npm run build:bin\"","location":"package/src/addons/volto-test-addon/node_modules/acorn/package.json:32","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run lint \u0026\u0026 npm run flow \u0026\u0026 npm run test \u0026\u0026 npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/axobject-query/package.json:11","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/cacheable-request/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/ci-info/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"node -e \\\"try{require('./postinstall')}catch(e){}\\\"\"","location":"package/src/addons/volto-test-addon/node_modules/core-js-pure/package.json:71","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/defer-to-connect/package.json:14","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run cleanup \u0026\u0026 npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/form-data-encoder/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/got/package.json:20","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"rollup -c\"","location":"package/src/addons/volto-test-addon/node_modules/is-plain-object/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"yarn 
build\",","location":"package/src/addons/volto-test-addon/node_modules/keyv/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/node-fetch/package.json:19","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/package-json/node_modules/got/package.json:18","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"del-cli dist \u0026\u0026 BABEL_ENV=publish babel src --out-dir dist --ignore /__tests__/\",","location":"package/src/addons/volto-test-addon/node_modules/postcss-selector-parser/package.json:37","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/src/addons/volto-test-addon/node_modules/proxy-agent/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/src/addons/volto-test-addon/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/src/addons/volto-test-addon/node_modules/schema-utils/package.json:37","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/stylelint-config-sass-guidelines/node_modules/postcss-sorting/package.json:52","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/stylelint-config-sass-guidelines/node_modules/stylelint-order/package.json:56","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/src/addons/volto-test-addon/node_modules/stylelint-scss/package.json:66","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\"","location":"package/src/addons/volto-test-addon/node_modules/web-streams-polyfill/package.json:27","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/src/addons/volto-test-addon/node_modules/yargs-parser/package.json:34","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.#options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore',\n\t\t}).unref();","location":"package/src/addons/volto-test-addon/node_modules/update-notifier/update-notifier.js:111","message":"This package is silently executing another executable"}],"shady-links":[{"code":"// 
https://tc39.es/ecma262/#sec-string.prototype.link","location":"package/src/addons/volto-test-addon/node_modules/core-js-pure/modules/es.string.link.js:7","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" // collaborator checks. Ref: https://bit.ly/2vsyRzu","location":"package/src/addons/volto-test-addon/node_modules/release-it/lib/plugin/github/GitHub.js:67","message":"This package contains an URL to a domain with a suspicious extension"}]}```
process
plone volto has guarddog issues npm install script npm silent process execution n t t tdetached true n t t tstdio ignore n t t unref location package src addons volto test addon node modules update notifier update notifier js message this package is silently executing another executable shady links
1
114,211
17,195,798,287
IssuesEvent
2021-07-16 17:09:39
harrinry/carbon
https://api.github.com/repos/harrinry/carbon
opened
CVE-2020-28500 (Medium) detected in lodash-4.17.15.tgz
security vulnerability
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p> <p>Path to dependency file: carbon/package.json</p> <p>Path to vulnerable library: carbon/node_modules/@commitlint/ensure/node_modules/lodash/package.json,carbon/node_modules/@commitlint/load/node_modules/lodash/package.json,carbon/node_modules/@commitlint/lint/node_modules/lodash/package.json,carbon/node_modules/@commitlint/resolve-extends/node_modules/lodash/package.json,carbon/node_modules/@commitlint/cli/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - cli-8.3.5.tgz (Root Library) - :x: **lodash-4.17.15.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/harrinry/carbon/commit/94195156354fb4a892f42b4f0adb11e9d40c606b">94195156354fb4a892f42b4f0adb11e9d40c606b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash-4.17.21</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@commitlint/cli:8.3.5;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-28500 (Medium) detected in lodash-4.17.15.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p> <p>Path to dependency file: carbon/package.json</p> <p>Path to vulnerable library: carbon/node_modules/@commitlint/ensure/node_modules/lodash/package.json,carbon/node_modules/@commitlint/load/node_modules/lodash/package.json,carbon/node_modules/@commitlint/lint/node_modules/lodash/package.json,carbon/node_modules/@commitlint/resolve-extends/node_modules/lodash/package.json,carbon/node_modules/@commitlint/cli/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - cli-8.3.5.tgz (Root Library) - :x: **lodash-4.17.15.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/harrinry/carbon/commit/94195156354fb4a892f42b4f0adb11e9d40c606b">94195156354fb4a892f42b4f0adb11e9d40c606b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash-4.17.21</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@commitlint/cli:8.3.5;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file carbon package json path to vulnerable library carbon node modules commitlint ensure node modules lodash package json carbon node modules commitlint load node modules lodash package json carbon node modules commitlint lint node modules lodash package json carbon node modules commitlint resolve extends node modules lodash package json carbon node modules commitlint cli node modules lodash package json dependency hierarchy cli tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree commitlint cli lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions vulnerabilityurl
0
18,324
24,443,109,370
IssuesEvent
2022-10-06 15:49:35
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/transform]: Allow transformation of Trace attribute keys
good first issue priority:p2 processor/transform pkg/ottl
**Is your feature request related to a problem? Please describe.** I want to able to rename Trace attribute keys with the transform processor. **Describe the solution you'd like** ```yaml transform/key_rename: traces: queries: - replace_all_patterns(attributes.keys, "^prefix_+", "prefix.") ``` **Describe alternatives you've considered** None as I know of no other alternatives. **Additional context** Regex named capture groups don't allow for `.` characters in the capture group and I want my attribute keys to follow the Otel naming standard of using `.`s. **Attempted Implementation** ```yaml attributes/extract: actions: - key: http.url pattern: ^(?P<prefix_http_protocol>.*):\/\/(?P<prefix_http_domain>.*)\/(?P<prefix_http_path>.*) action: extract transform/key_rename: traces: queries: - replace_all_patterns(attributes.keys, "^prefix_+", "prefix.") ``` **Code Change Required** https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/internal/traces/traces.go#L147
1.0
[processor/transform]: Allow transformation of Trace attribute keys - **Is your feature request related to a problem? Please describe.** I want to able to rename Trace attribute keys with the transform processor. **Describe the solution you'd like** ```yaml transform/key_rename: traces: queries: - replace_all_patterns(attributes.keys, "^prefix_+", "prefix.") ``` **Describe alternatives you've considered** None as I know of no other alternatives. **Additional context** Regex named capture groups don't allow for `.` characters in the capture group and I want my attribute keys to follow the Otel naming standard of using `.`s. **Attempted Implementation** ```yaml attributes/extract: actions: - key: http.url pattern: ^(?P<prefix_http_protocol>.*):\/\/(?P<prefix_http_domain>.*)\/(?P<prefix_http_path>.*) action: extract transform/key_rename: traces: queries: - replace_all_patterns(attributes.keys, "^prefix_+", "prefix.") ``` **Code Change Required** https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/internal/traces/traces.go#L147
process
allow transformation of trace attribute keys is your feature request related to a problem please describe i want to able to rename trace attribute keys with the transform processor describe the solution you d like yaml transform key rename traces queries replace all patterns attributes keys prefix prefix describe alternatives you ve considered none as i know of no other alternatives additional context regex named capture groups don t allow for characters in the capture group and i want my attribute keys to follow the otel naming standard of using s attempted implementation yaml attributes extract actions key http url pattern p p p action extract transform key rename traces queries replace all patterns attributes keys prefix prefix code change required
1
16,959
22,320,861,669
IssuesEvent
2022-06-14 06:12:58
hashgraph/hedera-json-rpc-relay
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
closed
Add acceptance test support for eth_getTransactionReceipt
enhancement P2 process
### Problem The current acceptance tests implemented in #119 were not able to include `eth_getTransactionReceipt` since it kept returning ``` err: { "type": "PrecheckStatusError", "message": "transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID", "stack": StatusError: transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID at new PrecheckStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/PrecheckStatusError.cjs:43:5) at AccountInfoQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:431:12) at CostQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/CostQuery.cjs:155:24) at CostQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:519:22) at processTicksAndRejections (node:internal/process/task_queues:96:5) at AccountInfoQuery.getCost (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/account/AccountInfoQuery.cjs:144:16) at AccountInfoQuery._beforeExecute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:267:28) at AccountInfoQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:411:5) "name": "StatusError", "status": { "_code": 15 }, ``` ### Solution Add a test that calls `eth_getTransactionReceipt` for the contract executions (EthereumTranaction and non) in setup ### Alternatives _No response_
1.0
Add acceptance test support for eth_getTransactionReceipt - ### Problem The current acceptance tests implemented in #119 were not able to include `eth_getTransactionReceipt` since it kept returning ``` err: { "type": "PrecheckStatusError", "message": "transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID", "stack": StatusError: transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID at new PrecheckStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/PrecheckStatusError.cjs:43:5) at AccountInfoQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:431:12) at CostQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/CostQuery.cjs:155:24) at CostQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:519:22) at processTicksAndRejections (node:internal/process/task_queues:96:5) at AccountInfoQuery.getCost (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/account/AccountInfoQuery.cjs:144:16) at AccountInfoQuery._beforeExecute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:267:28) at AccountInfoQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:411:5) "name": "StatusError", "status": { "_code": 15 }, ``` ### Solution Add a test that calls `eth_getTransactionReceipt` for the contract executions (EthereumTranaction and non) in setup ### Alternatives _No response_
process
add acceptance test support for eth gettransactionreceipt problem the current acceptance tests implemented in were not able to include eth gettransactionreceipt since it kept returning err type precheckstatuserror message transaction failed precheck with status invalid account id stack statuserror transaction failed precheck with status invalid account id at new precheckstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib precheckstatuserror cjs at accountinfoquery mapstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib query query cjs at costquery mapstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib query costquery cjs at costquery execute hedera json rpc relay packages relay node modules hashgraph sdk lib executable cjs at processticksandrejections node internal process task queues at accountinfoquery getcost hedera json rpc relay packages relay node modules hashgraph sdk lib account accountinfoquery cjs at accountinfoquery beforeexecute hedera json rpc relay packages relay node modules hashgraph sdk lib query query cjs at accountinfoquery execute hedera json rpc relay packages relay node modules hashgraph sdk lib executable cjs name statuserror status code solution add a test that calls eth gettransactionreceipt for the contract executions ethereumtranaction and non in setup alternatives no response
1
11,486
13,482,196,140
IssuesEvent
2020-09-11 00:56:50
enigma-dev/enigma-dev
https://api.github.com/repos/enigma-dev/enigma-dev
closed
Show Debug Message
Compatibility Editor
It seems 8 years ago polygonz (7a00fb0fba7c4c1cf767e6929810bc89edf66604) implemented `show_debug_message` in the regular terminal I/O instead of the Widget Systems. Need to test GM8.1 and GMSv1.4 to see what the behavior is. User reported that ENIGMA's only shows the message in debug mode, while GMSv1.4 will show it in run and debug mode.
True
Show Debug Message - It seems 8 years ago polygonz (7a00fb0fba7c4c1cf767e6929810bc89edf66604) implemented `show_debug_message` in the regular terminal I/O instead of the Widget Systems. Need to test GM8.1 and GMSv1.4 to see what the behavior is. User reported that ENIGMA's only shows the message in debug mode, while GMSv1.4 will show it in run and debug mode.
non_process
show debug message it seems years ago polygonz implemented show debug message in the regular terminal i o instead of the widget systems need to test and to see what the behavior is user reported that enigma s only shows the message in debug mode while will show it in run and debug mode
0
96,144
3,964,995,554
IssuesEvent
2016-05-03 05:26:25
firelab/windninja
https://api.github.com/repos/firelab/windninja
closed
UCAR NAM is down
component:wx priority:low severity:high
On their end, even using the web interface I get: `Message missing required fields: cdmHash`
1.0
UCAR NAM is down - On their end, even using the web interface I get: `Message missing required fields: cdmHash`
non_process
ucar nam is down on their end even using the web interface i get message missing required fields cdmhash
0
5,133
7,918,567,851
IssuesEvent
2018-07-04 13:43:37
emacs-ess/ESS
https://api.github.com/repos/emacs-ess/ESS
closed
org-babel-R output broken by recent ESS commit
literate:org process:eval
Recent commit 74d5fbbcde8c2aea5d87c82875e37c134d1af4cb ("Improve on + + replacement and offset detection") has broken org-babel-R output. In particular consider the following example: ``` #+BEGIN_SRC R :session :results output print("hello") #+END_SRC ``` Before the commit, org-babel successfully prints: ``` #+RESULTS: : [1] "hello" ``` However after the commit it fails to capture the output, simply printing: ``` #+RESULTS: ``` I'm not totally sure, but I think the issue is that org-babel prints an extra token to mark the end of the output, and expects this token to be preceded by a prompt, which it later uses to strip away this token. But it can no longer find the prompt so just strips away the whole output. See also: `org-babel-R-evaluate-session`, `org-babel-comint-with-output` EDIT to add: I've also posted this issue to the emacs org-mode mailing list, http://lists.gnu.org/archive/html/emacs-orgmode/2018-07/msg00011.html
1.0
org-babel-R output broken by recent ESS commit - Recent commit 74d5fbbcde8c2aea5d87c82875e37c134d1af4cb ("Improve on + + replacement and offset detection") has broken org-babel-R output. In particular consider the following example: ``` #+BEGIN_SRC R :session :results output print("hello") #+END_SRC ``` Before the commit, org-babel successfully prints: ``` #+RESULTS: : [1] "hello" ``` However after the commit it fails to capture the output, simply printing: ``` #+RESULTS: ``` I'm not totally sure, but I think the issue is that org-babel prints an extra token to mark the end of the output, and expects this token to be preceded by a prompt, which it later uses to strip away this token. But it can no longer find the prompt so just strips away the whole output. See also: `org-babel-R-evaluate-session`, `org-babel-comint-with-output` EDIT to add: I've also posted this issue to the emacs org-mode mailing list, http://lists.gnu.org/archive/html/emacs-orgmode/2018-07/msg00011.html
process
org babel r output broken by recent ess commit recent commit improve on replacement and offset detection has broken org babel r output in particular consider the following example begin src r session results output print hello end src before the commit org babel successfully prints results hello however after the commit it fails to capture the output simply printing results i m not totally sure but i think the issue is that org babel prints an extra token to mark the end of the output and expects this token to be preceded by a prompt which it later uses to strip away this token but it can no longer find the prompt so just strips away the whole output see also org babel r evaluate session org babel comint with output edit to add i ve also posted this issue to the emacs org mode mailing list
1
667
3,137,196,236
IssuesEvent
2015-09-11 00:14:48
MozillaFoundation/plan
https://api.github.com/repos/MozillaFoundation/plan
closed
PROCESS: Figure out and commit to quality-ensuring process
process
Given the stakes of these pages, it is highly desirable to establish a checklist and run through it often. I suspect we'll need two checklists, one for new pages, and one for changes to pages. ## For New Flows (e.g. new forms, new donation flows, new payment processors) - [ ] Dev: GA tracking in place with the right code (currently `UA-35433268-12`) - [ ] Dev: Webmaker Referred ID tracking as appropriate - [ ] Dev: Optimizely snippet in place - [ ] QA: GA tracking showing traffic - [ ] QA: Optimizely dashboard showing events - [ ] QA: Webmaker referrer dashboard showing events I don't know that the above are the right checkboxes, but I do want us to have some and clone them for each new page. ## For Pushes to existing pages We probably want some lighter weight process that happens e.g. daily to catch regressions, and we want more test automation. - [ ] step 1 - [ ] step 2
1.0
PROCESS: Figure out and commit to quality-ensuring process - Given the stakes of these pages, it is highly desirable to establish a checklist and run through it often. I suspect we'll need two checklists, one for new pages, and one for changes to pages. ## For New Flows (e.g. new forms, new donation flows, new payment processors) - [ ] Dev: GA tracking in place with the right code (currently `UA-35433268-12`) - [ ] Dev: Webmaker Referred ID tracking as appropriate - [ ] Dev: Optimizely snippet in place - [ ] QA: GA tracking showing traffic - [ ] QA: Optimizely dashboard showing events - [ ] QA: Webmaker referrer dashboard showing events I don't know that the above are the right checkboxes, but I do want us to have some and clone them for each new page. ## For Pushes to existing pages We probably want some lighter weight process that happens e.g. daily to catch regressions, and we want more test automation. - [ ] step 1 - [ ] step 2
process
process figure out and commit to quality ensuring process given the stakes of these pages it is highly desirable to establish a checklist and run through it often i suspect we ll need two checklists one for new pages and one for changes to pages for new flows e g new forms new donation flows new payment processors dev ga tracking in place with the right code currently ua dev webmaker referred id tracking as appropriate dev optimizely snippet in place qa ga tracking showing traffic qa optimizely dashboard showing events qa webmaker referrer dashboard showing events i don t know that the above are the right checkboxes but i do want us to have some and clone them for each new page for pushes to existing pages we probably want some lighter weight process that happens e g daily to catch regressions and we want more test automation step step
1
4,823
24,857,939,086
IssuesEvent
2022-10-27 05:15:40
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Bug: sam invoke local - Docker 404 Client Error: Not Found
type/question blocked/more-info-needed maintainer/need-followup
### Description: I'm having a hard time trying to locally invoking lambda code with hello-world templates ### Steps to reproduce: `sam init -r python3.8` `1` `1` `N` `sam-app-py38` `cd sam-app-py38` `sam build --debug` 2022-10-19 19:05:12,888 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:05:12,888 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:05:12,888 | Expand command line arguments to: 2022-10-19 19:05:12,888 | --template_file=/PATH_TO_SAM_APP/template.yml --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache 2022-10-19 19:05:13,127 | 'build' command is called 2022-10-19 19:05:13,127 | Template is not provided in context, skip adding project type metric 2022-10-19 19:05:13,154 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '5dab619d-ddc3-435c-a390-8deddd4ff231', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': '109eed1c-a0cf-4170-a22a-0b43307498a9', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'gitOrigin': None, 'projectName': '894b4894943a1373a55cb1b7c0a8aa33530234fd6f4de1e562bc37c62c949767', 'initialCommit': None}, 'duration': 265, 'exitReason': 'TemplateNotFoundException', 'exitCode': 1}}]} 2022-10-19 19:05:13,853 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) Error: Template file not found at /PATH_TO_SAM_APP/template.yml afrigerio@Alessandros-MacBook-Pro lambdas % cd sam-app-py38 afrigerio@Alessandros-MacBook-Pro sam-app-py38 % sam build --debug 2022-10-19 19:06:07,009 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:06:07,009 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:06:07,009 | Expand command line arguments to: 2022-10-19 19:06:07,009 | --template_file=/PATH_TO_SAM_APP/sam-app-py38/template.yaml --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache 2022-10-19 19:06:07,144 | 'build' command is called 2022-10-19 19:06:07,147 | No Parameters detected in the template 2022-10-19 19:06:07,158 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,158 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,158 | 0 stacks found in the template 2022-10-19 19:06:07,158 | No Parameters detected in the template 2022-10-19 19:06:07,163 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,164 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,164 | 2 resources found in the stack 2022-10-19 19:06:07,164 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2022-10-19 19:06:07,164 | --base-dir is not presented, adjusting uri hello_world/ relative to /PATH_TO_SAM_APP/sam-app-py38/template.yaml 2022-10-19 19:06:07,166 | Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in 
AWS SAM. It could result in unexpected behaviors and is not recommended. 2022-10-19 19:06:07,166 | 2 resources found in the stack 2022-10-19 19:06:07,166 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2022-10-19 19:06:07,167 | Instantiating build definitions 2022-10-19 19:06:07,167 | No previous build graph found, generating new one 2022-10-19 19:06:07,167 | Unique function build definition found, adding as new (Function Build Definition: BuildDefinition(python3.8, /PATH_TO_SAM_APP/sam-app-py38/hello_world, Zip, , 4a7df146-0be6-426a-b355-2e43bc6fce03, {}, {}, x86_64, []), Function: Function(function_id='HelloWorldFunction', name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.8', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/PATH_TO_SAM_APP/sam-app-py38/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata={'SamResourceId': 'HelloWorldFunction'}, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], function_url_config=None, stack_path='')) 2022-10-19 19:06:07,168 | Building codeuri: /PATH_TO_SAM_APP/sam-app-py38/hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: HelloWorldFunction 2022-10-19 19:06:07,168 | Building to following folder /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:06:07,168 | Loading workflow module 'aws_lambda_builders.workflows' 2022-10-19 19:06:07,172 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2022-10-19 19:06:07,173 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)' 2022-10-19 19:06:07,174 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)' 2022-10-19 19:06:07,176 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)' 2022-10-19 19:06:07,178 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)' 2022-10-19 19:06:07,179 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)' 2022-10-19 19:06:07,181 | Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)' 2022-10-19 19:06:07,182 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)' 2022-10-19 19:06:07,183 | Registering workflow 'NodejsNpmEsbuildBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm-esbuild', application_framework=None)' 2022-10-19 19:06:07,183 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2022-10-19 19:06:07,198 | Running workflow 'PythonPipBuilder' 2022-10-19 19:06:07,199 | Running PythonPipBuilder:ResolveDependencies 2022-10-19 19:06:07,216 | calling pip download -r 
/PATH_TO_SAM_APP/sam-app-py38/hello_world/requirements.txt --dest /var/folders/7s/02wlz_dx27q5f_hjqbrsjdsr0000gn/T/tmpm3ffmr1d --exists-action i 2022-10-19 19:06:07,782 | Full dependency closure: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | initial compatible: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | initial incompatible: set() 2022-10-19 19:06:07,783 | Downloading missing wheels: set() 2022-10-19 19:06:07,783 | compatible wheels after second download pass: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Build missing wheels from sdists (C compiling True): set() 2022-10-19 19:06:07,783 | compatible after building wheels (no C compiling): {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Build missing wheels from sdists (C compiling False): set() 2022-10-19 19:06:07,783 | compatible after building wheels (C compiling): {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Final compatible: {idna==3.4(wheel), urllib3==1.26.12(wheel), requests==2.28.1(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Final incompatible: set() 2022-10-19 19:06:07,783 | Final missing wheels: set() 2022-10-19 19:06:07,797 | PythonPipBuilder:ResolveDependencies succeeded 2022-10-19 19:06:07,797 | Running PythonPipBuilder:CopySource 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/requirements.txt) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/requirements.txt) 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/__init__.py) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/__init__.py) 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/app.py) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/app.py) 2022-10-19 19:06:07,798 | PythonPipBuilder:CopySource succeeded 2022-10-19 19:06:07,799 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,799 | 2 resources found in the stack 2022-10-19 19:06:07,799 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' Build Succeeded Built Artifacts : .aws-sam/build Built Template : .aws-sam/build/template.yaml Commands you can use next [*] Validate SAM template: sam validate [*] Invoke Function: sam local invoke [*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch [*] Deploy: sam deploy --guided 2022-10-19 19:06:07,826 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:06:07,826 | Unable to find Click Context for getting session_id. 
2022-10-19 19:06:07,827 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '7ac7183d-5134-44d2-a310-1ee8d30ece10', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b549e766-17a5-4641-b0de-d53095b082cb', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': '0fdcacdf0a5a5c982672247f3c17ee797bd2e5c80105099d05b5329699391655', 'initialCommit': None}, 'duration': 816, 'exitReason': 'success', 'exitCode': 0}}]} 2022-10-19 19:06:07,827 | Sending Telemetry: {'metrics': [{'events': {'requestId': 'dd4677f9-d7d4-4ebb-9214-aa1aebc9712c', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b549e766-17a5-4641-b0de-d53095b082cb', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'metricSpecificAttributes': {'events': [{'event_name': 'BuildWorkflowUsed', 'event_value': 'python-pip', 'thread_id': 4376790400, 'time_stamp': '2022-10-19 17:06:07.168'}, {'event_name': 'BuildFunctionRuntime', 'event_value': 'python3.8', 'thread_id': 4376790400, 'time_stamp': '2022-10-19 17:06:07.801'}]}}}]} 2022-10-19 19:06:08,521 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) 2022-10-19 19:06:08,522 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) `sam local invoke --debug` 2022-10-19 19:15:55,258 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:15:55,258 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:15:55,258 | Expand command line arguments to: 2022-10-19 19:15:55,258 | --template_file=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/template.yaml --event=events/event.json --no_event --layer_cache_basedir=/Users/afrigerio/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1 2022-10-19 19:15:55,258 | local invoke command is called 2022-10-19 19:15:55,261 | No Parameters detected in the template 2022-10-19 19:15:55,270 | Sam customer defined id is more priority than other IDs. Customer defined id for resource HelloWorldFunction is HelloWorldFunction 2022-10-19 19:15:55,270 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:15:55,270 | 0 stacks found in the template 2022-10-19 19:15:55,270 | No Parameters detected in the template 2022-10-19 19:15:55,275 | Sam customer defined id is more priority than other IDs. 
Customer defined id for resource HelloWorldFunction is HelloWorldFunction 2022-10-19 19:15:55,275 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:15:55,276 | 2 resources found in the stack 2022-10-19 19:15:55,276 | Found Serverless function with name='HelloWorldFunction' and CodeUri='HelloWorldFunction' 2022-10-19 19:15:55,276 | --base-dir is not presented, adjusting uri HelloWorldFunction relative to /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/template.yaml 2022-10-19 19:15:55,299 | Found one Lambda function with name 'HelloWorldFunction' 2022-10-19 19:15:55,299 | Invoking app.lambda_handler (python3.8) 2022-10-19 19:15:55,299 | No environment variables found for function 'HelloWorldFunction' 2022-10-19 19:15:55,299 | Loading AWS credentials from session with profile 'None' 2022-10-19 19:15:55,304 | Resolving code path. Cwd=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build, CodeUri=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:15:55,304 | Resolved absolute path to code is /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:15:55,305 | Code /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction is not a zip/jar file 2022-10-19 19:15:55,307 | Cleaning all decompressed code dirs 2022-10-19 19:15:55,333 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '94dcccda-e42d-4b06-b446-ad4bcc6fe6c3', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b2d0e00e-d549-46e3-bab5-0cd6a21e9883', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': '0fdcacdf0a5a5c982672247f3c17ee797bd2e5c80105099d05b5329699391655', 'initialCommit': None}, 'duration': 75, 'exitReason': 'NotFound', 'exitCode': 255}}]} 2022-10-19 19:15:56,064 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. 
(read timeout=0.1) Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status response.raise_for_status() File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/public.ecr.aws/sam/emulation-python3.8:rapid-1.60.0-x86_64/json During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/homebrew/bin/sam", line 8, in <module> sys.exit(cli()) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 176, in wrapped raise exception # pylint: disable=raising-bad-type File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 126, in wrapped return_value = func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/utils/version_checker.py", line 41, in wrapped actual_result = func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/cli/main.py", line 86, in wrapper return func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/invoke/cli.py", line 85, in cli do_cli( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/invoke/cli.py", line 182, in do_cli context.local_lambda_runner.invoke( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/lib/local_lambda.py", line 137, in invoke self.local_runtime.invoke( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 240, in wrapped_func return_value = func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 177, in invoke container = self.create(function_config, 
debug_context, container_host, container_host_interface) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create container = LambdaContainer( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 93, in __init__ image = LambdaContainer._get_image( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 236, in _get_image return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 145, in build self.docker_client.images.get(image_tag) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/models/images.py", line 316, in get return self.prepare_model(self.client.api.inspect_image(name)) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped return f(self, resource_id, *args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/image.py", line 245, in inspect_image return self._result( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 267, in _result self._raise_for_status(response) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 263, in _raise_for_status raise create_api_error_from_http_exception(e) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception raise cls(e, response=response, explanation=explanation) docker.errors.NotFound: 404 Client Error: Not Found ("id not found") ### Expected result: I was expecting the result coming from the lambda execution. I got a similar output if i try `sam local start-api` and then `curl http://127.0.0.1:3000/hello` ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: macOS Monterey (M1) 2. `sam --version`: SAM CLI, version 1.60.0 3. AWS region: eu-central-1
True
Bug: sam invoke local - Docker 404 Client Error: Not Found - ### Description: I'm having a hard time trying to locally invoking lambda code with hello-world templates ### Steps to reproduce: `sam init -r python3.8` `1` `1` `N` `sam-app-py38` `cd sam-app-py38` `sam build --debug` 2022-10-19 19:05:12,888 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:05:12,888 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:05:12,888 | Expand command line arguments to: 2022-10-19 19:05:12,888 | --template_file=/PATH_TO_SAM_APP/template.yml --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache 2022-10-19 19:05:13,127 | 'build' command is called 2022-10-19 19:05:13,127 | Template is not provided in context, skip adding project type metric 2022-10-19 19:05:13,154 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '5dab619d-ddc3-435c-a390-8deddd4ff231', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': '109eed1c-a0cf-4170-a22a-0b43307498a9', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'gitOrigin': None, 'projectName': '894b4894943a1373a55cb1b7c0a8aa33530234fd6f4de1e562bc37c62c949767', 'initialCommit': None}, 'duration': 265, 'exitReason': 'TemplateNotFoundException', 'exitCode': 1}}]} 2022-10-19 19:05:13,853 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) Error: Template file not found at /PATH_TO_SAM_APP/template.yml afrigerio@Alessandros-MacBook-Pro lambdas % cd sam-app-py38 afrigerio@Alessandros-MacBook-Pro sam-app-py38 % sam build --debug 2022-10-19 19:06:07,009 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:06:07,009 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:06:07,009 | Expand command line arguments to: 2022-10-19 19:06:07,009 | --template_file=/PATH_TO_SAM_APP/sam-app-py38/template.yaml --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache 2022-10-19 19:06:07,144 | 'build' command is called 2022-10-19 19:06:07,147 | No Parameters detected in the template 2022-10-19 19:06:07,158 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,158 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,158 | 0 stacks found in the template 2022-10-19 19:06:07,158 | No Parameters detected in the template 2022-10-19 19:06:07,163 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,164 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,164 | 2 resources found in the stack 2022-10-19 19:06:07,164 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2022-10-19 19:06:07,164 | --base-dir is not presented, adjusting uri hello_world/ relative to /PATH_TO_SAM_APP/sam-app-py38/template.yaml 2022-10-19 19:06:07,166 | Your template contains a resource with 
logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended. 2022-10-19 19:06:07,166 | 2 resources found in the stack 2022-10-19 19:06:07,166 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2022-10-19 19:06:07,167 | Instantiating build definitions 2022-10-19 19:06:07,167 | No previous build graph found, generating new one 2022-10-19 19:06:07,167 | Unique function build definition found, adding as new (Function Build Definition: BuildDefinition(python3.8, /PATH_TO_SAM_APP/sam-app-py38/hello_world, Zip, , 4a7df146-0be6-426a-b355-2e43bc6fce03, {}, {}, x86_64, []), Function: Function(function_id='HelloWorldFunction', name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.8', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/PATH_TO_SAM_APP/sam-app-py38/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata={'SamResourceId': 'HelloWorldFunction'}, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], function_url_config=None, stack_path='')) 2022-10-19 19:06:07,168 | Building codeuri: /PATH_TO_SAM_APP/sam-app-py38/hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: HelloWorldFunction 2022-10-19 19:06:07,168 | Building to following folder /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:06:07,168 | Loading workflow module 'aws_lambda_builders.workflows' 2022-10-19 19:06:07,172 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2022-10-19 19:06:07,173 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)' 2022-10-19 19:06:07,174 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)' 2022-10-19 19:06:07,176 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)' 2022-10-19 19:06:07,178 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)' 2022-10-19 19:06:07,179 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)' 2022-10-19 19:06:07,181 | Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)' 2022-10-19 19:06:07,182 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)' 2022-10-19 19:06:07,183 | Registering workflow 'NodejsNpmEsbuildBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm-esbuild', application_framework=None)' 2022-10-19 19:06:07,183 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2022-10-19 19:06:07,198 | Running workflow 'PythonPipBuilder' 2022-10-19 19:06:07,199 | Running PythonPipBuilder:ResolveDependencies 2022-10-19 
19:06:07,216 | calling pip download -r /PATH_TO_SAM_APP/sam-app-py38/hello_world/requirements.txt --dest /var/folders/7s/02wlz_dx27q5f_hjqbrsjdsr0000gn/T/tmpm3ffmr1d --exists-action i 2022-10-19 19:06:07,782 | Full dependency closure: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | initial compatible: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | initial incompatible: set() 2022-10-19 19:06:07,783 | Downloading missing wheels: set() 2022-10-19 19:06:07,783 | compatible wheels after second download pass: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Build missing wheels from sdists (C compiling True): set() 2022-10-19 19:06:07,783 | compatible after building wheels (no C compiling): {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Build missing wheels from sdists (C compiling False): set() 2022-10-19 19:06:07,783 | compatible after building wheels (C compiling): {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Final compatible: {idna==3.4(wheel), urllib3==1.26.12(wheel), requests==2.28.1(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Final incompatible: set() 2022-10-19 19:06:07,783 | Final missing wheels: set() 2022-10-19 19:06:07,797 | PythonPipBuilder:ResolveDependencies succeeded 2022-10-19 19:06:07,797 | Running PythonPipBuilder:CopySource 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/requirements.txt) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/requirements.txt) 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/__init__.py) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/__init__.py) 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/app.py) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/app.py) 2022-10-19 19:06:07,798 | PythonPipBuilder:CopySource succeeded 2022-10-19 19:06:07,799 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,799 | 2 resources found in the stack 2022-10-19 19:06:07,799 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' Build Succeeded Built Artifacts : .aws-sam/build Built Template : .aws-sam/build/template.yaml Commands you can use next [*] Validate SAM template: sam validate [*] Invoke Function: sam local invoke [*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch [*] Deploy: sam deploy --guided 2022-10-19 19:06:07,826 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:06:07,826 | Unable to find Click Context for getting session_id. 
2022-10-19 19:06:07,827 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '7ac7183d-5134-44d2-a310-1ee8d30ece10', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b549e766-17a5-4641-b0de-d53095b082cb', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': '0fdcacdf0a5a5c982672247f3c17ee797bd2e5c80105099d05b5329699391655', 'initialCommit': None}, 'duration': 816, 'exitReason': 'success', 'exitCode': 0}}]} 2022-10-19 19:06:07,827 | Sending Telemetry: {'metrics': [{'events': {'requestId': 'dd4677f9-d7d4-4ebb-9214-aa1aebc9712c', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b549e766-17a5-4641-b0de-d53095b082cb', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'metricSpecificAttributes': {'events': [{'event_name': 'BuildWorkflowUsed', 'event_value': 'python-pip', 'thread_id': 4376790400, 'time_stamp': '2022-10-19 17:06:07.168'}, {'event_name': 'BuildFunctionRuntime', 'event_value': 'python3.8', 'thread_id': 4376790400, 'time_stamp': '2022-10-19 17:06:07.801'}]}}}]} 2022-10-19 19:06:08,521 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) 2022-10-19 19:06:08,522 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) `sam local invoke --debug` 2022-10-19 19:15:55,258 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:15:55,258 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:15:55,258 | Expand command line arguments to: 2022-10-19 19:15:55,258 | --template_file=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/template.yaml --event=events/event.json --no_event --layer_cache_basedir=/Users/afrigerio/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1 2022-10-19 19:15:55,258 | local invoke command is called 2022-10-19 19:15:55,261 | No Parameters detected in the template 2022-10-19 19:15:55,270 | Sam customer defined id is more priority than other IDs. Customer defined id for resource HelloWorldFunction is HelloWorldFunction 2022-10-19 19:15:55,270 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:15:55,270 | 0 stacks found in the template 2022-10-19 19:15:55,270 | No Parameters detected in the template 2022-10-19 19:15:55,275 | Sam customer defined id is more priority than other IDs. 
Customer defined id for resource HelloWorldFunction is HelloWorldFunction 2022-10-19 19:15:55,275 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:15:55,276 | 2 resources found in the stack 2022-10-19 19:15:55,276 | Found Serverless function with name='HelloWorldFunction' and CodeUri='HelloWorldFunction' 2022-10-19 19:15:55,276 | --base-dir is not presented, adjusting uri HelloWorldFunction relative to /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/template.yaml 2022-10-19 19:15:55,299 | Found one Lambda function with name 'HelloWorldFunction' 2022-10-19 19:15:55,299 | Invoking app.lambda_handler (python3.8) 2022-10-19 19:15:55,299 | No environment variables found for function 'HelloWorldFunction' 2022-10-19 19:15:55,299 | Loading AWS credentials from session with profile 'None' 2022-10-19 19:15:55,304 | Resolving code path. Cwd=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build, CodeUri=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:15:55,304 | Resolved absolute path to code is /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:15:55,305 | Code /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction is not a zip/jar file 2022-10-19 19:15:55,307 | Cleaning all decompressed code dirs 2022-10-19 19:15:55,333 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '94dcccda-e42d-4b06-b446-ad4bcc6fe6c3', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b2d0e00e-d549-46e3-bab5-0cd6a21e9883', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': '0fdcacdf0a5a5c982672247f3c17ee797bd2e5c80105099d05b5329699391655', 'initialCommit': None}, 'duration': 75, 'exitReason': 'NotFound', 'exitCode': 255}}]} 2022-10-19 19:15:56,064 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. 
(read timeout=0.1) Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status response.raise_for_status() File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/public.ecr.aws/sam/emulation-python3.8:rapid-1.60.0-x86_64/json During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/homebrew/bin/sam", line 8, in <module> sys.exit(cli()) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 176, in wrapped raise exception # pylint: disable=raising-bad-type File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 126, in wrapped return_value = func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/utils/version_checker.py", line 41, in wrapped actual_result = func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/cli/main.py", line 86, in wrapper return func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/invoke/cli.py", line 85, in cli do_cli( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/invoke/cli.py", line 182, in do_cli context.local_lambda_runner.invoke( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/lib/local_lambda.py", line 137, in invoke self.local_runtime.invoke( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 240, in wrapped_func return_value = func(*args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 177, in invoke container = self.create(function_config, 
debug_context, container_host, container_host_interface) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create container = LambdaContainer( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 93, in __init__ image = LambdaContainer._get_image( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 236, in _get_image return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 145, in build self.docker_client.images.get(image_tag) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/models/images.py", line 316, in get return self.prepare_model(self.client.api.inspect_image(name)) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped return f(self, resource_id, *args, **kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/image.py", line 245, in inspect_image return self._result( File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 267, in _result self._raise_for_status(response) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 263, in _raise_for_status raise create_api_error_from_http_exception(e) File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception raise cls(e, response=response, explanation=explanation) docker.errors.NotFound: 404 Client Error: Not Found ("id not found") ### Expected result: I was expecting the result coming from the lambda execution. I got a similar output if i try `sam local start-api` and then `curl http://127.0.0.1:3000/hello` ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: macOS Monterey (M1) 2. `sam --version`: SAM CLI, version 1.60.0 3. AWS region: eu-central-1
non_process
bug sam invoke local docker client error not found description i m having a hard time trying to locally invoking lambda code with hello world templates steps to reproduce sam init r n sam app cd sam app sam build debug telemetry endpoint configured to be using config file samconfig toml config environment default expand command line arguments to template file path to sam app template yml build dir aws sam build cache dir aws sam cache build command is called template is not provided in context skip adding project type metric sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout error template file not found at path to sam app template yml afrigerio alessandros macbook pro lambdas cd sam app afrigerio alessandros macbook pro sam app sam build debug telemetry endpoint configured to be using config file samconfig toml config environment default expand command line arguments to template file path to sam app sam app template yaml build dir aws sam build cache dir aws sam cache build command is called no parameters detected in the template there is no customer defined id or cdk path defined for resource helloworldfunction so we will use the resource logical id as the resource id there is no customer defined id or cdk path defined for resource serverlessrestapi so we will use the resource logical id as the resource id stacks found in the template no parameters detected in the template there is no customer defined id or cdk path defined for resource helloworldfunction so we will use the resource logical id as the resource id there is no customer defined id or cdk path defined for resource serverlessrestapi so we will use the resource logical id as the resource id resources found in the stack found serverless function with name helloworldfunction and codeuri hello world base dir is not presented adjusting uri hello world relative to path to sam app sam app template yaml your template contains a resource with logical id serverlessrestapi which is a reserved logical id in aws sam it could result in unexpected behaviors and is not recommended resources found in the stack found serverless function with name helloworldfunction and codeuri hello world instantiating build definitions no previous build graph found generating new one unique function build definition found adding as new function build definition builddefinition path to sam app sam app hello world zip function function function id helloworldfunction name helloworldfunction functionname helloworldfunction runtime memory none timeout handler app lambda handler imageuri none packagetype zip imageconfig none codeuri path to sam app sam app hello world environment none rolearn none layers events helloworld type api properties path hello method get restapiid serverlessrestapi metadata samresourceid helloworldfunction inlinecode none codesign config arn none architectures function url config none stack path building codeuri path to sam app sam app hello world runtime metadata architecture functions helloworldfunction building to following folder path to sam app sam app aws sam build helloworldfunction loading workflow module aws lambda builders workflows registering workflow pythonpipbuilder with capability capability language python dependency manager pip application framework none registering workflow nodejsnpmbuilder with capability capability language nodejs dependency manager npm application framework none registering workflow rubybundlerbuilder with capability 
capability language ruby dependency manager bundler application framework none registering workflow gomodulesbuilder with capability capability language go dependency manager modules application framework none registering workflow javagradleworkflow with capability capability language java dependency manager gradle application framework none registering workflow javamavenworkflow with capability capability language java dependency manager maven application framework none registering workflow dotnetclipackagebuilder with capability capability language dotnet dependency manager cli package application framework none registering workflow custommakebuilder with capability capability language provided dependency manager none application framework none registering workflow nodejsnpmesbuildbuilder with capability capability language nodejs dependency manager npm esbuild application framework none found workflow pythonpipbuilder to support capabilities capability language python dependency manager pip application framework none running workflow pythonpipbuilder running pythonpipbuilder resolvedependencies calling pip download r path to sam app sam app hello world requirements txt dest var folders t exists action i full dependency closure wheel requests wheel idna wheel certifi wheel charset normalizer wheel initial compatible wheel requests wheel idna wheel certifi wheel charset normalizer wheel initial incompatible set downloading missing wheels set compatible wheels after second download pass wheel requests wheel idna wheel certifi wheel charset normalizer wheel build missing wheels from sdists c compiling true set compatible after building wheels no c compiling wheel requests wheel idna wheel certifi wheel charset normalizer wheel build missing wheels from sdists c compiling false set compatible after building wheels c compiling wheel requests wheel idna wheel certifi wheel charset normalizer wheel final compatible idna wheel wheel requests wheel certifi wheel charset normalizer wheel final incompatible set final missing wheels set pythonpipbuilder resolvedependencies succeeded running pythonpipbuilder copysource copying source file path to sam app sam app hello world requirements txt to destination path to sam app sam app aws sam build helloworldfunction requirements txt copying source file path to sam app sam app hello world init py to destination path to sam app sam app aws sam build helloworldfunction init py copying source file path to sam app sam app hello world app py to destination path to sam app sam app aws sam build helloworldfunction app py pythonpipbuilder copysource succeeded there is no customer defined id or cdk path defined for resource helloworldfunction so we will use the resource logical id as the resource id resources found in the stack found serverless function with name helloworldfunction and codeuri hello world build succeeded built artifacts aws sam build built template aws sam build template yaml commands you can use next validate sam template sam validate invoke function sam local invoke test function in the cloud sam sync stack name stack name watch deploy sam deploy guided telemetry endpoint configured to be unable to find click context for getting session id sending telemetry metrics sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout sam local invoke debug telemetry 
endpoint configured to be using config file samconfig toml config environment default expand command line arguments to template file path to sam app sam app aws sam build template yaml event events event json no event layer cache basedir users afrigerio aws sam layers pkg container host localhost container host interface local invoke command is called no parameters detected in the template sam customer defined id is more priority than other ids customer defined id for resource helloworldfunction is helloworldfunction there is no customer defined id or cdk path defined for resource serverlessrestapi so we will use the resource logical id as the resource id stacks found in the template no parameters detected in the template sam customer defined id is more priority than other ids customer defined id for resource helloworldfunction is helloworldfunction there is no customer defined id or cdk path defined for resource serverlessrestapi so we will use the resource logical id as the resource id resources found in the stack found serverless function with name helloworldfunction and codeuri helloworldfunction base dir is not presented adjusting uri helloworldfunction relative to path to sam app sam app aws sam build template yaml found one lambda function with name helloworldfunction invoking app lambda handler no environment variables found for function helloworldfunction loading aws credentials from session with profile none resolving code path cwd path to sam app sam app aws sam build codeuri path to sam app sam app aws sam build helloworldfunction resolved absolute path to code is path to sam app sam app aws sam build helloworldfunction code path to sam app sam app aws sam build helloworldfunction is not a zip jar file cleaning all decompressed code dirs sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout traceback most recent call last file opt homebrew cellar aws sam cli libexec lib site packages docker api client py line in raise for status response raise for status file opt homebrew cellar aws sam cli libexec lib site packages requests models py line in raise for status raise httperror http error msg response self requests exceptions httperror client error not found for url http docker localhost images public ecr aws sam emulation rapid json during handling of the above exception another exception occurred traceback most recent call last file opt homebrew bin sam line in sys exit cli file opt homebrew cellar aws sam cli libexec lib site packages click core py line in call return self main args kwargs file opt homebrew cellar aws sam cli libexec lib site packages click core py line in main rv self invoke ctx file opt homebrew cellar aws sam cli libexec lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file opt homebrew cellar aws sam cli libexec lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file opt homebrew cellar aws sam cli libexec lib site packages click core py line in invoke return ctx invoke self callback ctx params file opt homebrew cellar aws sam cli libexec lib site packages click core py line in invoke return callback args kwargs file opt homebrew cellar aws sam cli libexec lib site packages click decorators py line in new func return ctx invoke f obj args kwargs file opt homebrew cellar aws sam cli libexec lib site packages click core py line in invoke return callback args kwargs file opt 
homebrew cellar aws sam cli libexec lib site packages samcli lib telemetry metric py line in wrapped raise exception pylint disable raising bad type file opt homebrew cellar aws sam cli libexec lib site packages samcli lib telemetry metric py line in wrapped return value func args kwargs file opt homebrew cellar aws sam cli libexec lib site packages samcli lib utils version checker py line in wrapped actual result func args kwargs file opt homebrew cellar aws sam cli libexec lib site packages samcli cli main py line in wrapper return func args kwargs file opt homebrew cellar aws sam cli libexec lib site packages samcli commands local invoke cli py line in cli do cli file opt homebrew cellar aws sam cli libexec lib site packages samcli commands local invoke cli py line in do cli context local lambda runner invoke file opt homebrew cellar aws sam cli libexec lib site packages samcli commands local lib local lambda py line in invoke self local runtime invoke file opt homebrew cellar aws sam cli libexec lib site packages samcli lib telemetry metric py line in wrapped func return value func args kwargs file opt homebrew cellar aws sam cli libexec lib site packages samcli local lambdafn runtime py line in invoke container self create function config debug context container host container host interface file opt homebrew cellar aws sam cli libexec lib site packages samcli local lambdafn runtime py line in create container lambdacontainer file opt homebrew cellar aws sam cli libexec lib site packages samcli local docker lambda container py line in init image lambdacontainer get image file opt homebrew cellar aws sam cli libexec lib site packages samcli local docker lambda container py line in get image return lambda image build runtime packagetype image layers architecture function name function name file opt homebrew cellar aws sam cli libexec lib site packages samcli local docker lambda image py line in build self docker client images get image tag file opt homebrew cellar aws sam cli libexec lib site packages docker models images py line in get return self prepare model self client api inspect image name file opt homebrew cellar aws sam cli libexec lib site packages docker utils decorators py line in wrapped return f self resource id args kwargs file opt homebrew cellar aws sam cli libexec lib site packages docker api image py line in inspect image return self result file opt homebrew cellar aws sam cli libexec lib site packages docker api client py line in result self raise for status response file opt homebrew cellar aws sam cli libexec lib site packages docker api client py line in raise for status raise create api error from http exception e file opt homebrew cellar aws sam cli libexec lib site packages docker errors py line in create api error from http exception raise cls e response response explanation explanation docker errors notfound client error not found id not found expected result i was expecting the result coming from the lambda execution i got a similar output if i try sam local start api and then curl additional environment details ex windows mac amazon linux etc os macos monterey sam version sam cli version aws region eu central
0
263,647
28,047,903,624
IssuesEvent
2023-03-29 01:34:42
kapseliboi/vaadin-date-picker
https://api.github.com/repos/kapseliboi/vaadin-date-picker
closed
CVE-2021-23440 (High) detected in set-value-2.0.1.tgz - autoclosed
Mend: dependency security vulnerability
## CVE-2021-23440 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>set-value-2.0.1.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - vaadin-component-dev-dependencies-3.2.0.tgz (Root Library) - stylelint-9.10.1.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - :x: **set-value-2.0.1.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays. Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440. <p>Publish Date: 2021-09-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2021-09-12</p> <p>Fix Resolution: set-value - 4.0.1 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23440 (High) detected in set-value-2.0.1.tgz - autoclosed - ## CVE-2021-23440 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>set-value-2.0.1.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - vaadin-component-dev-dependencies-3.2.0.tgz (Root Library) - stylelint-9.10.1.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - :x: **set-value-2.0.1.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays. Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440. <p>Publish Date: 2021-09-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2021-09-12</p> <p>Fix Resolution: set-value - 4.0.1 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in set value tgz autoclosed cve high severity vulnerability vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file package json path to vulnerable library node modules set value package json dependency hierarchy vaadin component dev dependencies tgz root library stylelint tgz micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library found in base branch master vulnerability details this affects the package set value before a type confusion vulnerability can lead to a bypass of cve when the user provided keys used in the path parameter are arrays mend note after conducting further research mend has determined that all versions of set value up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution set value step up your open source security game with mend
0
122,644
16,195,130,073
IssuesEvent
2021-05-04 13:45:05
devmount/third-stats
https://api.github.com/repos/devmount/third-stats
closed
Empty chart (tooltip only) when only one single data point on abscissa
design enhancement ux
**Describe the bug** If only one entry on x-axis exists (e.g. only one year in the years chart), the chart appears empty. The data point exists, but is only visible on mouseover via tooltip. **To Reproduce** Steps to reproduce the behavior: 1. Go to stats page 2. Set date range to one year only (e.g. 2020-01-01 to 2020-05-05) 3. See years chart empty **Expected behavior** A visible data point making clear that there actually is data. **Screenshots** ![image](https://user-images.githubusercontent.com/5441654/116310510-6377b300-a7aa-11eb-8a34-8ebc8b4acef0.png) **Additional context** Applies to every line chart.
1.0
Empty chart (tooltip only) when only one single data point on abscissa - **Describe the bug** If only one entry on x-axis exists (e.g. only one year in the years chart), the chart appears empty. The data point exists, but is only visible on mouseover via tooltip. **To Reproduce** Steps to reproduce the behavior: 1. Go to stats page 2. Set date range to one year only (e.g. 2020-01-01 to 2020-05-05) 3. See years chart empty **Expected behavior** A visible data point making clear that there actually is data. **Screenshots** ![image](https://user-images.githubusercontent.com/5441654/116310510-6377b300-a7aa-11eb-8a34-8ebc8b4acef0.png) **Additional context** Applies to every line chart.
non_process
empty chart tooltip only when only one single data point on abscissa describe the bug if only one entry on x axis exists e g only one year in the years chart the chart appears empty the data point exists but is only visible on mouseover via tooltip to reproduce steps to reproduce the behavior go to stats page set date range to one year only e g to see years chart empty expected behavior a visible data point making clear that there actually is data screenshots additional context applies to every line chart
0
13,393
15,866,628,738
IssuesEvent
2021-04-08 15:57:17
digitalmethodsinitiative/4cat
https://api.github.com/repos/digitalmethodsinitiative/4cat
closed
Greentext processor
(mostly) back-end enhancement processors
A processor, which extracts greentexts (to then split into sentences, or something?)
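4CAT processors are written in Python. Purely as an illustration of what the extraction step could look like — this is not the actual 4CAT processor API, and the pattern and function name are assumptions — here is a minimal sketch that pulls greentext lines (lines starting with a single `>`, skipping `>>123456` quote-links) out of a post body so they could later be split into sentences:

```python
import re

# Hypothetical helper, not the actual 4CAT processor API: pull "greentext"
# lines (lines starting with a single ">", excluding ">>123456" quote-links)
# out of a post body so they can later be split into sentences.
GREENTEXT_PATTERN = re.compile(r"^>(?!>)\s*(.+)$", re.MULTILINE)

def extract_greentext(post_body):
    """Return the greentext lines found in one post body."""
    return [line.strip() for line in GREENTEXT_PATTERN.findall(post_body)]

if __name__ == "__main__":
    sample = ">be me\n>write a processor\n>>123456\nregular reply text"
    print(extract_greentext(sample))  # ['be me', 'write a processor']
```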
1.0
Greentext processor - A processor, which extracts greentexts (to then split into sentences, or something?)
process
greentext processor a processor which extracts greentexts to then split into sentences or something
1
423,720
12,301,099,287
IssuesEvent
2020-05-11 14:55:51
Olyno/oden
https://api.github.com/repos/Olyno/oden
opened
[module: fs] - RoadMap
feature module: fs priority: high
Here is the roadmap for the ``fs`` module. * - [ ] fs.access(path[, mode], callback) * - [ ] fs.accessSync(path[, mode]) * - [ ] fs.appendFile(path, data[, options], callback) * - [ ] fs.appendFileSync(path, data[, options]) * - [ ] fs.chmod(path, mode, callback) * - [ ] fs.chmodSync(path, mode) * - [ ] fs.chown(path, uid, gid, callback) * - [ ] fs.chownSync(path, uid, gid) * - [ ] fs.close(fd, callback) * - [ ] fs.closeSync(fd) * - [ ] fs.constants * - [ ] fs.copyFile(src, dest[, mode], callback) * - [ ] fs.copyFileSync(src, dest[, mode]) * - [ ] fs.createReadStream(path[, options]) * - [ ] fs.createWriteStream(path[, options]) * - [ ] fs.exists(path, callback) * - [ ] fs.existsSync(path) * - [ ] fs.fchmod(fd, mode, callback) * - [ ] fs.fchmodSync(fd, mode) * - [ ] fs.fchown(fd, uid, gid, callback) * - [ ] fs.fchownSync(fd, uid, gid) * - [ ] fs.fdatasync(fd, callback) * - [ ] fs.fdatasyncSync(fd) * - [ ] fs.fstat(fd[, options], callback) * - [ ] fs.fstatSync(fd[, options]) * - [ ] fs.fsync(fd, callback) * - [ ] fs.fsyncSync(fd) * - [ ] fs.ftruncate(fd[, len], callback) * - [ ] fs.ftruncateSync(fd[, len]) * - [ ] fs.futimes(fd, atime, mtime, callback) * - [ ] fs.futimesSync(fd, atime, mtime) * - [ ] fs.lchmod(path, mode, callback) * - [ ] fs.lchmodSync(path, mode) * - [ ] fs.lchown(path, uid, gid, callback) * - [ ] fs.lchownSync(path, uid, gid) * - [ ] fs.link(existingPath, newPath, callback) * - [ ] fs.linkSync(existingPath, newPath) * - [ ] fs.lstat(path[, options], callback) * - [ ] fs.lstatSync(path[, options]) * - [x] fs.mkdir(path[, options], callback) * - [x] fs.mkdirSync(path[, options]) * - [ ] fs.mkdtemp(prefix[, options], callback) * - [ ] fs.mkdtempSync(prefix[, options]) * - [ ] fs.open(path[, flags[, mode]], callback) * - [ ] fs.opendir(path[, options], callback) * - [ ] fs.opendirSync(path[, options]) * - [ ] fs.openSync(path[, flags, mode]) * - [ ] fs.read(fd, buffer, offset, length, position, callback) * - [ ] fs.read(fd, [options,] callback) * - [x] fs.readdir(path[, options], callback) * - [x] fs.readdirSync(path[, options]) * - [x] fs.readFile(path[, options], callback) * - [x] fs.readFileSync(path[, options]) * - [ ] fs.readlink(path[, options], callback) * - [ ] fs.readlinkSync(path[, options]) * - [ ] fs.readSync(fd, buffer, of offset, length, position) * - [ ] fs.readSync(fd, buffer, [options]) * - [ ] fs.readv(fd, buffers[, position], callback) * - [ ] fs.readvSync(fd, buffers[, position]) * - [ ] fs.realpath(path[, options], callback) * - [ ] fs.realpath.native(path[, options], callback) * - [ ] fs.realpathSync(path[, options]) * - [ ] fs.realpathSync.native(path[, options]) * - [x] fs.rename(oldPath, newPath, callback) * - [x] fs.renameSync(oldPath, newPath) * - [ ] fs.rmdir(path[, options], callback) * - [ ] fs.rmdirSync(path[, options]) * - [x] fs.stat(path[, options], callback) * - [x] fs.statSync(path[, options]) * - [ ] fs.symlink(target, path[, type], callback) * - [ ] fs.symlinkSync(target, path[, type]) * - [ ] fs.truncate(path[, len], callback) * - [ ] fs.truncateSync(path[, len]) * - [ ] fs.unlink(path, callback) * - [ ] fs.unlinkSync(path) * - [ ] fs.unwatchFile(filename[, listener]) * - [ ] fs.utimes(path, atime, mtime, callback) * - [ ] fs.utimesSync(path, atime, mtime) * - [ ] fs.watch(filename[, options][, listener]) * - [ ] fs.watchFile(filename[, options], listener) * - [ ] fs.write(fd, buffer[, offset[, length[, position]]], callback) * - [ ] fs.write(fd, string[, position[, encoding]], callback) * - [ ] fs.writeFile(file, data[, options], 
callback)
1.0
[module: fs] - RoadMap - Here is the roadmap for the ``fs`` module. * - [ ] fs.access(path[, mode], callback) * - [ ] fs.accessSync(path[, mode]) * - [ ] fs.appendFile(path, data[, options], callback) * - [ ] fs.appendFileSync(path, data[, options]) * - [ ] fs.chmod(path, mode, callback) * - [ ] fs.chmodSync(path, mode) * - [ ] fs.chown(path, uid, gid, callback) * - [ ] fs.chownSync(path, uid, gid) * - [ ] fs.close(fd, callback) * - [ ] fs.closeSync(fd) * - [ ] fs.constants * - [ ] fs.copyFile(src, dest[, mode], callback) * - [ ] fs.copyFileSync(src, dest[, mode]) * - [ ] fs.createReadStream(path[, options]) * - [ ] fs.createWriteStream(path[, options]) * - [ ] fs.exists(path, callback) * - [ ] fs.existsSync(path) * - [ ] fs.fchmod(fd, mode, callback) * - [ ] fs.fchmodSync(fd, mode) * - [ ] fs.fchown(fd, uid, gid, callback) * - [ ] fs.fchownSync(fd, uid, gid) * - [ ] fs.fdatasync(fd, callback) * - [ ] fs.fdatasyncSync(fd) * - [ ] fs.fstat(fd[, options], callback) * - [ ] fs.fstatSync(fd[, options]) * - [ ] fs.fsync(fd, callback) * - [ ] fs.fsyncSync(fd) * - [ ] fs.ftruncate(fd[, len], callback) * - [ ] fs.ftruncateSync(fd[, len]) * - [ ] fs.futimes(fd, atime, mtime, callback) * - [ ] fs.futimesSync(fd, atime, mtime) * - [ ] fs.lchmod(path, mode, callback) * - [ ] fs.lchmodSync(path, mode) * - [ ] fs.lchown(path, uid, gid, callback) * - [ ] fs.lchownSync(path, uid, gid) * - [ ] fs.link(existingPath, newPath, callback) * - [ ] fs.linkSync(existingPath, newPath) * - [ ] fs.lstat(path[, options], callback) * - [ ] fs.lstatSync(path[, options]) * - [x] fs.mkdir(path[, options], callback) * - [x] fs.mkdirSync(path[, options]) * - [ ] fs.mkdtemp(prefix[, options], callback) * - [ ] fs.mkdtempSync(prefix[, options]) * - [ ] fs.open(path[, flags[, mode]], callback) * - [ ] fs.opendir(path[, options], callback) * - [ ] fs.opendirSync(path[, options]) * - [ ] fs.openSync(path[, flags, mode]) * - [ ] fs.read(fd, buffer, offset, length, position, callback) * - [ ] fs.read(fd, [options,] callback) * - [x] fs.readdir(path[, options], callback) * - [x] fs.readdirSync(path[, options]) * - [x] fs.readFile(path[, options], callback) * - [x] fs.readFileSync(path[, options]) * - [ ] fs.readlink(path[, options], callback) * - [ ] fs.readlinkSync(path[, options]) * - [ ] fs.readSync(fd, buffer, of offset, length, position) * - [ ] fs.readSync(fd, buffer, [options]) * - [ ] fs.readv(fd, buffers[, position], callback) * - [ ] fs.readvSync(fd, buffers[, position]) * - [ ] fs.realpath(path[, options], callback) * - [ ] fs.realpath.native(path[, options], callback) * - [ ] fs.realpathSync(path[, options]) * - [ ] fs.realpathSync.native(path[, options]) * - [x] fs.rename(oldPath, newPath, callback) * - [x] fs.renameSync(oldPath, newPath) * - [ ] fs.rmdir(path[, options], callback) * - [ ] fs.rmdirSync(path[, options]) * - [x] fs.stat(path[, options], callback) * - [x] fs.statSync(path[, options]) * - [ ] fs.symlink(target, path[, type], callback) * - [ ] fs.symlinkSync(target, path[, type]) * - [ ] fs.truncate(path[, len], callback) * - [ ] fs.truncateSync(path[, len]) * - [ ] fs.unlink(path, callback) * - [ ] fs.unlinkSync(path) * - [ ] fs.unwatchFile(filename[, listener]) * - [ ] fs.utimes(path, atime, mtime, callback) * - [ ] fs.utimesSync(path, atime, mtime) * - [ ] fs.watch(filename[, options][, listener]) * - [ ] fs.watchFile(filename[, options], listener) * - [ ] fs.write(fd, buffer[, offset[, length[, position]]], callback) * - [ ] fs.write(fd, string[, position[, encoding]], callback) * - [ ] 
fs.writeFile(file, data[, options], callback)
non_process
roadmap here is the roadmap for the fs module fs access path callback fs accesssync path fs appendfile path data callback fs appendfilesync path data fs chmod path mode callback fs chmodsync path mode fs chown path uid gid callback fs chownsync path uid gid fs close fd callback fs closesync fd fs constants fs copyfile src dest callback fs copyfilesync src dest fs createreadstream path fs createwritestream path fs exists path callback fs existssync path fs fchmod fd mode callback fs fchmodsync fd mode fs fchown fd uid gid callback fs fchownsync fd uid gid fs fdatasync fd callback fs fdatasyncsync fd fs fstat fd callback fs fstatsync fd fs fsync fd callback fs fsyncsync fd fs ftruncate fd callback fs ftruncatesync fd fs futimes fd atime mtime callback fs futimessync fd atime mtime fs lchmod path mode callback fs lchmodsync path mode fs lchown path uid gid callback fs lchownsync path uid gid fs link existingpath newpath callback fs linksync existingpath newpath fs lstat path callback fs lstatsync path fs mkdir path callback fs mkdirsync path fs mkdtemp prefix callback fs mkdtempsync prefix fs open path callback fs opendir path callback fs opendirsync path fs opensync path fs read fd buffer offset length position callback fs read fd callback fs readdir path callback fs readdirsync path fs readfile path callback fs readfilesync path fs readlink path callback fs readlinksync path fs readsync fd buffer of offset length position fs readsync fd buffer fs readv fd buffers callback fs readvsync fd buffers fs realpath path callback fs realpath native path callback fs realpathsync path fs realpathsync native path fs rename oldpath newpath callback fs renamesync oldpath newpath fs rmdir path callback fs rmdirsync path fs stat path callback fs statsync path fs symlink target path callback fs symlinksync target path fs truncate path callback fs truncatesync path fs unlink path callback fs unlinksync path fs unwatchfile filename fs utimes path atime mtime callback fs utimessync path atime mtime fs watch filename fs watchfile filename listener fs write fd buffer callback fs write fd string callback fs writefile file data callback
0
17,506
23,317,597,479
IssuesEvent
2022-08-08 13:47:28
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Prisma error: Explicit many to many relationship when updating model
bug/2-confirmed kind/bug process/candidate topic: mysql team/client topic: referentialIntegrity priority/high size/unknown
### Bug description I am following the official doc to implement an explicit many to many relationship with an intermediate table that holds additional fields. The error occurs when I want to **update an existing user object**: ```ts const user = await prisma.user.create({ data: { email: "user@exmaple.com", name: "Max", }, }); await prisma.workspace.create({ data: { name: "workspace", users: { create: [ { role: "ADMIN", user: { connect: { id: user.id, }, }, }, ], }, }, }); // The error happens here. await prisma.user.update({ where: { id: user.id }, data: { name: "Bob", }, }); ``` The error says: ``` The column `issue-dev.User.id` does not exist in the current database. ``` (`issue-dev` is the DB name) So I guess it might be something related to MySQL, so I re-setup my DB with psql and it still has the error but with another error message: ``` The table `(not available)` does not exist in the current database. ``` I was using the explicit many-to-many relationship and it worked well when updating the existing user object. So I think it might be a Prisma bug. Thanks in advance for helping ### How to reproduce <!-- 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> 1. I made a repo with minimal reproducible script: https://github.com/renyuanz/prisma-mysql-m2m-update-issue 2. Clone the repo and install, and setup a MySQL 3. Run `npx ts-node index.ts` 4. See error ### Expected behavior _No response_ ### Prisma information <!-- Do not include your database credentials when sharing your Prisma schema! --> ```prisma model User { id String @id @default(cuid()) createdAt DateTime @default(now()) updatedAt DateTime @default(now()) email String @unique workspaces UsersOnWorkspaces[] name String? } model Workspace { id String @id @default(cuid()) createdAt DateTime @default(now()) updatedAt DateTime @default(now()) users UsersOnWorkspaces[] name String } enum MembershipRole { ADMIN EDITOR VIEWER } model UsersOnWorkspaces { user User @relation(fields: [userId], references: [id]) userId String workspace Workspace @relation(fields: [workspaceId], references: [id]) workspaceId String role MembershipRole joinedAt DateTime @default(now()) @@id([userId, workspaceId]) } ``` ### Environment & setup - OS: M1 Mac - Database: MySQL - Node.js version: v14.18.0 ### Prisma Version ``` prisma : 3.11.0 @prisma/client : 3.11.0 Current platform : darwin-arm64 Query Engine (Node-API) : libquery-engine b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/libquery_engine-darwin-arm64.dylib.node) Migration Engine : migration-engine-cli b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/migration-engine-darwin-arm64) Introspection Engine : introspection-core b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/introspection-engine-darwin-arm64) Format Binary : prisma-fmt b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/prisma-fmt-darwin-arm64) Default Engines Hash : b371888aaf8f51357c7457d836b86d12da91658b Studio : 0.458.0 Preview Features : referentialIntegrity ```
1.0
Prisma error: Explicit many to many relationship when updating model - ### Bug description I am following the official doc to implement an explicit many to many relationship with an intermediate table that holds additional fields. The error occurs when I want to **update an existing user object**: ```ts const user = await prisma.user.create({ data: { email: "user@exmaple.com", name: "Max", }, }); await prisma.workspace.create({ data: { name: "workspace", users: { create: [ { role: "ADMIN", user: { connect: { id: user.id, }, }, }, ], }, }, }); // The error happens here. await prisma.user.update({ where: { id: user.id }, data: { name: "Bob", }, }); ``` The error says: ``` The column `issue-dev.User.id` does not exist in the current database. ``` (`issue-dev` is the DB name) So I guess it might be something related to MySQL, so I re-setup my DB with psql and it still has the error but with another error message: ``` The table `(not available)` does not exist in the current database. ``` I was using the explicit many-to-many relationship and it worked well when updating the existing user object. So I think it might be a Prisma bug. Thanks in advance for helping ### How to reproduce <!-- 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> 1. I made a repo with minimal reproducible script: https://github.com/renyuanz/prisma-mysql-m2m-update-issue 2. Clone the repo and install, and setup a MySQL 3. Run `npx ts-node index.ts` 4. See error ### Expected behavior _No response_ ### Prisma information <!-- Do not include your database credentials when sharing your Prisma schema! --> ```prisma model User { id String @id @default(cuid()) createdAt DateTime @default(now()) updatedAt DateTime @default(now()) email String @unique workspaces UsersOnWorkspaces[] name String? } model Workspace { id String @id @default(cuid()) createdAt DateTime @default(now()) updatedAt DateTime @default(now()) users UsersOnWorkspaces[] name String } enum MembershipRole { ADMIN EDITOR VIEWER } model UsersOnWorkspaces { user User @relation(fields: [userId], references: [id]) userId String workspace Workspace @relation(fields: [workspaceId], references: [id]) workspaceId String role MembershipRole joinedAt DateTime @default(now()) @@id([userId, workspaceId]) } ``` ### Environment & setup - OS: M1 Mac - Database: MySQL - Node.js version: v14.18.0 ### Prisma Version ``` prisma : 3.11.0 @prisma/client : 3.11.0 Current platform : darwin-arm64 Query Engine (Node-API) : libquery-engine b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/libquery_engine-darwin-arm64.dylib.node) Migration Engine : migration-engine-cli b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/migration-engine-darwin-arm64) Introspection Engine : introspection-core b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/introspection-engine-darwin-arm64) Format Binary : prisma-fmt b371888aaf8f51357c7457d836b86d12da91658b (at node_modules/@prisma/engines/prisma-fmt-darwin-arm64) Default Engines Hash : b371888aaf8f51357c7457d836b86d12da91658b Studio : 0.458.0 Preview Features : referentialIntegrity ```
process
prisma error explicit many to many relationship when updating model bug description i am following the official doc to implement an explicit many to many relationship with an intermediate table that holds additional fields the error occurs when i want to update an existing user object ts const user await prisma user create data email user exmaple com name max await prisma workspace create data name workspace users create role admin user connect id user id the error happens here await prisma user update where id user id data name bob the error says the column issue dev user id does not exist in the current database issue dev is the db name so i guess it might be something related to mysql so i re setup my db with psql and it still has the error but with another error message the table not available does not exist in the current database i was using the explicit many to many relationship and it worked well when updating the existing user object so i think it might be a prisma bug thanks in advance for helping how to reproduce go to change run see error i made a repo with minimal reproducible script clone the repo and install and setup a mysql run npx ts node index ts see error expected behavior no response prisma information prisma model user id string id default cuid createdat datetime default now updatedat datetime default now email string unique workspaces usersonworkspaces name string model workspace id string id default cuid createdat datetime default now updatedat datetime default now users usersonworkspaces name string enum membershiprole admin editor viewer model usersonworkspaces user user relation fields references userid string workspace workspace relation fields references workspaceid string role membershiprole joinedat datetime default now id environment setup os mac database mysql node js version prisma version prisma prisma client current platform darwin query engine node api libquery engine at node modules prisma engines libquery engine darwin dylib node migration engine migration engine cli at node modules prisma engines migration engine darwin introspection engine introspection core at node modules prisma engines introspection engine darwin format binary prisma fmt at node modules prisma engines prisma fmt darwin default engines hash studio preview features referentialintegrity
1
20,638
27,316,804,393
IssuesEvent
2023-02-24 16:18:57
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
docs: don't use the term "main pipeline"
devops/prod doc-bug Pri1 devops-cicd-process/tech
On the [template types and usages](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#use-other-repositories) page, the docs state: > You may also use `@self` to refer to the repository where the main pipeline was found. This comes right after the following Note: > If no ref is specified, the pipeline will default to using refs/heads/main. At this point, it's a bit confusing what the "main" pipeline is: the one with the entry point that's run, or the one in the "main" branch. The next sentence makes it clear, but maybe the first one could be reworded... > This is convenient for use in extends templates if you want to refer back to contents in the extending pipeline's repository. For example: --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66 * Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065 * Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops) * Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/templates.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
docs: don't use the term "main pipeline" - On the [template types and usages](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#use-other-repositories) page, the docs state: > You may also use `@self` to refer to the repository where the main pipeline was found. This comes right after the following Note: > If no ref is specified, the pipeline will default to using refs/heads/main. At this point, it's a bit confusing what the "main" pipeline is: the one with the entry point that's run, or the one in the "main" branch. The next sentence makes it clear, but maybe the first one could be reworded... > This is convenient for use in extends templates if you want to refer back to contents in the extending pipeline's repository. For example: --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66 * Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065 * Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops) * Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/templates.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
docs don t use the term main pipeline on the page the docs state you may also use self to refer to the repository where the main pipeline was found this comes right after the following note if no ref is specified the pipeline will default to using refs heads main at this point it s a bit confusing what the main pipeline is the one with the entry point that s run or the one in the main branch the next sentence makes it clear but maybe the first one could be reworded this is convenient for use in extends templates if you want to refer back to contents in the extending pipeline s repository for example document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bbdc version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
13,151
15,572,845,066
IssuesEvent
2021-03-17 07:43:47
bitpal/bitpal_umbrella
https://api.github.com/repos/bitpal/bitpal_umbrella
opened
Admin page
Payment processor
An admin login could give you access to things like: * View node status * Give others access to a store * Restart server or stop/start different parts
1.0
Admin page - An admin login could give you access to things like: * View node status * Give others access to a store * Restart server or stop/start different parts
process
admin page an admin login could give you access to things like view node status give others access to a store restart server or stop start different parts
1
15,429
19,619,637,352
IssuesEvent
2022-01-07 03:34:44
factor/factor
https://api.github.com/repos/factor/factor
closed
sigset_t definition that works and doesn't crash with posix_spawn interface
unix process-launcher cross-platform-difference libc aliens maybe-close-this c-types
I've tried to research this issue and after many days of debugging I give up but am surely overlooking something obvious. ## What doesn't work? Any definition for `sigset_t` that I can think of. in POSIX, `sigset_t` should be either an integer or a C object, but shouldn't itself be a pointer type until it needs to be a parameter by reference (not just as an out parameter, but as input). My platform: glibc 2.32 on Ubuntu Linux x86-64 Note that the failures happen in `posix_spawn_*` functions, never in `sigemptyset` o r `sigfillset`. Usually in `posix_spawn_setsigmask` and `setsigdefault`, less commonly in `getsigmask`/`getsigdefault`. First, we'll need these declarations: ```factor USING: alien.c-types alien.syntax alien.data kernel libc math ; IN: sigset-debug FUNCTION: int sigemptyset ( sigset_t* set ) FUNCTION: int sigfillset ( sigset_t* set ) FUNCTION: int posix_spawnattr_init ( posix_spawnattr_t* attr ) FUNCTION: int posix_spawnattr_getsigdefault ( posix_spawnattr_t* attr, sigset_t* sigdefault ) FUNCTION: int posix_spawnattr_setsigdefault ( posix_spawnattr_t* attr, sigset_t* sigdefault ) : <posix-spawnattr> ( -- attr ) f posix_spawnattr_t <ref> [ posix_spawnattr_init ] keep [ [ throw-errno ] unless-zero ] dip ; : get-sigdefault ( attr: posix_spawnattr_t out-param: sigset_t -- sigdefault ) [ posix_spawnattr_getsigdefault ] keep nip ; : set-sigdefault ( attr: posix_spawnattr_t sigdefault: sigset_t -- ) posix_spawnattr_setsigdefault drop ; ``` #### All of the following are broken in one way or another. **Definition**: plain unsigned long - Current definition in Factor main branch. ```factor TYPEDEF: ulong sigset_t <posix-spawnattr> dup 0 sigset_t <ref> dup sigfillset drop set-sigdefault ! works until GC 0 sigset_t <ref> dup sigemptyset drop get-sigdefault . gc --> B{ 16 32 0 0 0 0 0 0 } *crash* ``` `critical_error: Invalid header in base_size: 7f06e0fea470` **Broken**: Works great until a GC happens. Usually works the first time, and then calling `posix_spawnattr_setsigdefault` again with a different new `sigset_t` object will crash the VM and/or give `~pprint error: byte array~` in the Listener, which will crash if you inspect it (see error in spoiler). *If it works after GC*, and you try to later use the `posix_spawnattr_t` object to which a `ulong sigset_t` like this was applied as mask or default, Factor will crash then anyway. POSIX allows `sigset_t` to be an integer type, but this may be incompatible with the libc implementation of `posix_spawn_*` functions, which may expect it to be 128 bytes wide depending on how libc was compiled (see **Why** section). <details> <summary><code>integer-length-expected</code> Error in print-error!</summary> ```factor Error in print-error! Error in thread 70 (UI update, [ self ui-thread set-global update-ui-loop ]): integer-length-expected Error in print-error! (U) Quotation: [ set-namestack init-catchstack self quot>> call => stop ] (O) Word: update-ui-loop (O) Word: rethrow (O) Method: M\ object error-in-thread (U) Quotation: [ [ error-in-thread. nl print-error nl :c flush ] with-global => stop ] (U) Quotation: [ swap >n call => ndrop ] Word: with-variables (U) Quotation: [ error-in-thread. nl print-error => nl :c flush ] (O) Word: print-error (O) Word: describe (O) Word: (describe) (O) Word: simple-table. (O) Word: pprint-cell (O) Word: pprint-short (O) Word: pprint (O) Word: pprint*/predicate-engine (O) Word: method? 
(O) Word: word-prop (O) Method: M\ word props>> (U) Quotation: [ leaf-signal-handler => ] Word: leaf-signal-handler (U) Quotation: [ OBJ-CURRENT-THREAD special-object error-thread set-global current-continuation => error-continuation set-global [ original-error set-global ] [ rethrow ] bi ] ``` </details> <br> **Definition**: ulong pointer typedef and `malloc` ```factor TYPEDEF: ulong* sigset_t <posix-spawnattr> dup 128 malloc dup sigfillset drop set-sigdefault ! works until GC 128 malloc dup sigemptyset drop get-sigdefault . gc ``` `critical_error: Invalid header in slot_count: 7f147942c690` **Broken**: Works until GC. Works in C but probably UB for aliasing reasons. As shown in the **Why** section, this works the same as the struct with a pointer/array member because it's always treated as opaque. **Definition**: `struct` with ulong pointer member and `malloc` ```factor STRUCT: sigset_t { val ulong* } ; <posix-spawnattr> 128 malloc dup sigfillset drop sigset_t <struct-boa> set-sigdefault ! immediately crashes the UI; crashes on GC ``` `fatal_error: Memory protection fault during gc: 0x18` **Broken**: UI immediately cascading errors, VM will crash on GC in the console listener. Maybe I need to cast the `malloc` pointer to `ulong*`?. Also works in C. **Definition**: struct with `unsigned long [16]` member ```factor USING: literals layouts ; << CONSTANT: SIGSET_NUM_WORDS $[ 1024 8 1 cells * / ] >> STRUCT: sigset_t { val ulong[SIGSET_NUM_WORDS] } ; <posix-spawnattr> sigset_t <struct> dup sigfillset drop posix_spawnattr_setsigdefault sigset_t <struct> dup sigfillset drop posix_spawnattr_getsigdefault . gc ``` `critical_error: Invalid header in base_size: 7f06e0fea470` **Broken**: Works until GC. This definition mirrors how glibc and musl libc actually declare `sigset_t`. ## Why I think those should work and/or don't work In glibc and musl libc, `sigset_t` is declared like this: [glibc/sysdeps/unix/sysv/linux/bits/types/__sigset_t.h](https://code.woboq.org/userspace/glibc/sysdeps/unix/sysv/linux/bits/types/__sigset_t.h.html#__sigset_t): ```c #define SIGSET_NUM_WORDS (1024 / (8 * sizeof (unsigned long int))) typedef struct { unsigned long int __val[SIGSET_NUM_WORDS]; } sigset_t; ``` On x64 `SIGSET_NUM_WORDS` is 16; `sizeof (sigset_t) == 128` = 1 bit per possible signal. I would write this in Factor as: ```factor USING: literals layouts math classes.struct ; << CONSTANT: SIGSET_NUM_WORDS $[ 1024 8 1 cells * / ] >> STRUCT: sigset_t { val ulong[SIGSET_NUM_WORDS] } ; ``` The Factor definition should not be a reference type from the FFI's view, only being made into a pointer when a callee parameter is a pointer type. `sigfillset` and `sigemptyset` work with this declaration in C and Factor. Factor and C results are the same here: ```factor IN: scratchpad sigset_t <struct> [ sigemptyset ] keep . sigset_t <struct> [ sigfillset ] keep . S{ sigset_t { val ulong-array{ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 } } } S{ sigset_t { val ulong-array{ 18446744067267100671 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 } } } ``` That only the 1st 8 bytes are changed seems to imply that even though `sizeof (sigset_t) == 128` when I compile a program, my glibc was compiled with `sizeof (sigset_t) == 8` (or something else is going on). This is a good reason why the current definition of `sigset_t` in Factor is `TYPEDEF: ulong sigset_t`. Scalar `sigset_t` is POSIX-compliant but the issue arises when invoking functions like `posix_spawnattr_setsigdefault`, if they are compiled with `sizeof (sigset_t) == 128`. 
[glibc/posix/spawnattr_setdefault.c](https://code.woboq.org/userspace/glibc/posix/spawnattr_setdefault.c.html#posix_spawnattr_setsigdefault): ```c int posix_spawnattr_setsigdefault (posix_spawnattr_t *attr, const sigset_t *sigdefault) { memcpy (&attr->__sd, sigdefault, sizeof (sigset_t)); return 0; } ``` If glibc thinks `sigset_t` is 8 bytes then it should just read less than the input buffer, so the crashes are beyond me.
1.0
sigset_t definition that works and doesn't crash with posix_spawn interface - I've tried to research this issue and after many days of debugging I give up but am surely overlooking something obvious. ## What doesn't work? Any definition for `sigset_t` that I can think of. in POSIX, `sigset_t` should be either an integer or a C object, but shouldn't itself be a pointer type until it needs to be a parameter by reference (not just as an out parameter, but as input). My platform: glibc 2.32 on Ubuntu Linux x86-64 Note that the failures happen in `posix_spawn_*` functions, never in `sigemptyset` o r `sigfillset`. Usually in `posix_spawn_setsigmask` and `setsigdefault`, less commonly in `getsigmask`/`getsigdefault`. First, we'll need these declarations: ```factor USING: alien.c-types alien.syntax alien.data kernel libc math ; IN: sigset-debug FUNCTION: int sigemptyset ( sigset_t* set ) FUNCTION: int sigfillset ( sigset_t* set ) FUNCTION: int posix_spawnattr_init ( posix_spawnattr_t* attr ) FUNCTION: int posix_spawnattr_getsigdefault ( posix_spawnattr_t* attr, sigset_t* sigdefault ) FUNCTION: int posix_spawnattr_setsigdefault ( posix_spawnattr_t* attr, sigset_t* sigdefault ) : <posix-spawnattr> ( -- attr ) f posix_spawnattr_t <ref> [ posix_spawnattr_init ] keep [ [ throw-errno ] unless-zero ] dip ; : get-sigdefault ( attr: posix_spawnattr_t out-param: sigset_t -- sigdefault ) [ posix_spawnattr_getsigdefault ] keep nip ; : set-sigdefault ( attr: posix_spawnattr_t sigdefault: sigset_t -- ) posix_spawnattr_setsigdefault drop ; ``` #### All of the following are broken in one way or another. **Definition**: plain unsigned long - Current definition in Factor main branch. ```factor TYPEDEF: ulong sigset_t <posix-spawnattr> dup 0 sigset_t <ref> dup sigfillset drop set-sigdefault ! works until GC 0 sigset_t <ref> dup sigemptyset drop get-sigdefault . gc --> B{ 16 32 0 0 0 0 0 0 } *crash* ``` `critical_error: Invalid header in base_size: 7f06e0fea470` **Broken**: Works great until a GC happens. Usually works the first time, and then calling `posix_spawnattr_setsigdefault` again with a different new `sigset_t` object will crash the VM and/or give `~pprint error: byte array~` in the Listener, which will crash if you inspect it (see error in spoiler). *If it works after GC*, and you try to later use the `posix_spawnattr_t` object to which a `ulong sigset_t` like this was applied as mask or default, Factor will crash then anyway. POSIX allows `sigset_t` to be an integer type, but this may be incompatible with the libc implementation of `posix_spawn_*` functions, which may expect it to be 128 bytes wide depending on how libc was compiled (see **Why** section). <details> <summary><code>integer-length-expected</code> Error in print-error!</summary> ```factor Error in print-error! Error in thread 70 (UI update, [ self ui-thread set-global update-ui-loop ]): integer-length-expected Error in print-error! (U) Quotation: [ set-namestack init-catchstack self quot>> call => stop ] (O) Word: update-ui-loop (O) Word: rethrow (O) Method: M\ object error-in-thread (U) Quotation: [ [ error-in-thread. nl print-error nl :c flush ] with-global => stop ] (U) Quotation: [ swap >n call => ndrop ] Word: with-variables (U) Quotation: [ error-in-thread. nl print-error => nl :c flush ] (O) Word: print-error (O) Word: describe (O) Word: (describe) (O) Word: simple-table. (O) Word: pprint-cell (O) Word: pprint-short (O) Word: pprint (O) Word: pprint*/predicate-engine (O) Word: method? 
(O) Word: word-prop (O) Method: M\ word props>> (U) Quotation: [ leaf-signal-handler => ] Word: leaf-signal-handler (U) Quotation: [ OBJ-CURRENT-THREAD special-object error-thread set-global current-continuation => error-continuation set-global [ original-error set-global ] [ rethrow ] bi ] ``` </details> <br> **Definition**: ulong pointer typedef and `malloc` ```factor TYPEDEF: ulong* sigset_t <posix-spawnattr> dup 128 malloc dup sigfillset drop set-sigdefault ! works until GC 128 malloc dup sigemptyset drop get-sigdefault . gc ``` `critical_error: Invalid header in slot_count: 7f147942c690` **Broken**: Works until GC. Works in C but probably UB for aliasing reasons. As shown in the **Why** section, this works the same as the struct with a pointer/array member because it's always treated as opaque. **Definition**: `struct` with ulong pointer member and `malloc` ```factor STRUCT: sigset_t { val ulong* } ; <posix-spawnattr> 128 malloc dup sigfillset drop sigset_t <struct-boa> set-sigdefault ! immediately crashes the UI; crashes on GC ``` `fatal_error: Memory protection fault during gc: 0x18` **Broken**: UI immediately cascading errors, VM will crash on GC in the console listener. Maybe I need to cast the `malloc` pointer to `ulong*`?. Also works in C. **Definition**: struct with `unsigned long [16]` member ```factor USING: literals layouts ; << CONSTANT: SIGSET_NUM_WORDS $[ 1024 8 1 cells * / ] >> STRUCT: sigset_t { val ulong[SIGSET_NUM_WORDS] } ; <posix-spawnattr> sigset_t <struct> dup sigfillset drop posix_spawnattr_setsigdefault sigset_t <struct> dup sigfillset drop posix_spawnattr_getsigdefault . gc ``` `critical_error: Invalid header in base_size: 7f06e0fea470` **Broken**: Works until GC. This definition mirrors how glibc and musl libc actually declare `sigset_t`. ## Why I think those should work and/or don't work In glibc and musl libc, `sigset_t` is declared like this: [glibc/sysdeps/unix/sysv/linux/bits/types/__sigset_t.h](https://code.woboq.org/userspace/glibc/sysdeps/unix/sysv/linux/bits/types/__sigset_t.h.html#__sigset_t): ```c #define SIGSET_NUM_WORDS (1024 / (8 * sizeof (unsigned long int))) typedef struct { unsigned long int __val[SIGSET_NUM_WORDS]; } sigset_t; ``` On x64 `SIGSET_NUM_WORDS` is 16; `sizeof (sigset_t) == 128` = 1 bit per possible signal. I would write this in Factor as: ```factor USING: literals layouts math classes.struct ; << CONSTANT: SIGSET_NUM_WORDS $[ 1024 8 1 cells * / ] >> STRUCT: sigset_t { val ulong[SIGSET_NUM_WORDS] } ; ``` The Factor definition should not be a reference type from the FFI's view, only being made into a pointer when a callee parameter is a pointer type. `sigfillset` and `sigemptyset` work with this declaration in C and Factor. Factor and C results are the same here: ```factor IN: scratchpad sigset_t <struct> [ sigemptyset ] keep . sigset_t <struct> [ sigfillset ] keep . S{ sigset_t { val ulong-array{ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 } } } S{ sigset_t { val ulong-array{ 18446744067267100671 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 } } } ``` That only the 1st 8 bytes are changed seems to imply that even though `sizeof (sigset_t) == 128` when I compile a program, my glibc was compiled with `sizeof (sigset_t) == 8` (or something else is going on). This is a good reason why the current definition of `sigset_t` in Factor is `TYPEDEF: ulong sigset_t`. Scalar `sigset_t` is POSIX-compliant but the issue arises when invoking functions like `posix_spawnattr_setsigdefault`, if they are compiled with `sizeof (sigset_t) == 128`. 
[glibc/posix/spawnattr_setdefault.c](https://code.woboq.org/userspace/glibc/posix/spawnattr_setdefault.c.html#posix_spawnattr_setsigdefault): ```c int posix_spawnattr_setsigdefault (posix_spawnattr_t *attr, const sigset_t *sigdefault) { memcpy (&attr->__sd, sigdefault, sizeof (sigset_t)); return 0; } ``` If glibc thinks `sigset_t` is 8 bytes then it should just read less than the input buffer, so the crashes are beyond me.
process
sigset t definition that works and doesn t crash with posix spawn interface i ve tried to research this issue and after many days of debugging i give up but am surely overlooking something obvious what doesn t work any definition for sigset t that i can think of in posix sigset t should be either an integer or a c object but shouldn t itself be a pointer type until it needs to be a parameter by reference not just as an out parameter but as input my platform glibc on ubuntu linux note that the failures happen in posix spawn functions never in sigemptyset o r sigfillset usually in posix spawn setsigmask and setsigdefault less commonly in getsigmask getsigdefault first we ll need these declarations factor using alien c types alien syntax alien data kernel libc math in sigset debug function int sigemptyset sigset t set function int sigfillset sigset t set function int posix spawnattr init posix spawnattr t attr function int posix spawnattr getsigdefault posix spawnattr t attr sigset t sigdefault function int posix spawnattr setsigdefault posix spawnattr t attr sigset t sigdefault attr f posix spawnattr t posix spawnattr init keep unless zero dip get sigdefault attr posix spawnattr t out param sigset t sigdefault keep nip set sigdefault attr posix spawnattr t sigdefault sigset t posix spawnattr setsigdefault drop all of the following are broken in one way or another definition plain unsigned long current definition in factor main branch factor typedef ulong sigset t dup sigset t dup sigfillset drop set sigdefault works until gc sigset t dup sigemptyset drop get sigdefault gc b crash critical error invalid header in base size broken works great until a gc happens usually works the first time and then calling posix spawnattr setsigdefault again with a different new sigset t object will crash the vm and or give pprint error byte array in the listener which will crash if you inspect it see error in spoiler if it works after gc and you try to later use the posix spawnattr t object to which a ulong sigset t like this was applied as mask or default factor will crash then anyway posix allows sigset t to be an integer type but this may be incompatible with the libc implementation of posix spawn functions which may expect it to be bytes wide depending on how libc was compiled see why section integer length expected error in print error factor error in print error error in thread ui update integer length expected error in print error u quotation o word update ui loop o word rethrow o method m object error in thread u quotation with global stop u quotation word with variables u quotation o word print error o word describe o word describe o word simple table o word pprint cell o word pprint short o word pprint o word pprint predicate engine o word method o word word prop o method m word props u quotation word leaf signal handler u quotation obj current thread special object error thread set global current continuation error continuation set global bi definition ulong pointer typedef and malloc factor typedef ulong sigset t dup malloc dup sigfillset drop set sigdefault works until gc malloc dup sigemptyset drop get sigdefault gc critical error invalid header in slot count broken works until gc works in c but probably ub for aliasing reasons as shown in the why section this works the same as the struct with a pointer array member because it s always treated as opaque definition struct with ulong pointer member and malloc factor struct sigset t val ulong malloc dup sigfillset drop sigset t set sigdefault 
immediately crashes the ui crashes on gc fatal error memory protection fault during gc broken ui immediately cascading errors vm will crash on gc in the console listener maybe i need to cast the malloc pointer to ulong also works in c definition struct with unsigned long member factor using literals layouts struct sigset t val ulong sigset t dup sigfillset drop posix spawnattr setsigdefault sigset t dup sigfillset drop posix spawnattr getsigdefault gc critical error invalid header in base size broken works until gc this definition mirrors how glibc and musl libc actually declare sigset t why i think those should work and or don t work in glibc and musl libc sigset t is declared like this c define sigset num words sizeof unsigned long int typedef struct unsigned long int val sigset t on sigset num words is sizeof sigset t bit per possible signal i would write this in factor as factor using literals layouts math classes struct struct sigset t val ulong the factor definition should not be a reference type from the ffi s view only being made into a pointer when a callee parameter is a pointer type sigfillset and sigemptyset work with this declaration in c and factor factor and c results are the same here factor in scratchpad sigset t keep sigset t keep s sigset t val ulong array s sigset t val ulong array that only the bytes are changed seems to imply that even though sizeof sigset t when i compile a program my glibc was compiled with sizeof sigset t or something else is going on this is a good reason why the current definition of sigset t in factor is typedef ulong sigset t scalar sigset t is posix compliant but the issue arises when invoking functions like posix spawnattr setsigdefault if they are compiled with sizeof sigset t c int posix spawnattr setsigdefault posix spawnattr t attr const sigset t sigdefault memcpy attr sd sigdefault sizeof sigset t return if glibc thinks sigset t is bytes then it should just read less than the input buffer so the crashes are beyond me
1
9,008
12,121,699,022
IssuesEvent
2020-04-22 09:43:33
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Unexpected behaviour of "Merge Selected Feature" in case of error in "Merge Feature Attribute"
Bug Feedback Processing
Working with two consecutive linestrings (linestrings sharing one point) if the "Merge Feature Attribute" raise an exception for example due to a violation of a postgres trigger in one of the attribute fields of the new merged linestring, the two selected lines are deleted definitively even if one stop editing without saving. This is according to me a bug, because stop editing without saving should come back to the original situation while here the two lines are deleted and not recoverable. this is the WARNING of QGIS following the attached example ``` Could not commit changes to layer foo Errors: SUCCESS: 2 feature(s) deleted. ERROR: 1 feature(s) not added. Provider errors: PostGIS error while adding features: ERRORE: a=10 < b=15 CONTEXT: funzione PL/pgSQL delete_foo() riga 1 a RAISE ``` For what I can understand deleting the old linestrings and creating the new linestring are two processes completely separated, while the commit should come only after the second step. QGIS versions 3.4.14-Madeira and 2.18 (portable) [Merge_Selected_Feature.txt](https://github.com/qgis/QGIS/files/4511593/Merge_Selected_Feature.txt) Attached a very simplified example of a postgres table with two consecutive linestring and a trigger; it is enough to put a value of "a" lower then "b" in the "Merge Feature Attribute" and after the error close editing without saving.
1.0
Unexpected behaviour of "Merge Selected Feature" in case of error in "Merge Feature Attribute" - Working with two consecutive linestrings (linestrings sharing one point) if the "Merge Feature Attribute" raise an exception for example due to a violation of a postgres trigger in one of the attribute fields of the new merged linestring, the two selected lines are deleted definitively even if one stop editing without saving. This is according to me a bug, because stop editing without saving should come back to the original situation while here the two lines are deleted and not recoverable. this is the WARNING of QGIS following the attached example ``` Could not commit changes to layer foo Errors: SUCCESS: 2 feature(s) deleted. ERROR: 1 feature(s) not added. Provider errors: PostGIS error while adding features: ERRORE: a=10 < b=15 CONTEXT: funzione PL/pgSQL delete_foo() riga 1 a RAISE ``` For what I can understand deleting the old linestrings and creating the new linestring are two processes completely separated, while the commit should come only after the second step. QGIS versions 3.4.14-Madeira and 2.18 (portable) [Merge_Selected_Feature.txt](https://github.com/qgis/QGIS/files/4511593/Merge_Selected_Feature.txt) Attached a very simplified example of a postgres table with two consecutive linestring and a trigger; it is enough to put a value of "a" lower then "b" in the "Merge Feature Attribute" and after the error close editing without saving.
process
unexpected behaviour of merge selected feature in case of error in merge feature attribute working with two consecutive linestrings linestrings sharing one point if the merge feature attribute raise an exception for example due to a violation of a postgres trigger in one of the attribute fields of the new merged linestring the two selected lines are deleted definitively even if one stop editing without saving this is according to me a bug because stop editing without saving should come back to the original situation while here the two lines are deleted and not recoverable this is the warning of qgis following the attached example could not commit changes to layer foo errors success feature s deleted error feature s not added provider errors postgis error while adding features errore a b context funzione pl pgsql delete foo riga a raise for what i can understand deleting the old linestrings and creating the new linestring are two processes completely separated while the commit should come only after the second step qgis versions madeira and portable attached a very simplified example of a postgres table with two consecutive linestring and a trigger it is enough to put a value of a lower then b in the merge feature attribute and after the error close editing without saving
1
1,054
3,520,800,224
IssuesEvent
2016-01-12 22:22:39
matz-e/lobster
https://api.github.com/repos/matz-e/lobster
closed
Allow to specify per workflow resource requirements
enhancement processing
Current Lobster usage involves one contract, normally: we tell the pool what resources to request and place no other requirements on tasks but the number of cores they use. With more heterogenous projects involving different workflows, it would be better to specify the resources per workflow, and optionally let WQ lower/adjust the resource estimates as needed.
1.0
Allow to specify per workflow resource requirements - Current Lobster usage involves one contract, normally: we tell the pool what resources to request and place no other requirements on tasks but the number of cores they use. With more heterogenous projects involving different workflows, it would be better to specify the resources per workflow, and optionally let WQ lower/adjust the resource estimates as needed.
process
allow to specify per workflow resource requirements current lobster usage involves one contract normally we tell the pool what resources to request and place no other requirements on tasks but the number of cores they use with more heterogenous projects involving different workflows it would be better to specify the resources per workflow and optionally let wq lower adjust the resource estimates as needed
1
391,600
11,576,125,807
IssuesEvent
2020-02-21 11:13:02
luna/enso
https://api.github.com/repos/luna/enso
opened
Recent Projects List
Category: Backend Change: Non-Breaking Difficulty: Core Contributor Priority: Medium Type: Enhancement
### Summary We need the ability to list a user's recent projects to them to enable easy opening of projects. This task deals with implementing this functionality. ### Value The IDE will be able to list recent projects. ### Specification - [ ] Work out how to store information about recent projects persistently. - [ ] Implement the `project/listRecent` message. ### Acceptance Criteria & Test Cases - The above specification has been implemented. - The listed functionality has been rigorously tested.
1.0
Recent Projects List - ### Summary We need the ability to list a user's recent projects to them to enable easy opening of projects. This task deals with implementing this functionality. ### Value The IDE will be able to list recent projects. ### Specification - [ ] Work out how to store information about recent projects persistently. - [ ] Implement the `project/listRecent` message. ### Acceptance Criteria & Test Cases - The above specification has been implemented. - The listed functionality has been rigorously tested.
non_process
recent projects list summary we need the ability to list a user s recent projects to them to enable easy opening of projects this task deals with implementing this functionality value the ide will be able to list recent projects specification work out how to store information about recent projects persistently implement the project listrecent message acceptance criteria test cases the above specification has been implemented the listed functionality has been rigorously tested
0
20,168
10,616,786,274
IssuesEvent
2019-10-12 14:24:33
AGarlicMonkey/Clove
https://api.github.com/repos/AGarlicMonkey/Clove
opened
TextSystem creates new textures and sprites each frame
ECS UI performance rendering
The TextSystem creates new sprites / textures each frame before sending the data to the renderer. We should probably create these when the font class is created and store them in some kind of map to be accessed when needed
True
TextSystem creates new textures and sprites each frame - The TextSystem creates new sprites / textures each frame before sending the data to the renderer. We should probably create these when the font class is created and store them in some kind of map to be accessed when needed
non_process
textsystem creates new textures and sprites each frame the textsystem creates new sprites textures each frame before sending the data to the renderer we should probably create these when the font class is created and store them in some kind of map to be accessed when needed
0
8,123
11,303,528,883
IssuesEvent
2020-01-17 20:21:10
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
[TextControls] Break up TextControls subspecs/targets
[TextControls] type:Process
The various text controls (input chip view, text area, and text field) should exist in their own Cocoapods subspecs/Bazel targets. Acceptance criteria: The blaze target for text controls has been broken up. The various text controls are no longer all in the same target. This was filed as an internal issue. If you are a Googler, please visit [b/147442346](http://b/147442346) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/147442346](http://b/147442346)
1.0
[TextControls] Break up TextControls subspecs/targets - The various text controls (input chip view, text area, and text field) should exist in their own Cocoapods subspecs/Bazel targets. Acceptance criteria: The blaze target for text controls has been broken up. The various text controls are no longer all in the same target. This was filed as an internal issue. If you are a Googler, please visit [b/147442346](http://b/147442346) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/147442346](http://b/147442346)
process
break up textcontrols subspecs targets the various text controls input chip view text area and text field should exist in their own cocoapods subspecs bazel targets acceptance criteria the blaze target for text controls has been broken up the various text controls are no longer all in the same target this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
1
15,607
19,729,899,226
IssuesEvent
2022-01-14 00:37:42
nodejs/node
https://api.github.com/repos/nodejs/node
closed
`process.exit()` in ESM results in exit code 13
process esm
### Version v16.13.0 ### Platform _No response_ ### Subsystem _No response_ ### What steps will reproduce the bug? ```sh echo "process.exit()" > test.mjs node ./test.mjs # this exits with code 13 echo "process.exit()" > test.cjs node ./test.cjs # this exits with code 0 ``` ### How often does it reproduce? Is there a required condition? Seems to only happen when using ES modules. ### What is the expected behavior? The process should exit with code 0. ### What do you see instead? The process exits with code 13. ### Additional information This seems to have been introduced in #34640 to handle unfinished TLA, but in this instance I'm not even using TLA so it seems like a false positive.
1.0
`process.exit()` in ESM results in exit code 13 - ### Version v16.13.0 ### Platform _No response_ ### Subsystem _No response_ ### What steps will reproduce the bug? ```sh echo "process.exit()" > test.mjs node ./test.mjs # this exits with code 13 echo "process.exit()" > test.cjs node ./test.cjs # this exits with code 0 ``` ### How often does it reproduce? Is there a required condition? Seems to only happen when using ES modules. ### What is the expected behavior? The process should exit with code 0. ### What do you see instead? The process exits with code 13. ### Additional information This seems to have been introduced in #34640 to handle unfinished TLA, but in this instance I'm not even using TLA so it seems like a false positive.
process
process exit in esm results in exit code version platform no response subsystem no response what steps will reproduce the bug sh echo process exit test mjs node test mjs this exits with code echo process exit test cjs node test cjs this exits with code how often does it reproduce is there a required condition seems to only happen when using es modules what is the expected behavior the process should exit with code what do you see instead the process exits with code additional information this seems to have been introduced in to handle unfinished tla but in this instance i m not even using tla so it seems like a false positive
1
137,586
12,759,813,955
IssuesEvent
2020-06-29 06:46:28
alpaka-group/alpaka-workshop-slides
https://api.github.com/repos/alpaka-group/alpaka-workshop-slides
closed
Decide on license
documentation question
Before we upload the slides and the cheat sheet we should decide on a license for this repository. My apologies for being late with this but I haven't thought of this until now. Thoughts? @bussmann @psychocoderHPC @sbastrakov
1.0
Decide on license - Before we upload the slides and the cheat sheet we should decide on a license for this repository. My apologies for being late with this but I haven't thought of this until now. Thoughts? @bussmann @psychocoderHPC @sbastrakov
non_process
decide on license before we upload the slides and the cheat sheet we should decide on a license for this repository my apologies for being late with this but i haven t thought of this until now thoughts bussmann psychocoderhpc sbastrakov
0
135,179
18,672,449,801
IssuesEvent
2021-10-31 00:22:06
odpi/egeria
https://api.github.com/repos/odpi/egeria
closed
securedProperties Field Retrieved Through REST API
security no-issue-activity
The `securedProperties` field on org.odpi.openmetadata.frameworks.connectors.properties.ConnectionProperties.java comes back via the REST API. The documentation for this field says: > securedProperties - Protected properties for secure log on by connector to back end server. These > are protected properties that **can only be retrieved by privileged connector code.** But when I use the securedProperties during configuration, the fields come back via the REST API at this endpoint: `GET {{baseURL}}/open-metadata/admin-services/users/{{user}}/servers/{{server}}/configuration.` Experiencing this issue in the latest version of Egeria.
True
securedProperties Field Retrieved Through REST API - The `securedProperties` field on org.odpi.openmetadata.frameworks.connectors.properties.ConnectionProperties.java comes back via the REST API. The documentation for this field says: > securedProperties - Protected properties for secure log on by connector to back end server. These > are protected properties that **can only be retrieved by privileged connector code.** But when I use the securedProperties during configuration, the fields come back via the REST API at this endpoint: `GET {{baseURL}}/open-metadata/admin-services/users/{{user}}/servers/{{server}}/configuration.` Experiencing this issue in the latest version of Egeria.
non_process
securedproperties field retrieved through rest api the securedproperties field on org odpi openmetadata frameworks connectors properties connectionproperties java comes back via the rest api the documentation for this field says securedproperties protected properties for secure log on by connector to back end server these are protected properties that can only be retrieved by privileged connector code but when i use the securedproperties during configuration the fields come back via the rest api at this endpoint get baseurl open metadata admin services users user servers server configuration experiencing this issue in the latest version of egeria
0
21,093
28,045,023,763
IssuesEvent
2023-03-28 21:54:14
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Implement function to return orderable columns
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
We need a function to power this UI <img width="275" alt="image" src="https://user-images.githubusercontent.com/1455846/228049850-5a49d1b7-c356-40fc-837d-6349f8a54e9e.png">
1.0
[MLv2] Implement function to return orderable columns - We need a function to power this UI <img width="275" alt="image" src="https://user-images.githubusercontent.com/1455846/228049850-5a49d1b7-c356-40fc-837d-6349f8a54e9e.png">
process
implement function to return orderable columns we need a function to power this ui img width alt image src
1
235,284
7,736,173,873
IssuesEvent
2018-05-27 23:28:22
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
USER ISSUE: after bleeding_staging updated, i cant connect to my server
High Priority
**Version:** 0.7.5.0 beta staging-77a8710c **Steps to Reproduce:** (after update from previous bleeding version) Launch server. Try to connect to it. **Expected behavior:** Connected and playing. **Actual behavior:** Server crash, client deadlocked on rotating gear screen. Savefile. https://www.dropbox.com/s/c61x78xzfdbirmq/Game-2018-05-15-18.39.01.rar?dl=1 Server dump. https://www.dropbox.com/s/307xgr72l0gyc4k/ServerCrash%20NetException%2005170753.ecodmp?dl=1
1.0
USER ISSUE: after bleeding_staging updated, i cant connect to my server - **Version:** 0.7.5.0 beta staging-77a8710c **Steps to Reproduce:** (after update from previous bleeding version) Launch server. Try to connect to it. **Expected behavior:** Connected and playing. **Actual behavior:** Server crash, client deadlocked on rotating gear screen. Savefile. https://www.dropbox.com/s/c61x78xzfdbirmq/Game-2018-05-15-18.39.01.rar?dl=1 Server dump. https://www.dropbox.com/s/307xgr72l0gyc4k/ServerCrash%20NetException%2005170753.ecodmp?dl=1
non_process
user issue after bleeding staging updated i cant connect to my server version beta staging steps to reproduce after update from previous bleeding version launch server try to connect to it expected behavior connected and playing actual behavior server crash client deadlocked on rotating gear screen savefile server dump
0
227,934
25,135,398,485
IssuesEvent
2022-11-09 18:09:54
lukebrogan-mend/NuGetGallery
https://api.github.com/repos/lukebrogan-mend/NuGetGallery
opened
moment-2.18.1.js: 3 vulnerabilities (highest severity is: 7.5)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.18.1.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (moment version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-31129](https://www.mend.io/vulnerability-database/CVE-2022-31129) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-2.18.1.js | Direct | moment - 2.29.4 | &#10060; | | [CVE-2017-18214](https://www.mend.io/vulnerability-database/CVE-2017-18214) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-2.18.1.js | Direct | moment - 2.19.3 | &#10060; | | [CVE-2022-24785](https://www.mend.io/vulnerability-database/CVE-2022-24785) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-2.18.1.js | Direct | moment - 2.29.2 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31129</summary> ### Vulnerable Library - <b>moment-2.18.1.js</b></p> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.js** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> moment is a JavaScript date library for parsing, validating, manipulating, and formatting dates. Affected versions of moment were found to use an inefficient parsing algorithm. Specifically using string-to-date parsing in moment (more specifically rfc2822 parsing, which is tried by default) has quadratic (N^2) complexity on specific inputs. Users may notice a noticeable slowdown is observed with inputs above 10k characters. Users who pass user-provided strings without sanity length checks to moment constructor are vulnerable to (Re)DoS attacks. The problem is patched in 2.29.4, the patch can be applied to all affected versions with minimal tweaking. Users are advised to upgrade. Users unable to upgrade should consider limiting date lengths accepted from user input. <p>Publish Date: 2022-07-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31129>CVE-2022-31129</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g">https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g</a></p> <p>Release Date: 2022-07-06</p> <p>Fix Resolution: moment - 2.29.4</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-18214</summary> ### Vulnerable Library - <b>moment-2.18.1.js</b></p> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.js** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055. <p>Publish Date: 2018-03-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18214>CVE-2017-18214</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-446m-mv8f-q348">https://github.com/advisories/GHSA-446m-mv8f-q348</a></p> <p>Release Date: 2018-03-04</p> <p>Fix Resolution: moment - 2.19.3</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-24785</summary> ### Vulnerable Library - <b>moment-2.18.1.js</b></p> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.js** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Moment.js is a JavaScript date library for parsing, validating, manipulating, and formatting dates. A path traversal vulnerability impacts npm (server) users of Moment.js between versions 1.0.1 and 2.29.1, especially if a user-provided locale string is directly used to switch moment locale. This problem is patched in 2.29.2, and the patch can be applied to all affected versions. As a workaround, sanitize the user-provided locale name before passing it to Moment.js. <p>Publish Date: 2022-04-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24785>CVE-2022-24785</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4">https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4</a></p> <p>Release Date: 2022-04-04</p> <p>Fix Resolution: moment - 2.29.2</p> </p> <p></p> </details>
True
moment-2.18.1.js: 3 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.18.1.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (moment version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-31129](https://www.mend.io/vulnerability-database/CVE-2022-31129) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-2.18.1.js | Direct | moment - 2.29.4 | &#10060; | | [CVE-2017-18214](https://www.mend.io/vulnerability-database/CVE-2017-18214) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-2.18.1.js | Direct | moment - 2.19.3 | &#10060; | | [CVE-2022-24785](https://www.mend.io/vulnerability-database/CVE-2022-24785) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | moment-2.18.1.js | Direct | moment - 2.29.2 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31129</summary> ### Vulnerable Library - <b>moment-2.18.1.js</b></p> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.js** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> moment is a JavaScript date library for parsing, validating, manipulating, and formatting dates. Affected versions of moment were found to use an inefficient parsing algorithm. Specifically using string-to-date parsing in moment (more specifically rfc2822 parsing, which is tried by default) has quadratic (N^2) complexity on specific inputs. Users may notice a noticeable slowdown is observed with inputs above 10k characters. Users who pass user-provided strings without sanity length checks to moment constructor are vulnerable to (Re)DoS attacks. The problem is patched in 2.29.4, the patch can be applied to all affected versions with minimal tweaking. Users are advised to upgrade. Users unable to upgrade should consider limiting date lengths accepted from user input. <p>Publish Date: 2022-07-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31129>CVE-2022-31129</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g">https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g</a></p> <p>Release Date: 2022-07-06</p> <p>Fix Resolution: moment - 2.29.4</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-18214</summary> ### Vulnerable Library - <b>moment-2.18.1.js</b></p> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.js** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055. <p>Publish Date: 2018-03-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18214>CVE-2017-18214</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-446m-mv8f-q348">https://github.com/advisories/GHSA-446m-mv8f-q348</a></p> <p>Release Date: 2018-03-04</p> <p>Fix Resolution: moment - 2.19.3</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-24785</summary> ### Vulnerable Library - <b>moment-2.18.1.js</b></p> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.js</a></p> <p>Path to vulnerable library: /Scripts/gallery/moment-2.18.1.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.js** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Moment.js is a JavaScript date library for parsing, validating, manipulating, and formatting dates. A path traversal vulnerability impacts npm (server) users of Moment.js between versions 1.0.1 and 2.29.1, especially if a user-provided locale string is directly used to switch moment locale. This problem is patched in 2.29.2, and the patch can be applied to all affected versions. As a workaround, sanitize the user-provided locale name before passing it to Moment.js. <p>Publish Date: 2022-04-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24785>CVE-2022-24785</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4">https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4</a></p> <p>Release Date: 2022-04-04</p> <p>Fix Resolution: moment - 2.29.2</p> </p> <p></p> </details>
non_process
moment js vulnerabilities highest severity is vulnerable library moment js parse validate manipulate and display dates library home page a href path to vulnerable library scripts gallery moment js vulnerabilities cve severity cvss dependency type fixed in moment version remediation available high moment js direct moment high moment js direct moment high moment js direct moment details cve vulnerable library moment js parse validate manipulate and display dates library home page a href path to vulnerable library scripts gallery moment js dependency hierarchy x moment js vulnerable library found in base branch main vulnerability details moment is a javascript date library for parsing validating manipulating and formatting dates affected versions of moment were found to use an inefficient parsing algorithm specifically using string to date parsing in moment more specifically parsing which is tried by default has quadratic n complexity on specific inputs users may notice a noticeable slowdown is observed with inputs above characters users who pass user provided strings without sanity length checks to moment constructor are vulnerable to re dos attacks the problem is patched in the patch can be applied to all affected versions with minimal tweaking users are advised to upgrade users unable to upgrade should consider limiting date lengths accepted from user input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment cve vulnerable library moment js parse validate manipulate and display dates library home page a href path to vulnerable library scripts gallery moment js dependency hierarchy x moment js vulnerable library found in base branch main vulnerability details the moment module before for node js is prone to a regular expression denial of service via a crafted date string a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment cve vulnerable library moment js parse validate manipulate and display dates library home page a href path to vulnerable library scripts gallery moment js dependency hierarchy x moment js vulnerable library found in base branch main vulnerability details moment js is a javascript date library for parsing validating manipulating and formatting dates a path traversal vulnerability impacts npm server users of moment js between versions and especially if a user provided locale string is directly used to switch moment locale this problem is patched in and the patch can be applied to all affected versions as a workaround sanitize the user provided locale name before passing it to moment js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment
0
8,034
11,210,801,551
IssuesEvent
2020-01-06 14:05:46
kubeflow/kfctl
https://api.github.com/repos/kubeflow/kfctl
closed
Make kubeflow/kfctl source of truth for kfctl development; turn down kfctl in kubeflow/kubeflow
area/kfctl effort/5-days kind/process priority/p0
Opening this issue to track migrating development of kfctl from kubeflow/kubeflow to kubeflow/kfctl @kkasravi What are the next steps here? Are we waiting on E2E tests to be migrated to kubeflow/kfctl? /assign @kkasravi
1.0
Make kubeflow/kfctl source of truth for kfctl development; turn down kfctl in kubeflow/kubeflow - Opening this issue to track migrating development of kfctl from kubeflow/kubeflow to kubeflow/kfctl @kkasravi What are the next steps here? Are we waiting on E2E tests to be migrated to kubeflow/kfctl? /assign @kkasravi
process
make kubeflow kfctl source of truth for kfctl development turn down kfctl in kubeflow kubeflow opening this issue to track migrating development of kfctl from kubeflow kubeflow to kubeflow kfctl kkasravi what are the next steps here are we waiting on tests to be migrated to kubeflow kfctl assign kkasravi
1
18,070
24,084,415,240
IssuesEvent
2022-09-19 09:36:56
streamnative/flink
https://api.github.com/repos/streamnative/flink
closed
[SQL Connector] Consumer cannot consume from Pulsar topics written by SQL connector with avro schema
compute/data-processing
in writeToExplicitTableAndReadWithJsonSchemaUsingPulsarConsumer test case, we have a ``` org.apache.pulsar.client.api.PulsarClientException$IncompatibleSchemaException: Failed to subscribe whAmv with 1 partitions {"errorMsg":"Topic does not have schema to check","reqId":2453711855417924595, "remote":"localhost/127.0.0.1:50786", "local":"/127.0.0.1:50800"} ``` This is because when writing the data to Pulsar topic, we use a byte schema. The pulsar consumer will check the schema compatibility issues. We need to understand if this can changed using a different configuration entry (enable schema evolution feature)
1.0
[SQL Connector] Consumer cannot consume from Pulsar topics written by SQL connector with avro schema - in writeToExplicitTableAndReadWithJsonSchemaUsingPulsarConsumer test case, we have a ``` org.apache.pulsar.client.api.PulsarClientException$IncompatibleSchemaException: Failed to subscribe whAmv with 1 partitions {"errorMsg":"Topic does not have schema to check","reqId":2453711855417924595, "remote":"localhost/127.0.0.1:50786", "local":"/127.0.0.1:50800"} ``` This is because when writing the data to Pulsar topic, we use a byte schema. The pulsar consumer will check the schema compatibility issues. We need to understand if this can changed using a different configuration entry (enable schema evolution feature)
process
consumer cannot consume from pulsar topics written by sql connector with avro schema in writetoexplicittableandreadwithjsonschemausingpulsarconsumer test case we have a org apache pulsar client api pulsarclientexception incompatibleschemaexception failed to subscribe whamv with partitions errormsg topic does not have schema to check reqid remote localhost local this is because when writing the data to pulsar topic we use a byte schema the pulsar consumer will check the schema compatibility issues we need to understand if this can changed using a different configuration entry enable schema evolution feature
1
12,351
9,633,772,766
IssuesEvent
2019-05-15 19:30:48
aws/aws-cli
https://api.github.com/repos/aws/aws-cli
closed
aws rds-data execute-sql fails, urging to use ExecuteStatement API
closing-soon-if-no-response response-requested service-api
I first got this error when I was using AWS SDK in Node, but then I tried replicating it with the AWS CLI. I got the same result: ``` An error occurred (BadRequestException) when calling the ExecuteSql operation: This API is deprecated and not available. Use ExecuteStatement API instead ``` I know **Data API** are beta, but I couldn't find any reference of _Execute Statement API_ in the "AWS SDK for Node" documentation, nor in the `rds-data` _man_ page. **Region**: `eu-west-1`
1.0
aws rds-data execute-sql fails, urging to use ExecuteStatement API - I first got this error when I was using AWS SDK in Node, but then I tried replicating it with the AWS CLI. I got the same result: ``` An error occurred (BadRequestException) when calling the ExecuteSql operation: This API is deprecated and not available. Use ExecuteStatement API instead ``` I know **Data API** are beta, but I couldn't find any reference of _Execute Statement API_ in the "AWS SDK for Node" documentation, nor in the `rds-data` _man_ page. **Region**: `eu-west-1`
non_process
aws rds data execute sql fails urging to use executestatement api i first got this error when i was using aws sdk in node but then i tried replicating it with the aws cli i got the same result an error occurred badrequestexception when calling the executesql operation this api is deprecated and not available use executestatement api instead i know data api are beta but i couldn t find any reference of execute statement api in the aws sdk for node documentation nor in the rds data man page region eu west
0
250,706
27,111,222,334
IssuesEvent
2023-02-15 15:26:35
EliyaC/NodeGoat
https://api.github.com/repos/EliyaC/NodeGoat
closed
WS-2018-0103 (Medium) detected in stringstream-0.0.5.tgz - autoclosed
security vulnerability
## WS-2018-0103 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>stringstream-0.0.5.tgz</b></p></summary> <p>Encode and decode streams into string streams</p> <p>Library home page: <a href="https://registry.npmjs.org/stringstream/-/stringstream-0.0.5.tgz">https://registry.npmjs.org/stringstream/-/stringstream-0.0.5.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/npm/node_modules/request/node_modules/stringstream/package.json</p> <p> Dependency Hierarchy: - grunt-npm-install-0.3.1.tgz (Root Library) - npm-3.10.10.tgz - request-2.75.0.tgz - :x: **stringstream-0.0.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EliyaC/NodeGoat/commit/2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde">2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of stringstream are vulnerable to out-of-bounds read as it allocates uninitialized Buffers when number is passed in input stream on Node.js 4.x and below. <p>Publish Date: 2018-05-16 <p>URL: <a href=https://hackerone.com/reports/321670>WS-2018-0103</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/664">https://nodesecurity.io/advisories/664</a></p> <p>Release Date: 2018-01-27</p> <p>Fix Resolution: 0.0.6</p> </p> </details> <p></p>
True
WS-2018-0103 (Medium) detected in stringstream-0.0.5.tgz - autoclosed - ## WS-2018-0103 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>stringstream-0.0.5.tgz</b></p></summary> <p>Encode and decode streams into string streams</p> <p>Library home page: <a href="https://registry.npmjs.org/stringstream/-/stringstream-0.0.5.tgz">https://registry.npmjs.org/stringstream/-/stringstream-0.0.5.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/npm/node_modules/request/node_modules/stringstream/package.json</p> <p> Dependency Hierarchy: - grunt-npm-install-0.3.1.tgz (Root Library) - npm-3.10.10.tgz - request-2.75.0.tgz - :x: **stringstream-0.0.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EliyaC/NodeGoat/commit/2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde">2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of stringstream are vulnerable to out-of-bounds read as it allocates uninitialized Buffers when number is passed in input stream on Node.js 4.x and below. <p>Publish Date: 2018-05-16 <p>URL: <a href=https://hackerone.com/reports/321670>WS-2018-0103</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/664">https://nodesecurity.io/advisories/664</a></p> <p>Release Date: 2018-01-27</p> <p>Fix Resolution: 0.0.6</p> </p> </details> <p></p>
non_process
ws medium detected in stringstream tgz autoclosed ws medium severity vulnerability vulnerable library stringstream tgz encode and decode streams into string streams library home page a href path to dependency file package json path to vulnerable library node modules npm node modules request node modules stringstream package json dependency hierarchy grunt npm install tgz root library npm tgz request tgz x stringstream tgz vulnerable library found in head commit a href found in base branch master vulnerability details all versions of stringstream are vulnerable to out of bounds read as it allocates uninitialized buffers when number is passed in input stream on node js x and below publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
16,648
21,712,325,087
IssuesEvent
2022-05-10 14:48:24
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
dbgenerated function cause Null constraint error in MySQL
bug/2-confirmed kind/bug process/candidate topic: mysql team/client topic: dbgenerated topic: uuid size/s
### Bug description Some dbgenerated functions work and others produce errors. The error is not on the database side, but on Prisma Client side. ### How to reproduce ✅ no error: ```prisma model A { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid()))")) @db.Binary(16) } ``` ❌ error: ```prisma model B { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid(), 1))")) @db.Binary(16) } ``` error detail: ``` Null constraint violation on the (not available) code: 'P2011', clientVersion: '3.12.0', meta: { constraint: null } ``` ### Expected behavior Because the value is generated on the db side, there is no need to detect errors in Prisma Client. ### Prisma information ```prisma datasource db { provider = "mysql" url = env("DATABASE_URL") referentialIntegrity = "prisma" } generator client { provider = "prisma-client-js" previewFeatures = ["referentialIntegrity", "fullTextIndex", "fullTextSearch", "interactiveTransactions", "filterJson"] } model A { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid()))")) @db.Binary(16) } model B { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid(), 1))")) @db.Binary(16) } ``` ```ts export const test = async () => { await prisma.a.create({ data: {} }) await prisma.b.create({ data: {} }) } ``` ### Environment & setup - OS: Mac OS - Database: MySQL - Node.js version: v16.14.0 ### Prisma Version ``` prisma : 3.12.0 @prisma/client : 3.12.0 Current platform : darwin Query Engine (Node-API) : libquery-engine 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node) Migration Engine : migration-engine-cli 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/migration-engine-darwin) Introspection Engine : introspection-core 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/introspection-engine-darwin) Format Binary : prisma-fmt 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/prisma-fmt-darwin) Default Engines Hash : 22b822189f46ef0dc5c5b503368d1bee01213980 Studio : 0.459.0 Preview Features : referentialIntegrity, fullTextIndex, fullTextSearch, interactiveTransactions, filterJson ```
1.0
dbgenerated function cause Null constraint error in MySQL - ### Bug description Some dbgenerated functions work and others produce errors. The error is not on the database side, but on Prisma Client side. ### How to reproduce ✅ no error: ```prisma model A { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid()))")) @db.Binary(16) } ``` ❌ error: ```prisma model B { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid(), 1))")) @db.Binary(16) } ``` error detail: ``` Null constraint violation on the (not available) code: 'P2011', clientVersion: '3.12.0', meta: { constraint: null } ``` ### Expected behavior Because the value is generated on the db side, there is no need to detect errors in Prisma Client. ### Prisma information ```prisma datasource db { provider = "mysql" url = env("DATABASE_URL") referentialIntegrity = "prisma" } generator client { provider = "prisma-client-js" previewFeatures = ["referentialIntegrity", "fullTextIndex", "fullTextSearch", "interactiveTransactions", "filterJson"] } model A { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid()))")) @db.Binary(16) } model B { id Bytes @id @default(dbgenerated("(uuid_to_bin(uuid(), 1))")) @db.Binary(16) } ``` ```ts export const test = async () => { await prisma.a.create({ data: {} }) await prisma.b.create({ data: {} }) } ``` ### Environment & setup - OS: Mac OS - Database: MySQL - Node.js version: v16.14.0 ### Prisma Version ``` prisma : 3.12.0 @prisma/client : 3.12.0 Current platform : darwin Query Engine (Node-API) : libquery-engine 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node) Migration Engine : migration-engine-cli 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/migration-engine-darwin) Introspection Engine : introspection-core 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/introspection-engine-darwin) Format Binary : prisma-fmt 22b822189f46ef0dc5c5b503368d1bee01213980 (at node_modules/@prisma/engines/prisma-fmt-darwin) Default Engines Hash : 22b822189f46ef0dc5c5b503368d1bee01213980 Studio : 0.459.0 Preview Features : referentialIntegrity, fullTextIndex, fullTextSearch, interactiveTransactions, filterJson ```
process
dbgenerated function cause null constraint error in mysql bug description some dbgenerated functions work and others produce errors the error is not on the database side but on prisma client side how to reproduce ✅ no error prisma model a id bytes id default dbgenerated uuid to bin uuid db binary ❌ error prisma model b id bytes id default dbgenerated uuid to bin uuid db binary error detail null constraint violation on the not available code clientversion meta constraint null expected behavior because the value is generated on the db side there is no need to detect errors in prisma client prisma information prisma datasource db provider mysql url env database url referentialintegrity prisma generator client provider prisma client js previewfeatures model a id bytes id default dbgenerated uuid to bin uuid db binary model b id bytes id default dbgenerated uuid to bin uuid db binary ts export const test async await prisma a create data await prisma b create data environment setup os mac os database mysql node js version prisma version prisma prisma client current platform darwin query engine node api libquery engine at node modules prisma engines libquery engine darwin dylib node migration engine migration engine cli at node modules prisma engines migration engine darwin introspection engine introspection core at node modules prisma engines introspection engine darwin format binary prisma fmt at node modules prisma engines prisma fmt darwin default engines hash studio preview features referentialintegrity fulltextindex fulltextsearch interactivetransactions filterjson
1
20,490
27,146,979,126
IssuesEvent
2023-02-16 20:49:08
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Docs on how to reference output variables from a job which has a matrix strategy
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
In the "Use output variables from tasks" section, you could potentially add information about how to reference jobs that have a strategy matrix associated with them. What happens to their job names? You do you reference them in the `dependencies.JOB...` line? I currently have a job called "Build" with matrix entries called "linx", "windows" and "macos" and referencing output variables from those jobs doesn't seem to work in any way that I've tried. Should they be: ``` windowsStatus: $[ dependencies.Build.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies.Build windows.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies.Build_windows.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies.windows.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies['Build windows'].outputs['SetJobStatus.x64windowsStatus'] ] ``` I feel like I've tried a number of combinations and had no luck. I'm probably doing something foolish but it would be good to have it more clearly documented. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md) * Service: **azure-devops-pipelines** * Sub-service: **azure-devops-pipelines-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Docs on how to reference output variables from a job which has a matrix strategy - In the "Use output variables from tasks" section, you could potentially add information about how to reference jobs that have a strategy matrix associated with them. What happens to their job names? You do you reference them in the `dependencies.JOB...` line? I currently have a job called "Build" with matrix entries called "linx", "windows" and "macos" and referencing output variables from those jobs doesn't seem to work in any way that I've tried. Should they be: ``` windowsStatus: $[ dependencies.Build.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies.Build windows.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies.Build_windows.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies.windows.outputs['SetJobStatus.x64windowsStatus'] ] windowsStatus: $[ dependencies['Build windows'].outputs['SetJobStatus.x64windowsStatus'] ] ``` I feel like I've tried a number of combinations and had no luck. I'm probably doing something foolish but it would be good to have it more clearly documented. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md) * Service: **azure-devops-pipelines** * Sub-service: **azure-devops-pipelines-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
docs on how to reference output variables from a job which has a matrix strategy in the use output variables from tasks section you could potentially add information about how to reference jobs that have a strategy matrix associated with them what happens to their job names you do you reference them in the dependencies job line i currently have a job called build with matrix entries called linx windows and macos and referencing output variables from those jobs doesn t seem to work in any way that i ve tried should they be windowsstatus windowsstatus windowsstatus windowsstatus windowsstatus outputs i feel like i ve tried a number of combinations and had no luck i m probably doing something foolish but it would be good to have it more clearly documented document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id bcdb content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
1
21,465
29,501,363,060
IssuesEvent
2023-06-02 22:18:28
mrdoob/three.js
https://api.github.com/repos/mrdoob/three.js
closed
CopyShader opacity does not work as expected
Suggestion Examples Post-processing
##### Description of the problem In examples/js/shaders/CopyShader.js, there is an issue with the fragment shader: ``` "void main() {", " vec4 texel = texture2D( tDiffuse, vUv );", " gl_FragColor = opacity * texel;", "}" ``` Because `texel` is multiplied by `opacity` directly, all elements of the color are scaled by the opacity value. This means on a black background, the opacity is effectively applied twice. On any other background, the resulting color will end up darker than expect. Proposed Solution: Multiply only the alpha component by opacity: ``` "void main() {", " vec4 texel = texture2D( tDiffuse, vUv );", " texel.a = opacity * texel.a;", " gl_FragColor = texel;", "}" ``` ##### Three.js version - [x] Dev - [ ] r117 - [ ] ... ##### Browser - [x] All of them - [ ] Chrome - [ ] Firefox - [ ] Internet Explorer ##### OS - [x] All of them - [ ] Windows - [ ] macOS - [ ] Linux - [ ] Android - [ ] iOS ##### Hardware Requirements (graphics card, VR Device, ...)
1.0
CopyShader opacity does not work as expected - ##### Description of the problem In examples/js/shaders/CopyShader.js, there is an issue with the fragment shader: ``` "void main() {", " vec4 texel = texture2D( tDiffuse, vUv );", " gl_FragColor = opacity * texel;", "}" ``` Because `texel` is multiplied by `opacity` directly, all elements of the color are scaled by the opacity value. This means on a black background, the opacity is effectively applied twice. On any other background, the resulting color will end up darker than expect. Proposed Solution: Multiply only the alpha component by opacity: ``` "void main() {", " vec4 texel = texture2D( tDiffuse, vUv );", " texel.a = opacity * texel.a;", " gl_FragColor = texel;", "}" ``` ##### Three.js version - [x] Dev - [ ] r117 - [ ] ... ##### Browser - [x] All of them - [ ] Chrome - [ ] Firefox - [ ] Internet Explorer ##### OS - [x] All of them - [ ] Windows - [ ] macOS - [ ] Linux - [ ] Android - [ ] iOS ##### Hardware Requirements (graphics card, VR Device, ...)
process
copyshader opacity does not work as expected description of the problem in examples js shaders copyshader js there is an issue with the fragment shader void main texel tdiffuse vuv gl fragcolor opacity texel because texel is multiplied by opacity directly all elements of the color are scaled by the opacity value this means on a black background the opacity is effectively applied twice on any other background the resulting color will end up darker than expect proposed solution multiply only the alpha component by opacity void main texel tdiffuse vuv texel a opacity texel a gl fragcolor texel three js version dev browser all of them chrome firefox internet explorer os all of them windows macos linux android ios hardware requirements graphics card vr device
1
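The CopyShader record above argues that pre-multiplying every colour channel by `opacity` effectively applies the opacity twice on a black background. That claim can be checked numerically; the sketch below is plain Python written for this note (it is not code from three.js), and it assumes the pass is composited with ordinary source-over alpha blending.
```
def blend_over(src_rgb, src_a, dst_rgb):
    # Standard source-over compositing: result = src * a + dst * (1 - a)
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

texel_rgb, texel_a, opacity = (0.8, 0.8, 0.8), 1.0, 0.5
black = (0.0, 0.0, 0.0)

# Current shader: every channel, including alpha, is scaled by opacity first.
premultiplied = tuple(c * opacity for c in texel_rgb)
print(blend_over(premultiplied, texel_a * opacity, black))  # (0.2, 0.2, 0.2): opacity applied twice

# Proposed fix from the issue: scale only the alpha component.
print(blend_over(texel_rgb, texel_a * opacity, black))      # (0.4, 0.4, 0.4): opacity applied once
```
On a non-black background the first form also comes out darker than the second, which matches the "darker than expected" wording in the record.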
98,358
11,071,886,691
IssuesEvent
2019-12-12 09:12:03
beeping-io/beeping
https://api.github.com/repos/beeping-io/beeping
closed
Check Review Section
documentation
Explain how to Pull request the changes: - Commit - Push - Pull request
1.0
Check Review Section - Explain how to Pull request the changes: - Commit - Push - Pull request
non_process
check review section explain how to pull request the changes commit push pull request
0
41,753
10,591,932,794
IssuesEvent
2019-10-09 12:03:21
vector-im/riot-web
https://api.github.com/repos/vector-im/riot-web
opened
Incorrect warnings on registration with no IS configured
bug defect phase:2 privacy privacy-sprint
Outdated warning text in the registration flow saying you can't reset password. TODO: Add screenshot
1.0
Incorrect warnings on registration with no IS configured - Outdated warning text in the registration flow saying you can't reset password. TODO: Add screenshot
non_process
incorrect warnings on registration with no is configured outdated warning text in the registration flow saying you can t reset password todo add screenshot
0
21,939
30,446,798,942
IssuesEvent
2023-07-15 19:28:36
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
pyutils 0.0.1b2 has 2 GuardDog issues
guarddog typosquatting silent-process-execution
https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b2", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils", "silent-process-execution": [ { "location": "pyutils-0.0.1b2/src/pyutils/exec_utils.py:200", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp_uwg8ssv/pyutils" } }```
1.0
pyutils 0.0.1b2 has 2 GuardDog issues - https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b2", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils", "silent-process-execution": [ { "location": "pyutils-0.0.1b2/src/pyutils/exec_utils.py:200", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp_uwg8ssv/pyutils" } }```
process
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt python utils pytils silent process execution location pyutils src pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmp pyutils
1
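For readers unfamiliar with GuardDog's "silent-process-execution" rule quoted in the record above, the Python sketch below reproduces the general shape of the flagged code: an external binary is launched with stdin, stdout and stderr all redirected to /dev/null, so nothing surfaces to the user. The binary path is a harmless placeholder chosen for this note, not something taken from the scanned package.
```
import subprocess

# Launch an external program while discarding all of its standard streams.
# This is the pattern the heuristic flags; by itself it is not necessarily malicious.
proc = subprocess.Popen(
    ["/bin/true"],                # placeholder binary, assumed to exist on the host
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
proc.wait()
```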
739
3,214,324,979
IssuesEvent
2015-10-07 00:50:41
broadinstitute/hellbender
https://api.github.com/repos/broadinstitute/hellbender
closed
Profile and optimize the ReadsPreprocessingPipeline
Dataflow DataflowPreprocessingPipeline profiling
Should probably be started only after the tests in https://github.com/broadinstitute/hellbender/issues/695 are in place.
1.0
Profile and optimize the ReadsPreprocessingPipeline - Should probably be started only after the tests in https://github.com/broadinstitute/hellbender/issues/695 are in place.
process
profile and optimize the readspreprocessingpipeline should probably be started only after the tests in are in place
1
487,213
14,020,742,995
IssuesEvent
2020-10-29 20:09:20
AY2021S1-CS2103T-W16-3/tp
https://api.github.com/repos/AY2021S1-CS2103T-W16-3/tp
closed
Disable scrolling of tabs by arrow keys
priority.medium :2nd_place_medal: type.bug :bug:
In order to get the user guide tab to be right aligned, its behaviour had to be modified. This does not work too well when scrolling tabs with arrow keys as the user guide tab is disabled when it is not selected. As such, the scrolling never reaches the user guide tab.
1.0
Disable scrolling of tabs by arrow keys - In order to get the user guide tab to be right aligned, its behaviour had to be modified. This does not work too well when scrolling tabs with arrow keys as the user guide tab is disabled when it is not selected. As such, the scrolling never reaches the user guide tab.
non_process
disable scrolling of tabs by arrow keys in order to get the user guide tab to be right aligned its behaviour had to be modified this does not work too well when scrolling tabs with arrow keys as the user guide tab is disabled when it is not selected as such the scrolling never reaches the user guide tab
0
20,448
27,106,635,008
IssuesEvent
2023-02-15 12:36:26
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Obsoletion: abortive mitotic cell cycle GO:0033277
cell cycle and DNA processes obsoletion in progress ready
Please taxon restrict abortive mitotic cell cycle GO:0033277 There are only 2 EXP annotations based on: http://europepmc.org/article/MED/23509158 entitled "Megakaryocyte-specific deletion of the protein-tyrosine phosphatases Shp1 and Shp2 causes abnormal megakaryocyte development, platelet production, and function." so this might even be a phenotype?
1.0
Obsoletion: abortive mitotic cell cycle GO:0033277 - Please taxon restrict abortive mitotic cell cycle GO:0033277 There are only 2 EXP annotations based on: http://europepmc.org/article/MED/23509158 entitled "Megakaryocyte-specific deletion of the protein-tyrosine phosphatases Shp1 and Shp2 causes abnormal megakaryocyte development, platelet production, and function." so this might even be a phenotype?
process
obsoletion abortive mitotic cell cycle go please taxon restrict abortive mitotic cell cycle go there are only exp annotation based on entitled megakaryocyte specific deletion of the protein tyrosine phosphatases and causes abnormal megakaryocyte development platelet production and function so this might even be a phenotype
1
11,430
14,248,212,055
IssuesEvent
2020-11-19 12:36:59
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
Batch Resolve Coprocessor Locks
PCP-S1 difficulty/medium sig/coprocessor status/help-wanted
## Description Currently Coprocessor returns error immediately as soon as it meets a lock during scanning. When there are multiple locks in the scanning range this may result in high latency due to resolving locks one by one and there are RTTs for each resolve operation. This task is to allow Coprocessor returning multiple locks all at once when possible so that multiple locks can be resolved. Thus performance can be improved when there are heavy conflicts. Notice that you need to modify both TiKV and TiDB code in order to make them cooperate together. ## Difficulty - Medium ## Score - 2100 ## Mentor(s) - @loong - @busyjay - @breeswish ## Recommended Skills - Rust and Go language - TiKV transaction model
1.0
Batch Resolve Coprocessor Locks - ## Description Currently Coprocessor returns error immediately as soon as it meets a lock during scanning. When there are multiple locks in the scanning range this may result in high latency due to resolving locks one by one and there are RTTs for each resolve operation. This task is to allow Coprocessor returning multiple locks all at once when possible so that multiple locks can be resolved. Thus performance can be improved when there are heavy conflicts. Notice that you need to modify both TiKV and TiDB code in order to make them cooperate together. ## Difficulty - Medium ## Score - 2100 ## Mentor(s) - @loong - @busyjay - @breeswish ## Recommended Skills - Rust and Go language - TiKV transaction model
process
batch resolve coprocessor locks description currently coprocessor returns error immediately as soon as it meets a lock during scanning when there are multiple locks in the scanning range this may result in high latency due to resolving locks one by one and there are rtts for each resolve operation this task is to allow coprocessor returning multiple locks all at once when possible so that multiple locks can be resolved thus performance can be improved when there are heavy conflicts notice that you need to modify both tikv and tidb code in order to make them cooperate together difficulty medium score mentor s loong busyjay breeswish recommended skills rust and go language tikv transaction model
1
228,494
17,464,237,041
IssuesEvent
2021-08-06 14:39:40
fyysikkokilta/Plaseerausbotti
https://api.github.com/repos/fyysikkokilta/Plaseerausbotti
opened
Add docs/guide on usage
documentation
Can be either in just the [README.md](README.md) or do expanded Sphinx docs to `gh-pages`
1.0
Add docs/guide on usage - Can be either in just the [README.md](README.md) or do expanded Sphinx docs to `gh-pages`
non_process
add docs guide on usage can be either in just the readme md or do expanded sphinx docs to gh pages
0
5,097
7,879,040,203
IssuesEvent
2018-06-26 12:15:53
RustyPanda/zoobot
https://api.github.com/repos/RustyPanda/zoobot
opened
Odd batch loading behaviour - spiky 'fraction_of_n_full'
Preprocessing bug
<img width="364" alt="screen shot 2018-06-26 at 13 11 03" src="https://user-images.githubusercontent.com/7740526/41910730-725ee940-7942-11e8-93f5-23653f343532.png"> <img width="365" alt="screen shot 2018-06-26 at 13 12 15" src="https://user-images.githubusercontent.com/7740526/41910770-915ef95c-7942-11e8-8a33-cde42a644cf8.png"> Stratified sample includes a mysterious debug logger that shows very odd behaviour - sharp semi-periodic spikes. I should find out what's going on! Perhaps it's related to the start or end of a batch, or a variable batch size. The names are fraction_of_32_full and fraction_of_12800_full. 100 * batch size of 128? Periodic, but only sometimes: For augs_both, happens every 600 steps For augs, happens at 200, 1.05, 1.6, then stable May also be related to the varied training loss and tensorboard silent keyerrors.
1.0
Odd batch loading behaviour - spiky 'fraction_of_n_full' - <img width="364" alt="screen shot 2018-06-26 at 13 11 03" src="https://user-images.githubusercontent.com/7740526/41910730-725ee940-7942-11e8-93f5-23653f343532.png"> <img width="365" alt="screen shot 2018-06-26 at 13 12 15" src="https://user-images.githubusercontent.com/7740526/41910770-915ef95c-7942-11e8-8a33-cde42a644cf8.png"> Stratified sample includes a mysterious debug logger that shows very odd behaviour - sharp semi-periodic spikes. I should find out what's going on! Perhaps it's related to the start or end of a batch, or a variable batch size. The names are fraction_of_32_full and fraction_of_12800_full. 100 * batch size of 128? Periodic, but only sometimes: For augs_both, happens every 600 steps For augs, happens at 200, 1.05, 1.6, then stable May also be related to the varied training loss and tensorboard silent keyerrors.
process
odd batch loading behaviour spiky fraction of n full img width alt screen shot at src img width alt screen shot at src stratified sample includes a mysterious debug logger that shows very odd behaviour sharp semi periodic spikes i should find out what s going on perhaps it s related to the start or end of a batch or a variable batch size the names are fraction of full and fraction of full batch size of periodic but only sometimes for augs both happens every steps for augs happens at then stable may also be related to the varied training loss and tensorboard silent keyerrors
1
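The record above suspects the spikes are tied to "the start or end of a batch, or a variable batch size". As a purely hypothetical illustration (this is not code from the zoobot repository), the Python sketch below shows how a logged "fraction full" metric drops sharply once per pass over the data whenever the dataset size is not a multiple of the batch size, which would produce exactly the kind of semi-periodic spikes described.
```
def fraction_full(dataset_size, batch_size):
    # Yield len(batch) / batch_size for each batch of a single pass over the data.
    for start in range(0, dataset_size, batch_size):
        batch = range(start, min(start + batch_size, dataset_size))
        yield len(batch) / batch_size

# 100 examples with batch size 32: the final batch holds only 4 examples.
print(list(fraction_full(dataset_size=100, batch_size=32)))
# [1.0, 1.0, 1.0, 0.125]
```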
15,795
19,986,267,374
IssuesEvent
2022-01-30 18:03:25
processing/processing4
https://api.github.com/repos/processing/processing4
closed
mixing active and static mode throws the "wrong" error
help wanted preprocessor
## Description In previous versions of Processing, when mixing active and static modes the error Processing would throw was: **It looks like you're mixing "active" and "static" modes.** In Processing 4, the error becomes a lot more cryptic: **Syntax Error - Missing operator, semicolon, or '}' near 'setup'?** As this is a mistake a lot of first learners make, it would be really helpful to get the old error message back. ## Steps to Reproduce ``` size(100, 100); void setup() { } ```
1.0
mixing active and static mode throws the "wrong" error - ## Description In previous versions of Processing, when mixing active and static modes the error Processing would throw was: **It looks like you're mixing "active" and "static" modes.** In Processing 4, the error becomes a lot more cryptic: **Syntax Error - Missing operator, semicolon, or '}' near 'setup'?** As this is a mistake a lot of first learners make, it would be really helpful to get the old error message back. ## Steps to Reproduce ``` size(100, 100); void setup() { } ```
process
mixing active and static mode throws the wrong error description in previous versions of processing when mixing active and static modes the error processing would throw was it looks like you re mixing active and static modes in processing the error becomes a lot more cryptic syntax error missing operator semicolon or near setup as this is a mistake a lot of first learners make it would be really helpful to get the old error message back steps to reproduce size void setup
1
21,433
29,419,073,921
IssuesEvent
2023-05-31 01:15:14
global-healthy-liveable-cities/global-indicators
https://api.github.com/repos/global-healthy-liveable-cities/global-indicators
closed
Provide example for user of GHS Degree of Urbanisation (2023) data as alternative to UCDB (2019)
enhancement user process to document
Currently our examples use the [GHS Urban Centres Database](https://ghsl.jrc.ec.europa.eu/download.php?ds=ucdb) to identify urban regions, which was available when we were developing our tool in 2020. However, a newer dataset is now available for [Degree of Urbanisation (2023)](https://ghsl.jrc.ec.europa.eu/download.php?ds=DUC) with an accompanying [manual describing its usage to support international comparisons (2021)](https://op.europa.eu/en/publication-detail/-/publication/484c6825-8540-11eb-af5d-01aa75ed71a1/language-en). I believe we should update our example and documentation to guide users to use the more recent data product, in line with these guidelines. I haven't read these in detail yet, but lodging the issue as a reminder for intent to explore implementing this. In principle, the use of alternative urban region data sources and queries of these is possible, but it will make things easier for users if we provide direct examples of how to do this.
1.0
Provide example for user of GHS Degree of Urbanisation (2023) data as alternative to UCDB (2019) - Currently our examples use the [GHS Urban Centres Database](https://ghsl.jrc.ec.europa.eu/download.php?ds=ucdb) to identify urban regions, which was available when we were developing our tool in 2020. However, a newer dataset is now available for [Degree of Urbanisation (2023)](https://ghsl.jrc.ec.europa.eu/download.php?ds=DUC) with an accompanying [manual describing its usage to support international comparisons (2021)](https://op.europa.eu/en/publication-detail/-/publication/484c6825-8540-11eb-af5d-01aa75ed71a1/language-en). I believe we should update our example and documentation to guide users to use the more recent data product, in line with these guidelines. I haven't read these in detail yet, but lodging the issue as a reminder for intent to explore implementing this. In principle, the use of alternative urban region data sources and queries of these is possible, but it will make things easier for users if we provide direct examples of how to do this.
process
provide example for user of ghs degree of urbanisation data as alternative to ucdb currently our examples use the to identify urban regions which was available when we were developing our tool in however a newer dataset is now available for with an accompanying i believe we should update our example and documentation to guide users to use the more recent data product in line with these guidelines i haven t read these in detail yet but lodging the issue as a reminder for intent to explore implementing this in principle the use of alternative urban region data sources and queries of these is possible but it will make things easier for users if we provide direct examples of how to do this
1
8,400
11,567,514,944
IssuesEvent
2020-02-20 14:26:24
googleapis/google-oauth-java-client
https://api.github.com/repos/googleapis/google-oauth-java-client
opened
Parallel build is not threadsafe
good first issue help wanted priority: p2 semver: patch type: process
[WARNING] ***************************************************************** [WARNING] * Your build is requesting parallel execution, but project * [WARNING] * contains the following plugin(s) that have goals not marked * [WARNING] * as @threadSafe to support parallel building. * [WARNING] * While this /may/ work fine, please look for plugin updates * [WARNING] * and/or request plugins be made thread-safe. * [WARNING] * If reporting an issue, report it against the plugin in * [WARNING] * question, not against maven-core * [WARNING] ***************************************************************** [WARNING] The following plugins are not marked @threadSafe in Google App Engine extensions to the Google OAuth Client Library for Java.: [WARNING] org.codehaus.mojo:clirr-maven-plugin:2.8 [WARNING] org.apache.maven.plugins:maven-project-info-reports-plugin:3.0.0 [WARNING] Enable debug to see more precisely which goals are not marked @threadSafe. [WARNING] *****************************************************************
1.0
Parallel build is not threadsafe - [WARNING] ***************************************************************** [WARNING] * Your build is requesting parallel execution, but project * [WARNING] * contains the following plugin(s) that have goals not marked * [WARNING] * as @threadSafe to support parallel building. * [WARNING] * While this /may/ work fine, please look for plugin updates * [WARNING] * and/or request plugins be made thread-safe. * [WARNING] * If reporting an issue, report it against the plugin in * [WARNING] * question, not against maven-core * [WARNING] ***************************************************************** [WARNING] The following plugins are not marked @threadSafe in Google App Engine extensions to the Google OAuth Client Library for Java.: [WARNING] org.codehaus.mojo:clirr-maven-plugin:2.8 [WARNING] org.apache.maven.plugins:maven-project-info-reports-plugin:3.0.0 [WARNING] Enable debug to see more precisely which goals are not marked @threadSafe. [WARNING] *****************************************************************
process
parallll buid is not threadsafe your build is requesting parallel execution but project contains the following plugin s that have goals not marked as threadsafe to support parallel building while this may work fine please look for plugin updates and or request plugins be made thread safe if reporting an issue report it against the plugin in question not against maven core the following plugins are not marked threadsafe in google app engine extensions to the google oauth client library for java org codehaus mojo clirr maven plugin org apache maven plugins maven project info reports plugin enable debug to see more precisely which goals are not marked threadsafe
1
65,168
8,791,305,845
IssuesEvent
2018-12-21 12:11:06
algolia/react-instantsearch
https://api.github.com/repos/algolia/react-instantsearch
closed
Make the connector docs more usable
Documentation: API
The terms "provided props" and "exposed props" are confusing, even after you've already used connectors. I propose that there's an example on every connector page that explains clearly in what spot each type is used. Here listed for clarity in case this issue gets indexed in Google before we get around it: --- ## Exposed props Once a component is connected, these props become available on the connected component, you can use them in the JSX you use when using this component ## Provided props These are the props that you can use in your component that you will connect. These are typically things like `items`, to build your UI and `refine`, to be called when you click on a certain `item`. --- An alternative to the example is adding this text (or similar) to those documentations
1.0
Make the connector docs more usable - The terms "provided props" and "exposed props" are confusing, even after you've already used connectors. I propose that there's an example on every connector page that explains clearly in what spot each type is used. Here they are listed for clarity in case this issue gets indexed in Google before we get around to it: --- ## Exposed props Once a component is connected, these props become available on the connected component, you can use them in the JSX you use when using this component ## Provided props These are the props that you can use in your component that you will connect. These are typically things like `items`, to build your UI and `refine`, to be called when you click on a certain `item`. --- An alternative to the example is adding this text (or similar) to those documentations
non_process
make the connector docs more usable the terms provided props and exposed props are confusing even after you ve already used connectors i propose that there s an example on every connector page that explains clearly in what spot each type is used here listed for clarity in case this issue gets indexed in google before we get around it exposed props once a component is connected these props become available on the connected component you can use them in the jsx you use when using this component provided props these are the props that you can use in your component that you will connect these are typically things like items to build your ui and refine to be called when you click on a certain item an alternative to the example is adding this text or similar to those documentations
0
15,150
18,906,784,131
IssuesEvent
2021-11-16 09:55:40
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Validating index length prefixes
process/candidate engines/data model parser topic: indexes team/migrations
MySQL holds special rules about the size of the length prefix. These rules are different on text and bytes columns, and we should maybe catch them in datamodel validation before erroring out in the database. This should follow up the work on redefining native type and connector specific validations.
1.0
Validating index length prefixes - MySQL holds special rules about the size of the length prefix. These rules are different on text and bytes columns, and we should maybe catch them in datamodel validation before erroring out in the database. This should follow up the work on redefining native type and connector specific validations.
process
validating index length prefixes mysql holds special rules about the size of the length prefix these rules are different on text and bytes columns and we should maybe catch them in datamodel validation before erroring out in the database this should follow up the work on redefining native type and connector specific validations
1
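To make the record above more concrete, here is a rough, hypothetical sketch of the kind of check a datamodel validator could run before handing an index definition to the database. The byte limits and the utf8mb4 worst-case width are assumptions chosen for illustration (commonly cited InnoDB figures), not values taken from Prisma's engines.
```
UTF8MB4_BYTES_PER_CHAR = 4    # assumption: worst-case width for utf8mb4 text
MAX_KEY_BYTES = 3072          # assumption: InnoDB limit for DYNAMIC/COMPRESSED row formats

def prefix_length_ok(length, column_kind):
    # Text prefixes are measured in characters, bytes prefixes in raw bytes.
    if column_kind == "text":
        return length * UTF8MB4_BYTES_PER_CHAR <= MAX_KEY_BYTES
    if column_kind == "bytes":
        return length <= MAX_KEY_BYTES
    raise ValueError(f"unsupported column kind: {column_kind}")

print(prefix_length_ok(191, "text"))    # True
print(prefix_length_ok(1000, "text"))   # False: 4000 bytes exceeds the assumed limit
print(prefix_length_ok(3072, "bytes"))  # True
```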
388,474
11,488,103,423
IssuesEvent
2020-02-11 13:18:58
DigitalCampus/django-oppia
https://api.github.com/repos/DigitalCampus/django-oppia
closed
Option to register server
enhancement medium priority
Would allow us to track better which servers are installed (Moodle has similar functionality). Will need some more thinking about exactly how this would work but could also be used to inform oppia server admins if there have been new releases. The 'receiving' url for this data should be a separate app that DC uses - doesn't need to be built directly into Oppia. Will need to be very clear with server admins/implementers that this is happening (what data is shared), and give them option to disable this function easily
1.0
Option to register server - Would allow us to track better which servers are installed (Moodle has similar functionality). Will need some more thinking about exactly how this would work but could also be used to inform oppia server admins if there have been new releases. The 'receiving' url for this data should be a separate app that DC uses - doesn't need to be built directly into Oppia. Will need to be very clear with server admins/implementers that this is happening (what data is shared), and give them option to disable this function easily
non_process
option to register server would allow us to track better which servers are installed moodle has similar functionality will need some more thinking about exactly how this would work but could also be used to inform oppia server admins if there have been new releases the receiving url for this data should be a separate app that dc uses doesn t need to be built directly into oppia will need to be very clear with server admins implementers that this is happening what data is shared and give them option to disable this function easily
0
1,663
4,289,114,089
IssuesEvent
2016-07-17 22:07:18
pwittchen/NetworkEvents
https://api.github.com/repos/pwittchen/NetworkEvents
opened
Release v. 2.1.5
release process
**Initial release notes**: - added the private constructor to `NetworkHelper` class **Things to do**: - [ ] bump library version to 2.1.5 - [ ] upload Archives to Maven Central Repository - [ ] close and release artifact on Nexus - [ ] update gh-pages - [ ] update `CHANGELOG.md` after Maven Sync - [ ] bump library version in `README.md` after Maven Sync - [ ] create new GitHub release
1.0
Release v. 2.1.5 - **Initial release notes**: - added the private constructor to `NetworkHelper` class **Things to do**: - [ ] bump library version to 2.1.5 - [ ] upload Archives to Maven Central Repository - [ ] close and release artifact on Nexus - [ ] update gh-pages - [ ] update `CHANGELOG.md` after Maven Sync - [ ] bump library version in `README.md` after Maven Sync - [ ] create new GitHub release
process
release v initial release notes added the private constructor to networkhelper class things to do bump library version to upload archives to maven central repository close and release artifact on nexus update gh pages update changelog md after maven sync bump library version in readme md after maven sync create new github release
1
3,987
6,917,642,410
IssuesEvent
2017-11-29 09:19:33
itsyouonline/identityserver
https://api.github.com/repos/itsyouonline/identityserver
closed
Implement GUIDs as user identifiers
process_duplicate type_feature
- All users will have at least one user identifier, generated during registration - They are exposed to the user, either on the profile page or in the settings page - When sharing information with an organization (`through the authorization code grant flow`) a GUID is generated for that user-org pair -One is always shared. Therefore we should not need to implement a scope for this. - Allow them to be used as user identifier in api calls (required for at least the user info call, especially when we eventually drop username support) - Also allow them to log in??? - Find a way to transition existing sharing of username and replace this by sharing GUID as primary identifier
1.0
Implement GUIDs as user identifiers - - All users will have at least one user identifier, generated during registration - They are exposed to the user, either on the profile page or in the settings page - When sharing information with an organization (`through the authorization code grant flow`) a GUID is generated for that user-org pair -One is always shared. Therefore we should not need to implement a scope for this. - Allow them to be used as user identifier in api calls (required for at least the user info call, especially when we eventually drop username support) - Also allow them to log in??? - Find a way to transition existing sharing of username and replace this by sharing GUID as primary identifier
process
implement guids as user identifiers all users will have at least one user identifier generated during registration they are exposed to the user either on the profile page or in the settings page when sharing information with an organization through the authorization code grant flow a guid is generated for that user org pair one is always shared therefore we should not need to implement a scope for this allow them to be used as user identifier in api calls required for at least the user info call especially when we eventually drop username support also allow them to log in find a way to transition existing sharing of username and replace this by sharing guid as primary identifier
1
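The record above does not specify how the per user-organization GUID should be derived, only that one is generated when information is shared. As an illustration of the two obvious options, the Python sketch below contrasts a random UUID (stored once per pair) with a deterministic UUIDv5 derivation; the namespace string is a made-up placeholder, not anything defined by itsyou.online.
```
import uuid

NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example-identityserver")  # hypothetical namespace

def random_pair_guid():
    # Option 1: generate once, then persist it with the (user, organization) pair.
    return str(uuid.uuid4())

def deterministic_pair_guid(username, organization):
    # Option 2: derive it, so the same pair always yields the same identifier.
    return str(uuid.uuid5(NAMESPACE, f"{username}:{organization}"))

print(random_pair_guid())
print(deterministic_pair_guid("alice", "example-org"))
```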
237,158
7,756,840,830
IssuesEvent
2018-05-31 14:42:51
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
[Coverity CID: 183481] Insecure data handling in /ext/lib/crypto/mbedtls/library/pkparse.c
Coverity EXT bug priority: low
Static code scan issues seen in File: /ext/lib/crypto/mbedtls/library/pkparse.c Category: Insecure data handling Function: pk_parse_key_pkcs1_der Component: External CID: 183481 Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996
1.0
[Coverity CID: 183481] Insecure data handling in /ext/lib/crypto/mbedtls/library/pkparse.c - Static code scan issues seen in File: /ext/lib/crypto/mbedtls/library/pkparse.c Category: Insecure data handling Function: pk_parse_key_pkcs1_der Component: External CID: 183481 Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996
non_process
insecure data handling in ext lib crypto mbedtls library pkparse c static code scan issues seen in file ext lib crypto mbedtls library pkparse c category insecure data handling function pk parse key der component external cid please fix or provide comments to square it off in coverity in the link
0
4,917
4,712,354,119
IssuesEvent
2016-10-14 16:29:23
JuliaLang/julia
https://api.github.com/repos/JuliaLang/julia
closed
1D array allocation is using jl_new_array
performance
While looking at https://github.com/JuliaLang/julia/issues/18914 I noticed that `Array(Float64, 3)` (the allocation used by comprehension on master, but other constructors are the same) is calling `jl_new_array` which requires allocation of a tuple whereas it should be using `jl_alloc_array_1d`, as on 0.4 and 0.5.
True
1D array allocation is using jl_new_array - While looking at https://github.com/JuliaLang/julia/issues/18914 I noticed that `Array(Float64, 3)` (the allocation used by comprehension on master, but other constructors are the same) is calling `jl_new_array` which requires allocation of a tuple whereas it should be using `jl_alloc_array_1d`, as on 0.4 and 0.5.
non_process
array allocation is using jl new array while looking at i noticed that array the allocation used by comprehension on master but other constructors are the same is calling jl new array which requires allocation of a tuple whereas it should be using jl alloc array as on and
0
112,237
24,241,757,064
IssuesEvent
2022-09-27 07:21:15
OctopusDeploy/Issues
https://api.github.com/repos/OctopusDeploy/Issues
closed
Prompt variables not showing when running a Runbook with Git variables
kind/bug state/triage team/config-as-code
### Team - [X] I've assigned a team label to this issue ### Severity Blocking Runbooks on projects with prompted Git variables ### Version All projects with Git variables ### Latest Version I could reproduce the problem in the latest build ### What happened? When running a Runbook on a Git project with Git variables, prompt variable inputs are not shown ### Reproduction - Create a new Git project with Git variables - Create a Runbook - Run the Runbook (run the latest version, running from a snapshot is not impacted) - Prompt variables are not shown ### Error and Stacktrace _No response_ ### More Information _No response_ ### Workaround If the project is not tenanted, publish a snapshot for the Runbook and run from that snapshot. For tenanted projects, there is no known workaround.
1.0
Prompt variables not showing when running a Runbook with Git variables - ### Team - [X] I've assigned a team label to this issue ### Severity Blocking Runbooks on projects with prompted Git variables ### Version All projects with Git variables ### Latest Version I could reproduce the problem in the latest build ### What happened? When running a Runbook on a Git project with Git variables, prompt variable inputs are not shown ### Reproduction - Create a new Git project with Git variables - Create a Runbook - Run the Runbook (run the latest version, running from a snapshot is not impacted) - Prompt variables are not shown ### Error and Stacktrace _No response_ ### More Information _No response_ ### Workaround If the project is not tenanted, publish a snapshot for the Runbook and run from that snapshot. For tenanted projects, there is no known workaround.
non_process
prompt variables not showing when running a runbook with git variables team i ve assigned a team label to this issue severity blocking runbooks on projects with prompted git variables version all projects with git variables latest version i could reproduce the problem in the latest build what happened when running a runbook on a git project with git variables prompt variable inputs are not shown reproduction create a new git project with git variables create a runbook run the runbook run the latest version running from a snapshot is not impacted prompt variables are not shown error and stacktrace no response more information no response workaround it the project is not tenanted publish a snapshot for the runbook and run from that snapshot for tenanted projects there is no known workaround
0
482,874
13,915,029,973
IssuesEvent
2020-10-20 23:31:33
archesproject/arches
https://api.github.com/repos/archesproject/arches
opened
In the Resource Editor, clicking the information button of a resource or resource-instance-list datatype breaks UI
Priority: High Subject: Resource Manager Type: Bug
**Describe the bug** <!--- By fully explaining what you are encountering, you can help us understand and reproduce the issue. --> <!--- Often times, a screenshot or animated GIF can help show what you are encountering. --> UI: https://ibb.co/9pNKy8k Error: `TypeError: Cannot read property 'node_config_lookup' of undefined at n.updateRelatedResourcesLookup` **To Reproduce** Steps to reproduce the behavior: 1. Load afs 2. Create a resource model with a node of datatype resource-instance or resource-instance-list 3. Navigate to the resource creation page, add a resource instance 4. Click on the `i` information tab, notice the error **Screenshots** If applicable, add screenshots to help explain your problem. <!--- Consider including a Screen Capture: https://github.com/archesproject/arches/wiki/Screen-capture --> UI: https://ibb.co/9pNKy8k **Expected behavior** The related resources summary should load and an error should not be thrown.
1.0
In the Resource Editor, clicking the information button of a resource or resource-instance-list datatype breaks UI - **Describe the bug** <!--- By fully explaining what you are encountering, you can help us understand and reproduce the issue. --> <!--- Often times, a screenshot or animated GIF can help show what you are encountering. --> UI: https://ibb.co/9pNKy8k Error: `TypeError: Cannot read property 'node_config_lookup' of undefined at n.updateRelatedResourcesLookup` **To Reproduce** Steps to reproduce the behavior: 1. Load afs 2. Create a resource model with a node of datatype resource-instance or resource-instance-list 3. Navigate to the resource creation page, add a resource instance 4. Click on the `i` information tab, notice the error **Screenshots** If applicable, add screenshots to help explain your problem. <!--- Consider including a Screen Capture: https://github.com/archesproject/arches/wiki/Screen-capture --> UI: https://ibb.co/9pNKy8k **Expected behavior** The related resources summary should load and an error should not be thrown.
non_process
in the resource editor clicking the information button of an resource or resource instance list datatype breaks ui describe the bug ui error typeerror cannot read property node config lookup of undefined at n updaterelatedresourceslookup to reproduce steps to reproduce the behavior load afs create a resource model of node of datatype resource instance or resource instance list navigate the the resource creation page add a resource instance click on the i information tab notice the error screenshots if applicable add screenshots to help explain your problem ui expected behavior the related resources summary should load and an error should not be thrown
0
7,114
6,760,298,927
IssuesEvent
2017-10-24 20:07:46
brave/browser-ios
https://api.github.com/repos/brave/browser-ios
closed
Confirm old pin before setting new pin
enhancement feature/browser-lock-pin QA/Steps-specified sec-low security suggestion
# Test plan 1. Set browser pin 2. Tap on set pin 3. Ensure it asks for existing pin before setting up a new pin # - Did you search for similar issues before submitting this one? Yes - Description: @jhreis suggested to confirm old pin before setting new pin - Device (iPhone5, iPhone6s plus, iPad 3, ?): All - Brave Version: 1.4.3(17.09.12.16) - Screenshot if needed: - Any related issues:
True
Confirm old pin before setting new pin - # Test plan 1. Set browser pin 2. Tap on set pin 3. Ensure it asks for existing pin before setting up a new pin # - Did you search for similar issues before submitting this one? Yes - Description: @jhreis suggested to confirm old pin before setting new pin - Device (iPhone5, iPhone6s plus, iPad 3, ?): All - Brave Version: 1.4.3(17.09.12.16) - Screenshot if needed: - Any related issues:
non_process
confirm old pin before setting new pin test plan set browser pin tap on set pin ensure it asks for existing pin before setting up a new pin did you search for similar issues before submitting this one yes description jhreis suggested to confirm old pin before setting new pin device plus ipad all brave version screenshot if needed any related issues
0
251,780
8,027,230,347
IssuesEvent
2018-07-27 08:21:49
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
mobile.francetvinfo.fr - video or audio doesn't play
browser-firefox-mobile priority-important
<!-- @browser: Firefox Mobile 63.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://mobile.francetvinfo.fr/en-direct/tv.html **Browser / Version**: Firefox Mobile 63.0 **Operating System**: Android 7.1.2 **Tested Another Browser**: Yes **Problem type**: Video or audio doesn't play **Description**: can't play video **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
mobile.francetvinfo.fr - video or audio doesn't play - <!-- @browser: Firefox Mobile 63.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://mobile.francetvinfo.fr/en-direct/tv.html **Browser / Version**: Firefox Mobile 63.0 **Operating System**: Android 7.1.2 **Tested Another Browser**: Yes **Problem type**: Video or audio doesn't play **Description**: can't play video **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
mobile francetvinfo fr video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes problem type video or audio doesn t play description can t play video steps to reproduce from with ❤️
0
245,881
7,892,242,255
IssuesEvent
2018-06-28 14:30:20
CosmicMind/Material
https://api.github.com/repos/CosmicMind/Material
closed
myriad problems running the example projects
high priority investigate material
e.g. I cannot launch the "Search" example, fails to compile with `Cannot convert value of type '[FileAttributeKey : Any]?' to expected argument type '[String : Any]?'`
1.0
myriad problems running the example projects - e.g. I cannot launch the "Search" example, fails to compile with `Cannot convert value of type '[FileAttributeKey : Any]?' to expected argument type '[String : Any]?'`
non_process
myriad problems running the example projects e g i cannot launch the search example fails to compile with cannot convert value of type to expected argument type
0
202,626
15,287,953,067
IssuesEvent
2021-02-23 16:19:30
Orhan92/orhan-albayati-web-project
https://api.github.com/repos/Orhan92/orhan-albayati-web-project
closed
Formatting API
Test
Fix the formatting of the API on your webapplication. In other words, fix the formatting so that what is shown to the user is user friendly and looks good.
1.0
Formatting API - Fix the formatting of the API on your webapplication. In other words, fix the formatting so that what is shown to the user is user friendly and looks good.
non_process
formatting api fix the formatting of the api on your webapplication in other words fix the formatting so that what is shown to the user is user friendly and looks good
0
189,938
6,803,226,992
IssuesEvent
2017-11-02 23:35:43
davidberard2/SOEN341GROUPC
https://api.github.com/repos/davidberard2/SOEN341GROUPC
closed
Set up menu implementation
feature high priority menu Oct. 25 review sp 3
- [x] Move setting icon to the top menu bar (near the search) - [x] Add favourite icon to the bottom menu bar - [x] Add plus sign (+) in the middle of the bottom menu bar
1.0
Set up menu implementation - - [x] Move setting icon to the top menu bar (near the search) - [x] Add favourite icon to the bottom menu bar - [x] Add plus sign (+) in the middle of the bottom menu bar
non_process
set up menu implementation move setting icon to the top menu bar near the search add favourite icon to the bottom menu bar add plus sign in the middle of the bottom menu bar
0
8,154
11,354,871,210
IssuesEvent
2020-01-24 18:40:01
googleapis/java-iamcredentials
https://api.github.com/repos/googleapis/java-iamcredentials
closed
Promote to Beta
type: process
Package name: **google-cloud-iamcredentials** Current release: **alpha** Proposed release: **beta** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] Server API is beta or GA - [x] Service API is public - [x] Client surface is mostly stable (no known issues that could significantly change the surface) - [x] All manual types and methods have comment documentation - [x] Package name is idiomatic for the platform - [x] At least one integration/smoke test is defined and passing - [x] Central GitHub README lists and points to the per-API README - [x] Per-API README links to product page on cloud.google.com - [x] Manual code has been reviewed for API stability by repo owner ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
1.0
Promote to Beta - Package name: **google-cloud-iamcredentials** Current release: **alpha** Proposed release: **beta** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] Server API is beta or GA - [x] Service API is public - [x] Client surface is mostly stable (no known issues that could significantly change the surface) - [x] All manual types and methods have comment documentation - [x] Package name is idiomatic for the platform - [x] At least one integration/smoke test is defined and passing - [x] Central GitHub README lists and points to the per-API README - [x] Per-API README links to product page on cloud.google.com - [x] Manual code has been reviewed for API stability by repo owner ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
process
promote to beta package name google cloud iamcredentials current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
1
18,381
24,510,856,484
IssuesEvent
2022-10-10 21:17:13
apache/arrow-rs
https://api.github.com/repos/apache/arrow-rs
closed
Release `object_store` `0.5.1`
development-process object-store
Follow on from https://github.com/apache/arrow-rs/issues/2620 * Planned Release Candidate: 2022-10-07 * Planned Release and Publish to crates.io: 2022-10-10 Items: - [x] Update changelog and readme: https://github.com/apache/arrow-rs/pull/2824 - [x] Create release candidate: https://lists.apache.org/thread/6hcr49coc07fmtjkoqfn32hxjwqmjjkz - [x] Release candidate approved https://lists.apache.org/thread/253t83fvtngmwtxnonb71gm6wyjhwght - [x] Release to crates.io
1.0
Release `object_store` `0.5.1` - Follow on from https://github.com/apache/arrow-rs/issues/2620 * Planned Release Candidate: 2022-10-07 * Planned Release and Publish to crates.io: 2022-10-10 Items: - [x] Update changelog and readme: https://github.com/apache/arrow-rs/pull/2824 - [x] Create release candidate: https://lists.apache.org/thread/6hcr49coc07fmtjkoqfn32hxjwqmjjkz - [x] Release candidate approved https://lists.apache.org/thread/253t83fvtngmwtxnonb71gm6wyjhwght - [x] Release to crates.io
process
release object store follow on from planned release candidate planned release and publish to crates io items update changelog and readme create release candidate release candidate approved release to crates io
1
20,562
27,222,636,690
IssuesEvent
2023-02-21 07:11:04
polarismesh/polaris
https://api.github.com/repos/polarismesh/polaris
closed
Cached storeTime is updated abnormally; service data updated through the console is not correctly synced to the SDK
bug service need-feedback in processed
**Describe the bug** Instance information modified through the console: ![image](https://user-images.githubusercontent.com/2884774/209088470-add7e9b1-47ff-4348-af20-acf043429b00.png) Instance information pulled through the SDK: ![image](https://user-images.githubusercontent.com/2884774/209088657-02bfa39a-b9d8-48d6-86b4-b33fdcb87051.png) The two are inconsistent. Checking polaris-cache.log shows error messages: 2022-12-20T14:32:36.455434Z error cache cache/store_time.go:47 [Store][Time] watch store time {"error": "Error 1040: Too many connections"} 2022-12-20T14:37:43.447123Z error cache cache/store_time.go:47 [Store][Time] watch store time {"error": "Error 1040: Too many connections"} **Environment** - Version: 1.12.1
1.0
Cached storeTime is updated abnormally; service data updated through the console is not correctly synced to the SDK - **Describe the bug** Instance information modified through the console: ![image](https://user-images.githubusercontent.com/2884774/209088470-add7e9b1-47ff-4348-af20-acf043429b00.png) Instance information pulled through the SDK: ![image](https://user-images.githubusercontent.com/2884774/209088657-02bfa39a-b9d8-48d6-86b4-b33fdcb87051.png) The two are inconsistent. Checking polaris-cache.log shows error messages: 2022-12-20T14:32:36.455434Z error cache cache/store_time.go:47 [Store][Time] watch store time {"error": "Error 1040: Too many connections"} 2022-12-20T14:37:43.447123Z error cache cache/store_time.go:47 [Store][Time] watch store time {"error": "Error 1040: Too many connections"} **Environment** - Version: 1.12.1
process
cached storetime is updated abnormally service data updated through the console is not correctly synced to the sdk describe the bug instance information modified through the console instance information pulled through the sdk the two are inconsistent checking polaris cache log shows error messages error cache cache store time go watch store time error error too many connections error cache cache store time go watch store time error error too many connections environment version
1
39,171
5,221,321,733
IssuesEvent
2017-01-27 00:59:04
coreos/etcd
https://api.github.com/repos/coreos/etcd
closed
TestCtlV3AuthMemberRemove: got etcdserver: server stopped
area/testing
via semaphore for unrelated PR (https://semaphoreci.com/coreos/etcd/branches/pull-request-7159/builds/4) ``` --- FAIL: TestCtlV3AuthMemberRemove (2.78s) ctl_v3_auth_test.go:515: read /dev/ptmx: input/output error (expected "eabdbb777cf498cb removed from cluster 4529f4d7b68cdaf9", got ["Error: etcdserver: server stopped\r\n"]) ```
1.0
TestCtlV3AuthMemberRemove: got etcdserver: server stopped - via semaphore for unrelated PR (https://semaphoreci.com/coreos/etcd/branches/pull-request-7159/builds/4) ``` --- FAIL: TestCtlV3AuthMemberRemove (2.78s) ctl_v3_auth_test.go:515: read /dev/ptmx: input/output error (expected "eabdbb777cf498cb removed from cluster 4529f4d7b68cdaf9", got ["Error: etcdserver: server stopped\r\n"]) ```
non_process
got etcdserver server stopped via sempahore for unrelated pr fail ctl auth test go read dev ptmx input output error expected removed from cluster got
0
52,884
6,284,925,947
IssuesEvent
2017-07-19 09:03:16
opensistemas-hub/osbrain
https://api.github.com/repos/opensistemas-hub/osbrain
opened
Support for parallel test coverage
enhancement test
Check [Code Climate](https://docs.codeclimate.com/docs/setting-up-test-coverage?utm_source=up&utm_medium=email&utm_campaign=201706&__s=cxfq3cvop3k94ucncqfc) or other alternatives. The idea is to improve coverage reports that sum-up results from different platforms (i.e.: Windows branches, Python < 3.5 branches...). This way we could remove `if sys.version_info` from `.coveragerc`.
1.0
Support for parallel test coverage - Check [Code Climate](https://docs.codeclimate.com/docs/setting-up-test-coverage?utm_source=up&utm_medium=email&utm_campaign=201706&__s=cxfq3cvop3k94ucncqfc) or other alternatives. The idea is to improve coverage reports that sum-up results from different platforms (i.e.: Windows branches, Python < 3.5 branches...). This way we could remove `if sys.version_info` from `.coveragerc`.
non_process
support for parallel test coverage check or other alternatives the idea is to improve coverage reports that sum up results from different platforms i e windows branches python branches this way we could remove if sys version info from coveragerc
0
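As background for the record above: coverage.py can already merge data files produced by separate runs, which is the mechanism a cross-platform report would build on. The sketch below is a minimal illustration using coverage.py's public API, assuming the per-run ".coverage.*" data files from the different CI jobs have been collected into the current directory; it is not taken from osBrain's own configuration.
```
import coverage

cov = coverage.Coverage()
cov.combine()   # merge all .coverage.* data files found alongside the data file
cov.save()
cov.report()    # print one combined report across platforms / Python versions
```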
8,869
11,964,747,124
IssuesEvent
2020-04-05 20:44:56
googleapis/google-cloud-common
https://api.github.com/repos/googleapis/google-cloud-common
closed
Create our standard labels for the gcloud-common repo
type: process
This is a repo that our language-specific manual layers depend upon. We need the standard labels defined here, too.
1.0
Create our standard labels for the gcloud-common repo - This is a repo that our language-specific manual layers depend upon. We need the standard labels defined here, too.
process
create our standard labels for the gcloud common repo this is a repo that our language specific manual layers depend upon we need the standard labels defined here too
1
17,518
23,329,215,855
IssuesEvent
2022-08-09 02:11:53
streamnative/flink
https://api.github.com/repos/streamnative/flink
closed
[BUG][Stream][FLINK-28609] PulsarSchema didn't get properly serialized
compute/data-processing type/bug
The reproduce code is shown here: https://github.com/JacekWislicki/vp-test2 PulsarSchema only serializes the first 1018 bytes when the schema data is too big. ``` Exception in thread "main" org.apache.flink.util.FlinkException: Failed to execute job 'Flink Streaming Job'. at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2108) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1983) at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:68) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951) at com.example.test2.flinkjob.SimpleJob2.main(SimpleJob2.java:30) Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster. at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:319) at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75) at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642) at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:479) at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016) at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665) at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598) at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177) Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster. at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1769) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) at java.base/java.lang.Thread.run(Thread.java:832) Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: pulsar-source -> Sink: Unnamed at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314) at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1766) ... 3 more Caused by: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: pulsar-source -> Sink: Unnamed at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:319) at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1764) ... 
3 more Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: pulsar-source -> Sink: Unnamed at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:229) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertex(DefaultExecutionGraph.java:849) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertices(DefaultExecutionGraph.java:839) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:798) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:780) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:194) at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:149) at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:363) at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:208) at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:191) at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:139) at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:135) at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:115) at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:345) at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:322) at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:106) at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:94) at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ... 
4 more Caused by: java.lang.IllegalStateException at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177) at org.apache.flink.connector.pulsar.common.schema.PulsarSchema.readObject(PulsarSchema.java:167) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at java.base/java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1201) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2357) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:499) at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:457) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589) at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:67) at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:433) at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:223) ... 21 more ```
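The IllegalStateException above is raised from PulsarSchema.readObject during Java deserialization, and the body notes that only the first 1018 bytes of a large schema survive. One plausible cause for that symptom — an assumption here, not a confirmed reading of the Flink source — is the InputStream.read(byte[]) contract: a single call may legally return fewer bytes than requested, so deserialization code has to loop or use readFully. The sketch below only illustrates that general pitfall; the class PartialReadDemo, its helper methods, and the 1018-byte chunk size are invented for demonstration and are not Flink code.
```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PartialReadDemo {

    // Fragile pattern: a single read() may legally return fewer bytes than requested.
    static byte[] readOnce(InputStream in, int length) throws IOException {
        byte[] buffer = new byte[length];
        int bytesRead = in.read(buffer); // not guaranteed to fill the buffer
        if (bytesRead != length) {
            throw new IllegalStateException("expected " + length + " bytes, got " + bytesRead);
        }
        return buffer;
    }

    // Robust pattern: readFully() loops internally until the buffer is filled.
    static byte[] readAll(InputStream in, int length) throws IOException {
        byte[] buffer = new byte[length];
        new DataInputStream(in).readFully(buffer);
        return buffer;
    }

    // A stream that hands out at most 1018 bytes per call, mimicking chunked delivery.
    static InputStream chunkedStream(byte[] payload) {
        return new ByteArrayInputStream(payload) {
            @Override
            public synchronized int read(byte[] b, int off, int len) {
                return super.read(b, off, Math.min(len, 1018));
            }
        };
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[4096]; // stand-in for a schema larger than one chunk

        System.out.println("readAll recovered "
                + readAll(chunkedStream(payload), payload.length).length + " bytes");

        try {
            readOnce(chunkedStream(payload), payload.length);
        } catch (IllegalStateException e) {
            System.out.println("readOnce failed: " + e.getMessage());
        }
    }
}
```
If this is indeed the failure mode, reading with DataInputStream.readFully (or an explicit loop) makes the deserialization independent of how the stream happens to chunk its data.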
1.0
[BUG][Stream][FLINK-28609] PulsarSchema didn't get properly serialized - The reproduce code is shown here: https://github.com/JacekWislicki/vp-test2 PulsarSchema only serializes the first 1018 bytes when the schema data is too big. ``` Exception in thread "main" org.apache.flink.util.FlinkException: Failed to execute job 'Flink Streaming Job'. at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2108) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1983) at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:68) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951) at com.example.test2.flinkjob.SimpleJob2.main(SimpleJob2.java:30) Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster. at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:319) at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75) at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642) at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:479) at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016) at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665) at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598) at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177) Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster. at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1769) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) at java.base/java.lang.Thread.run(Thread.java:832) Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: pulsar-source -> Sink: Unnamed at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314) at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1766) ... 
3 more Caused by: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: pulsar-source -> Sink: Unnamed at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:319) at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1764) ... 3 more Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: pulsar-source -> Sink: Unnamed at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:229) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertex(DefaultExecutionGraph.java:849) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertices(DefaultExecutionGraph.java:839) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:798) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:780) at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:194) at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:149) at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:363) at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:208) at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:191) at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:139) at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:135) at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:115) at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:345) at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:322) at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:106) at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:94) at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ... 
4 more Caused by: java.lang.IllegalStateException at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177) at org.apache.flink.connector.pulsar.common.schema.PulsarSchema.readObject(PulsarSchema.java:167) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at java.base/java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1201) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2357) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496) at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390) at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2191) at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1685) at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:499) at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:457) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602) at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589) at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:67) at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:433) at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:223) ... 21 more ```
process
pulsarschema didn t get properly serialized the reproduce code is shown here pulsarschema only serializes the first bytes when the schema data is too big exception in thread main org apache flink util flinkexception failed to execute job flink streaming job at org apache flink streaming api environment streamexecutionenvironment executeasync streamexecutionenvironment java at org apache flink streaming api environment streamexecutionenvironment execute streamexecutionenvironment java at org apache flink streaming api environment localstreamenvironment execute localstreamenvironment java at org apache flink streaming api environment streamexecutionenvironment execute streamexecutionenvironment java at com example flinkjob main java caused by java lang runtimeexception org apache flink runtime client jobinitializationexception could not start the jobmaster at org apache flink util exceptionutils rethrow exceptionutils java at org apache flink util function functionutils lambda uncheckedfunction functionutils java at java base java util concurrent completablefuture uniapply tryfire completablefuture java at java base java util concurrent completablefuture completion exec completablefuture java at java base java util concurrent forkjointask doexec forkjointask java at java base java util concurrent forkjoinpool workqueue toplevelexec forkjoinpool java at java base java util concurrent forkjoinpool scan forkjoinpool java at java base java util concurrent forkjoinpool runworker forkjoinpool java at java base java util concurrent forkjoinworkerthread run forkjoinworkerthread java caused by org apache flink runtime client jobinitializationexception could not start the jobmaster at org apache flink runtime jobmaster defaultjobmasterserviceprocess lambda new defaultjobmasterserviceprocess java at java base java util concurrent completablefuture uniwhencomplete completablefuture java at java base java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java base java util concurrent completablefuture postcomplete completablefuture java at java base java util concurrent completablefuture asyncsupply run completablefuture java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java util concurrent completionexception java lang runtimeexception org apache flink runtime jobexception cannot instantiate the coordinator for operator source pulsar source sink unnamed at java base java util concurrent completablefuture encodethrowable completablefuture java at java base java util concurrent completablefuture completethrowable completablefuture java at java base java util concurrent completablefuture asyncsupply run completablefuture java more caused by java lang runtimeexception org apache flink runtime jobexception cannot instantiate the coordinator for operator source pulsar source sink unnamed at org apache flink util exceptionutils rethrow exceptionutils java at org apache flink util function functionutils lambda uncheckedsupplier functionutils java at java base java util concurrent completablefuture asyncsupply run completablefuture java more caused by org apache flink runtime jobexception cannot instantiate the coordinator for operator source pulsar source sink unnamed at org apache flink runtime executiongraph executionjobvertex initialize executionjobvertex java at org apache flink runtime 
executiongraph defaultexecutiongraph initializejobvertex defaultexecutiongraph java at org apache flink runtime executiongraph defaultexecutiongraph initializejobvertices defaultexecutiongraph java at org apache flink runtime executiongraph defaultexecutiongraph attachjobgraph defaultexecutiongraph java at org apache flink runtime executiongraph defaultexecutiongraph attachjobgraph defaultexecutiongraph java at org apache flink runtime executiongraph defaultexecutiongraphbuilder buildgraph defaultexecutiongraphbuilder java at org apache flink runtime scheduler defaultexecutiongraphfactory createandrestoreexecutiongraph defaultexecutiongraphfactory java at org apache flink runtime scheduler schedulerbase createandrestoreexecutiongraph schedulerbase java at org apache flink runtime scheduler schedulerbase schedulerbase java at org apache flink runtime scheduler defaultscheduler defaultscheduler java at org apache flink runtime scheduler defaultscheduler defaultscheduler java at org apache flink runtime scheduler defaultschedulerfactory createinstance defaultschedulerfactory java at org apache flink runtime jobmaster defaultslotpoolserviceschedulerfactory createscheduler defaultslotpoolserviceschedulerfactory java at org apache flink runtime jobmaster jobmaster createscheduler jobmaster java at org apache flink runtime jobmaster jobmaster jobmaster java at org apache flink runtime jobmaster factories defaultjobmasterservicefactory internalcreatejobmasterservice defaultjobmasterservicefactory java at org apache flink runtime jobmaster factories defaultjobmasterservicefactory lambda createjobmasterservice defaultjobmasterservicefactory java at org apache flink util function functionutils lambda uncheckedsupplier functionutils java more caused by java lang illegalstateexception at org apache flink util preconditions checkstate preconditions java at org apache flink connector pulsar common schema pulsarschema readobject pulsarschema java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at java base java io objectstreamclass invokereadobject objectstreamclass java at java base java io objectinputstream readserialdata objectinputstream java at java base java io objectinputstream readordinaryobject objectinputstream java at java base java io objectinputstream objectinputstream java at java base java io objectinputstream defaultreadfields objectinputstream java at java base java io objectinputstream readserialdata objectinputstream java at java base java io objectinputstream readordinaryobject objectinputstream java at java base java io objectinputstream objectinputstream java at java base java io objectinputstream defaultreadfields objectinputstream java at java base java io objectinputstream readserialdata objectinputstream java at java base java io objectinputstream readordinaryobject objectinputstream java at java base java io objectinputstream objectinputstream java at java base java io objectinputstream defaultreadfields objectinputstream java at java base java io objectinputstream readserialdata objectinputstream java at java base java io objectinputstream readordinaryobject objectinputstream java at java base java io objectinputstream objectinputstream java at java base java io objectinputstream readobject 
objectinputstream java at java base java io objectinputstream readobject objectinputstream java at org apache flink util instantiationutil deserializeobject instantiationutil java at org apache flink util instantiationutil deserializeobject instantiationutil java at org apache flink util instantiationutil deserializeobject instantiationutil java at org apache flink util serializedvalue deserializevalue serializedvalue java at org apache flink runtime operators coordination operatorcoordinatorholder create operatorcoordinatorholder java at org apache flink runtime executiongraph executionjobvertex initialize executionjobvertex java more
1
8,839
11,947,789,461
IssuesEvent
2020-04-03 10:34:18
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Mar, May and Dec missing in report
log-processing
The log: ---------- ``` 0.0.0.0 - - [01/Jan/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Feb/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Mar/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Apr/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/May/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Jun/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Jul/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Aug/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Sep/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Okt/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Nov/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Dec/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" ``` ---------- locale LANG="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_CTYPE="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" 
LC_MONETARY="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_ALL= The static-report: 9 valid Requests, 3 failed Requests. Mar, May and Dec missing.
1.0
Mar, May and Dec missing in report - The log: ---------- ``` 0.0.0.0 - - [01/Jan/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Feb/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Mar/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Apr/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/May/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Jun/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Jul/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Aug/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Sep/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Okt/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Nov/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" 0.0.0.0 - - [01/Dec/2019:18:51:58 +0100] "GET /test.html HTTP/1.1" 200 1702 "https://test.test/test.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.72" "Traffic IN:1309 OUT:5570" "ReqTime:0 sec" ``` ---------- locale LANG="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" 
LC_CTYPE="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_ALL= The static-report: 9 valid Requests, 3 failed Requests. Mar, May and Dec missing.
process
mar may and dec missing in report the log get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec get test html http mozilla windows nt applewebkit khtml like gecko chrome safari opr traffic in out reqtime sec locale lang en us utf lc collate en us utf lc ctype en us utf lc messages en us utf lc monetary en us utf lc numeric en us utf lc time en us utf lc all the static report valid requests failed requests mar may and dec missing
1
321,267
27,519,695,202
IssuesEvent
2023-03-06 14:19:14
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Load testing of an Azure web app with Azure AD authentication
triaged assigned-to-author product-question Pri2 load-testing/svc
How to perform load testing of an Azure web app that uses Azure AD authentication? --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 6df4f986-dde4-05da-a3c6-495fb9dea3e7 * Version Independent ID: 0b5253e1-9e2c-2c53-9a12-2f8f0e2a1e53 * Content: [Load test private endpoints - Azure Load Testing](https://learn.microsoft.com/en-us/azure/load-testing/how-to-test-private-endpoint) * Content Source: [articles/load-testing/how-to-test-private-endpoint.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/load-testing/how-to-test-private-endpoint.md) * Service: **load-testing** * GitHub Login: @ntrogh * Microsoft Alias: **nicktrog**
1.0
Load Testing of an azure web app having azure ad authentication - How to performLoad Testing of an azure web app having azure ad authentication? --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 6df4f986-dde4-05da-a3c6-495fb9dea3e7 * Version Independent ID: 0b5253e1-9e2c-2c53-9a12-2f8f0e2a1e53 * Content: [Load test private endpoints - Azure Load Testing](https://learn.microsoft.com/en-us/azure/load-testing/how-to-test-private-endpoint) * Content Source: [articles/load-testing/how-to-test-private-endpoint.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/load-testing/how-to-test-private-endpoint.md) * Service: **load-testing** * GitHub Login: @ntrogh * Microsoft Alias: **nicktrog**
non_process
load testing of an azure web app having azure ad authentication how to performload testing of an azure web app having azure ad authentication document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service load testing github login ntrogh microsoft alias nicktrog
0
15,850
20,029,415,294
IssuesEvent
2022-02-02 02:30:59
RobertCraigie/prisma-client-py
https://api.github.com/repos/RobertCraigie/prisma-client-py
opened
Scalar relational fields are not included in `create_many` input
bug/2-confirmed kind/bug process/candidate level/intermediate priority/high
<!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> Currently the following query will cause type checkers to report an error even though it is a supported query: ```py r = await client.profile.create_many( data=[ {'user_id': 'a', 'description': 'Foo'}, {'user_id': 'b', 'description': 'Foo 2'}, ], ) ``` ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> Run pyright on the above query ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> No errors ## Prisma information <!-- Your Prisma schema, Prisma Client Python queries, ... Do not include your database credentials when sharing your Prisma schema! --> Internal test schema ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> Mac OS - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> PostgreSQL - Python version: <!--[Run `python -V` to see your Python version]--> 3.9.9
1.0
Scalar relational fields are not included in `create_many` input - <!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> Currently the following query will cause type checkers to report an error even though it is a supported query: ```py r = await client.profile.create_many( data=[ {'user_id': 'a', 'description': 'Foo'}, {'user_id': 'b', 'description': 'Foo 2'}, ], ) ``` ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> Run pyright on the above query ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> No errors ## Prisma information <!-- Your Prisma schema, Prisma Client Python queries, ... Do not include your database credentials when sharing your Prisma schema! --> Internal test schema ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> Mac OS - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> PostgreSQL - Python version: <!--[Run `python -V` to see your Python version]--> 3.9.9
process
scalar relational fields are not included in create many input thanks for helping us improve prisma client python 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by enabling additional logging output see for how to enable additional logging output bug description currently the following query will cause type checkers to report an error even though it is a supported query py r await client profile create many data user id a description foo user id b description foo how to reproduce steps to reproduce the behavior go to change run see error run pyright on the above query expected behavior no errors prisma information your prisma schema prisma client python queries do not include your database credentials when sharing your prisma schema internal test schema environment setup os mac os database postgresql python version
1