Column summary (dtype, with observed value range, string length range, or distinct-class count):

| column | dtype | observed range / length / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 classes |
| text_combine | string | length 96 – 211k |
| label | string | 2 classes |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
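In every sample row below, `binary_label` is the integer encoding of the string target `label` ("process" → 1, "non_process" → 0). A minimal sketch of that relationship on a toy frame mirroring a few of the preview's columns (values abridged; the real dataset has 15 columns):

```python
import pandas as pd

# Toy frame mirroring part of the preview's schema.
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent", "IssuesEvent"],
    "action": ["closed", "opened", "closed"],
    "label": ["process", "non_process", "process"],
    "binary_label": [1, 0, 1],
})

# `binary_label` reproduces `label`: "process" -> 1, "non_process" -> 0.
encoded = (df["label"] == "process").astype("int64")
assert encoded.equals(df["binary_label"])
```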
**Row 12,218** (id 14,743,058,355)
- type: IssuesEvent
- created_at: 2021-01-07 13:20:10
- repo: kdjstudios/SABillingGitlab
- repo_url: https://api.github.com/repos/kdjstudios/SABillingGitlab
- action: closed
- title: Multiple Payment scenario - Payment Deletion
- labels: anc-process anp-1 ant-enhancement grt-payments
- body:
In GitLab by @kdjstudios on Jul 9, 2019, 10:00
**Submitted by:** Gary
**Helpdesk:** http://gitlab.aavaz.biz/AnswerNet/SABilling/issues/1361#note_43716
**Server:** All
**Client/Site:** All
**Account:** All
**Issue:**
We either:
1. Display a notice "You are about to delete a payment on an account that is not the latest payment. This could result in discrepancies with aging buckets."
2. Delete the payment and then Reapply the latest payment to the invoices correctly.
- index: 1.0
- label: process
- text: multiple payment scenario payment deletion in gitlab by kdjstudios on jul submitted by gary helpdesk server all client site all account all issue we either display a notice you are about to delete a payment on an account that is not the latest payment this could result in discrepancies with aging buckets delete the payment and then reapply the latest payment to the invoices correctly
- binary_label: 1
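The `text` column reads as a mechanically normalized copy of `text_combine` (title + body): lowercased, with URLs, digits, and punctuation stripped, stopwords kept. A rough reconstruction of that cleaning, assuming letter-only tokenization (the exact pipeline is not given in the preview):

```python
import re

def normalize(raw: str) -> str:
    """Approximate the `text` column: lowercase, drop URLs,
    keep only runs of letters (digits, punctuation, @ removed)."""
    lowered = re.sub(r"https?://\S+", " ", raw.lower())
    # [^\W\d_]+ matches letter runs only (word chars minus digits/underscore).
    return " ".join(re.findall(r"[^\W\d_]+", lowered))

normalize("Multiple Payment scenario - Payment Deletion - "
          "In GitLab by @kdjstudios on Jul 9, 2019, 10:00")
# -> "multiple payment scenario payment deletion in gitlab by kdjstudios on jul"
```

On this row's title prefix the sketch matches the dataset's `text` value token for token, but treat it as an approximation, not the dataset's actual preprocessing code.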
**Row 149,591** (id 19,581,701,390)
- type: IssuesEvent
- created_at: 2022-01-04 22:21:22
- repo: timf-deleteme/ng1
- repo_url: https://api.github.com/repos/timf-deleteme/ng1
- action: opened
- title: CVE-2020-15095 (Medium) detected in npm-3.10.10.tgz
- labels: security vulnerability
- body:
## CVE-2020-15095 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-3.10.10.tgz</b></p></summary>
<p>a package manager for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm/-/npm-3.10.10.tgz">https://registry.npmjs.org/npm/-/npm-3.10.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- :x: **npm-3.10.10.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/timf-deleteme/ng1/commit/49eb31e591a7aadee01c5d77b0f75cad634572cb">49eb31e591a7aadee01c5d77b0f75cad634572cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the npm CLI prior to 6.14.6 are vulnerable to an information exposure vulnerability through log files. The CLI supports URLs like "<protocol>://[<user>[:<password>]@]<hostname>[:<port>][:][/]<path>". The password value is not redacted and is printed to stdout and also to any generated log files.
<p>Publish Date: 2020-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15095>CVE-2020-15095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/cli/security/advisories/GHSA-93f3-23rq-pjfp">https://github.com/npm/cli/security/advisories/GHSA-93f3-23rq-pjfp</a></p>
<p>Release Date: 2020-07-07</p>
<p>Fix Resolution: npm - 6.14.6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"npm","packageVersion":"3.10.10","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"npm - 6.14.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-15095","vulnerabilityDetails":"Versions of the npm CLI prior to 6.14.6 are vulnerable to an information exposure vulnerability through log files. The CLI supports URLs like \"\u003cprotocol\u003e://[\u003cuser\u003e[:\u003cpassword\u003e]@]\u003chostname\u003e[:\u003cport\u003e][:][/]\u003cpath\u003e\". The password value is not redacted and is printed to stdout and also to any generated log files.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15095","cvss3Severity":"medium","cvss3Score":"4.4","cvss3Metrics":{"A":"None","AC":"High","PR":"Low","S":"Unchanged","C":"High","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
- index: True
- label: non_process
- binary_label: 0
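Several "non_process" samples (this row and row 159,694 below) are automated WhiteSource security reports whose titles embed the CVE identifier. A small sketch of pulling that identifier out of such a title with a regex (pattern is an assumption, not part of the dataset):

```python
import re

# CVE ids are "CVE-<year>-<4+ digit sequence>".
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

title = "CVE-2020-15095 (Medium) detected in npm-3.10.10.tgz"
match = CVE_RE.search(title)
assert match is not None
print(match.group())  # -> CVE-2020-15095
```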
**Row 22,238** (id 30,789,370,279)
- type: IssuesEvent
- created_at: 2023-07-31 15:09:27
- repo: ovh/public-cloud-roadmap
- repo_url: https://api.github.com/repos/ovh/public-cloud-roadmap
- action: closed
- title: Notebooks for Apache Spark - Alpha
- labels: Data Processing Available in Alpha Spark
- body:
## User story
As a customer,
I want to process data through Jupyter Notebooks
so that
I could explore my data in an interactive way
## Acceptance criteria
- I can launch notebook through the Manager.
- I can manage my notebook with the Manager.
- I can access the Jupyterlab of my notebook.
- I can use Apache Spark from my notebook through an interactive session.
- I can select the Apache Spark version of the cluster behind my notebook.
- I can select the amount of resources to allocate to my Apache Spark cluster from the Jupyterlab kernels.
## Follow, vote and give your feedback
You can follow this task with the notification on the right tab.
Ask us anything here in the comments below, and vote with emojis for most requested items !
👍 to vote for this issue
## Discuss on Discord
Feel free to discuss with us on https://discord.gg/ovhcloud
- index: 1.0
- label: process
- binary_label: 1
**Row 159,694** (id 20,085,893,856)
- type: IssuesEvent
- created_at: 2022-02-05 01:08:11
- repo: AkshayMukkavilli/Tensorflow
- repo_url: https://api.github.com/repos/AkshayMukkavilli/Tensorflow
- action: opened
- title: CVE-2021-41204 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
- labels: security vulnerability
- body:
## CVE-2021-41204 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /Tensorflow/src/requirements.txt</p>
<p>Path to vulnerable library: /teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. In affected versions during TensorFlow's Grappler optimizer phase, constant folding might attempt to deep copy a resource tensor. This results in a segfault, as these tensors are supposed to not change. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-11-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41204>CVE-2021-41204</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-786j-5qwq-r36x">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-786j-5qwq-r36x</a></p>
<p>Release Date: 2021-11-05</p>
<p>Fix Resolution: tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- label: non_process
- binary_label: 0
**Row 266,428** (id 23,237,804,076)
- type: IssuesEvent
- created_at: 2022-08-03 13:18:05
- repo: OpenLiberty/open-liberty
- repo_url: https://api.github.com/repos/OpenLiberty/open-liberty
- action: closed
- title: Test Failure: testPollIntervalStable insufficient attempts
- labels: team:Zombie Apocalypse test bug
- body:
Test Failure: com.ibm.ws.concurrent.persistent.fat.autonomicalpolling1serv.AutonomicalPolling1ServerTest.testPollIntervalStable
```
testPollIntervalStable:junit.framework.AssertionFailedError: 2021-12-30-19:47:02:916 testPollIntervalStable failed after multiple attemps. This likely means the the autonomical poll interval algorithm is not working.
at com.ibm.ws.concurrent.persistent.fat.autonomicalpolling1serv.AutonomicalPolling1ServerTest.testPollIntervalStable(AutonomicalPolling1ServerTest.java:696)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:198)
at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:319)
at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:172)
```
This test only make 5 attempts at something that is vulnerable to timing, which is normally good enough, but not sufficient this time around.
- index: 1.0
- label: non_process
- binary_label: 0
**Row 392,713** (id 26,955,391,090)
- type: IssuesEvent
- created_at: 2023-02-08 14:35:36
- repo: biaslab/RxInfer.jl
- repo_url: https://api.github.com/repos/biaslab/RxInfer.jl
- action: closed
- title: Stuck on MarginalRuleMethodError warning
- labels: documentation
- body:
I am trying to modify the GP regression example to create a local level (forecasting) model. I tried to follow the Missing Data example to handle the missing data. However I get a warning about MarginalRuleMethodError and cannot find documentation/examples on how to proceed. I would echo issue #15 about having more examples for forecasting cases. Thank you!
My model is defined as:
```
@model function locallevel(n, σ², σ²_noise)
f_0 ~ Normal(mean = 0, precision = 1 / σ²)
f = randomvar(n)
y = datavar(Float64, n) where { allow_missing = true }
f_prev = f_0
for i=1:n
f[i] ~ Normal(mean = f_prev, precision = 1/σ²)
y[i] ~ Normal(mean = f[i], precision = 1 / σ²_noise)
f_prev = f[i]
end
return f, y
end
```
The data is the same as in the GP regression example with some missing data for the prediction part:
```
Random.seed!(10)
n = 100
σ²_noise = 0.04;
t = collect(range(-2, 2, length=n)); #timeline
f_true = sinc.(t); # true process
f_noisy = f_true + sqrt(σ²_noise) * randn(n); #noisy process
pos = 1:100
t_obser = t[pos]; # time where we observe data
y_data = Array{Union{Float64,Missing}}(missing, n)
for i in pos
y_data[i] = f_noisy[i]
end
for i in 80:100
y_data[i] = missing
end
θ = [1., 1.]; # store [l, σ²]
Δt = [t[1]]; # time difference
append!(Δt, t[2:end] - t[1:end-1]);
```
I also added the following rules for missing data:
```
@rule NormalMeanPrecision(:μ, Marginalisation) (q_out::Any, q_τ::Missing) = missing
@rule NormalMeanPrecision(:μ, Marginalisation) (q_out::Missing, q_τ::Any) = missing
@rule NormalMeanPrecision(:μ, Marginalisation) (m_out::Missing, q_τ::PointMass, ) = missing
@rule NormalMeanPrecision(:τ, Marginalisation) (q_out::Any, q_μ::Missing) = missing
@rule NormalMeanPrecision(:τ, Marginalisation) (q_out::Missing, q_μ::Any) = missing
@rule typeof(+)(:in1, Marginalisation) (m_out::Missing, m_in2::Any) = missing
@rule typeof(+)(:in1, Marginalisation) (m_out::Any, m_in2::Missing) = missing
```
And call the inference:
```
result = inference(
model = locallevel(n, 1, 1),
data = (y = y_data,),
free_energy = true
)
```
When running, I am getting the following warning which I do not know how to handle. Do you have a suggestion? Thank you
```
MarginalRuleMethodError: no method matching rule for the given arguments
Possible fix, define:
@marginalrule NormalMeanPrecision(:out_μ) (m_out::Missing, m_μ::NormalWeightedMeanPrecision, q_τ::PointMass, ) = begin
return ...
end
```
- index: 1.0
- label: non_process
- binary_label: 0
|
21,157
| 28,132,167,187
|
IssuesEvent
|
2023-04-01 01:31:44
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] [Bug] `orderable-columns` should work if query contains two columns with the same name from different tables
|
Type:Bug .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
```clj
(-> (lib/query-for-table-name meta/metadata-provider "VENUES")
(lib/join (-> (lib/join-clause
(meta/table-metadata :categories)
(lib/=
(lib/field "VENUES" "CATEGORY_ID")
(lib/field "CATEGORIES" "ID")))
(lib/with-join-fields :all)))
(lib/fields [(lib/field "VENUES" "ID") (lib/field "CATEGORIES" "ID")])
(lib/append-stage)
(lib/orderable-columns))
```
Results in `Column :names must be distinct!`
|
1.0
|
[MLv2] [Bug] `orderable-columns` should work if query contains two columns with the same name from different tables - ```clj
(-> (lib/query-for-table-name meta/metadata-provider "VENUES")
(lib/join (-> (lib/join-clause
(meta/table-metadata :categories)
(lib/=
(lib/field "VENUES" "CATEGORY_ID")
(lib/field "CATEGORIES" "ID")))
(lib/with-join-fields :all)))
(lib/fields [(lib/field "VENUES" "ID") (lib/field "CATEGORIES" "ID")])
(lib/append-stage)
(lib/orderable-columns))
```
Results in `Column :names must be distinct!`
|
process
|
orderable columns should work if query contains two columns with the same name from different tables clj lib query for table name meta metadata provider venues lib join lib join clause meta table metadata categories lib lib field venues category id lib field categories id lib with join fields all lib fields lib append stage lib orderable columns results in column names must be distinct
| 1
|
18,515
| 24,551,720,990
|
IssuesEvent
|
2022-10-12 13:06:37
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Occasionally some activities are getting saved in the Resume status
|
Bug P0 iOS Process: Fixed Process: Tested dev
|
Occasionally some activities are getting saved in the Resume status even though the participant has completed the activities successfully
Refer to the below-mentioned study
Study id: newstudyon25_08
Study name: Imported iosupdate1(newstudyon25_08)

|
2.0
|
[iOS] Occasionally some activities are getting saved in the Resume status - Occasionally some activities are getting saved in the Resume status even though the participant has completed the activities successfully
Refer to the below-mentioned study
Study id: newstudyon25_08
Study name: Imported iosupdate1(newstudyon25_08)

|
process
|
occasionally some activities are getting saved in the resume status occasionally some activities are getting saved in the resume status even though the participant has completed the activities successfully refer to the below mentioned study study id study name imported
| 1
|
48,831
| 3,000,285,307
|
IssuesEvent
|
2015-07-24 00:06:11
|
opendatakit/opendatakit
|
https://api.github.com/repos/opendatakit/opendatakit
|
closed
|
ODK Survey: ENOENT (no such file or directory)
|
Priority-Medium Survey Type-Other
|
Originally reported on Google Code with ID 850
```
When trying to get form "Common Javascript Framework" (version 20140308) in ODK Survey
(version 2.0 Alpha rev 105) receive error message: "open failed: ENOENT (No such file
or directory)." Same error message received when trying to get "Example Form" (version
20130408). Behavior experienced on Asus Nexus 7 running Android 4.2.2.
```
Reported by `fitzed` on 2013-06-18 19:29:24
|
1.0
|
ODK Survey: ENOENT (no such file or directory) - Originally reported on Google Code with ID 850
```
When trying to get form "Common Javascript Framework" (version 20140308) in ODK Survey
(version 2.0 Alpha rev 105) receive error message: "open failed: ENOENT (No such file
or directory)." Same error message received when trying to get "Example Form" (version
20130408). Behavior experienced on Asus Nexus 7 running Android 4.2.2.
```
Reported by `fitzed` on 2013-06-18 19:29:24
|
non_process
|
odk survey enoent no such file or directory originally reported on google code with id when trying to get form common javascript framework version in odk survey version alpha rev receive error message open failed enoent no such file or directory same error message received when trying to get example form version behavior experienced on asus nexus running android reported by fitzed on
| 0
|
334,443
| 24,419,486,539
|
IssuesEvent
|
2022-10-05 18:57:20
|
transit-analytics-lab/spur
|
https://api.github.com/repos/transit-analytics-lab/spur
|
closed
|
Simple example with working data (Toronto's Sheppard line)
|
documentation
|
Package a set of required data files to simulate a basic 2-way subway schedule based on GTFS data.
|
1.0
|
Simple example with working data (Toronto's Sheppard line) - Package a set of required data files to simulate a basic 2-way subway schedule based on GTFS data.
|
non_process
|
simple example with working data toronto s sheppard line package a set of required data files to simulate a basic way subway schedule based on gtfs data
| 0
|
9,549
| 12,513,071,907
|
IssuesEvent
|
2020-06-03 00:46:47
|
nanoframework/Home
|
https://api.github.com/repos/nanoframework/Home
|
closed
|
Creating enum in global namespace causes MDP fail
|
Area: Metadata Processor Priority: Medium Type: Bug
|
### Details about Problem
**nanoFramework area:** Visual Studio extension
**VS version<!--(if relevant)-->:** 2017
**VS extension version<!--(if relevant)-->:** 2017.2.0.10
**Target<!--(if relevant)-->:**
**Firmware image version<!--(if relevant)-->:**
**Device capabilities output<!--(if relevant)-->:**
### Description
Creating enum in global namespace causes MDP fail
### Detailed repro steps so we can see the same problem
Try to compile this program
```cs
using System;
using System.Threading;
public enum SomeEnum
{
A, B, C
}
namespace App
{
public class Program
{
public static void Main()
{
Console.WriteLine("Hello world!");
Thread.Sleep(Timeout.Infinite);
}
}
}
```
console output:
```console
1>------ Build started: Project: App, Configuration: Debug Any CPU ------
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : Unable to compile output assembly file 'C:\Users\klata\source\repos\NFApp6\App\obj\Debug\App.pe' - check parse command results.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : Object reference not set to an instance of an object.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.Core.Extensions.TypeDefinitionExtensions.ToEnumDeclaration(TypeDefinition source)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.nanoTypeDefinitionTable.<>c.<.ctor>b__9_3(TypeDefinition et)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.nanoTypeDefinitionTable..ctor(IEnumerable`1 items, nanoTablesContext context)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.nanoTablesContext..ctor(AssemblyDefinition assemblyDefinition, List`1 explicitTypesOrder, List`1 classNamesToExclude, ICustomStringSorter stringSorter, Boolean applyAttributesCompression, Boolean verbose, Boolean isCoreLibrary)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetaDataProcessorTask.ExecuteCompile(String fileName)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetaDataProcessorTask.Execute()
1>Done building project "App.nfproj" -- FAILED.
```
...
### Other suggested things
<!-- if applicable/relevant -->
### Expected behaviour
Compile without errors
### Screenshot
<!-- if applicable/relevant -->
<!--Very helpful if you send along a few screenshots to help visualize the issue!-->
### Additional context
### Make an effort to fix the bug
|
1.0
|
Creating enum in global namespace causes MDP fail -
### Details about Problem
**nanoFramework area:** Visual Studio extension
**VS version<!--(if relevant)-->:** 2017
**VS extension version<!--(if relevant)-->:** 2017.2.0.10
**Target<!--(if relevant)-->:**
**Firmware image version<!--(if relevant)-->:**
**Device capabilities output<!--(if relevant)-->:**
### Description
Creating enum in global namespace causes MDP fail
### Detailed repro steps so we can see the same problem
Try to compile this program
```cs
using System;
using System.Threading;
public enum SomeEnum
{
A, B, C
}
namespace App
{
public class Program
{
public static void Main()
{
Console.WriteLine("Hello world!");
Thread.Sleep(Timeout.Infinite);
}
}
}
```
console output:
```console
1>------ Build started: Project: App, Configuration: Debug Any CPU ------
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : Unable to compile output assembly file 'C:\Users\klata\source\repos\NFApp6\App\obj\Debug\App.pe' - check parse command results.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : Object reference not set to an instance of an object.
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.Core.Extensions.TypeDefinitionExtensions.ToEnumDeclaration(TypeDefinition source)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.nanoTypeDefinitionTable.<>c.<.ctor>b__9_3(TypeDefinition et)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.nanoTypeDefinitionTable..ctor(IEnumerable`1 items, nanoTablesContext context)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetadataProcessor.nanoTablesContext..ctor(AssemblyDefinition assemblyDefinition, List`1 explicitTypesOrder, List`1 classNamesToExclude, ICustomStringSorter stringSorter, Boolean applyAttributesCompression, Boolean verbose, Boolean isCoreLibrary)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetaDataProcessorTask.ExecuteCompile(String fileName)
1>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets(223,5): error : at nanoFramework.Tools.MetaDataProcessorTask.Execute()
1>Done building project "App.nfproj" -- FAILED.
```
...
### Other suggested things
<!-- if applicable/relevant -->
### Expected behaviour
Compile without errors
### Screenshot
<!-- if applicable/relevant -->
<!--Very helpful if you send along a few screenshots to help visualize the issue!-->
### Additional context
### Make an effort to fix the bug
|
process
|
creating enum in global namespace causes mdp fail details about problem nanoframework area visual studio extension vs version vs extension version target firmware image version device capabilities output description creating enum in global namespace causes mdp fail detailed repro steps so we can see the same problem try to compile this program cs using system using system threading public enum someenum a b c namespace app public class program public static void main console writeline hello world thread sleep timeout infinite console output console build started project app configuration debug any cpu c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error unable to compile output assembly file c users klata source repos app obj debug app pe check parse command results c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error object reference not set to an instance of an object c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at nanoframework tools metadataprocessor core extensions typedefinitionextensions toenumdeclaration typedefinition source c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at nanoframework tools metadataprocessor nanotypedefinitiontable c b typedefinition et c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at system linq enumerable whereselectlistiterator movenext c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at system collections generic list ctor ienumerable collection c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at system linq enumerable tolist ienumerable source c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at nanoframework tools metadataprocessor nanotypedefinitiontable ctor ienumerable items nanotablescontext context c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at nanoframework tools metadataprocessor nanotablescontext ctor assemblydefinition assemblydefinition list explicittypesorder list classnamestoexclude icustomstringsorter stringsorter boolean applyattributescompression boolean verbose boolean iscorelibrary c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at nanoframework tools metadataprocessortask executecompile string filename c program files microsoft visual studio professional msbuild nanoframework nfprojectsystem mdp targets error at nanoframework tools metadataprocessortask execute done building project app nfproj failed other suggested things expected behaviour compile without errors screenshot additional context make an effort to fix the bug
| 1
|
58,711
| 24,534,291,495
|
IssuesEvent
|
2022-10-11 19:13:45
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
azurerm >= 3.15.0: Error: parsing "/subscriptions/…/resourceGroups/…/providers/Microsoft.ServiceBus/namespaces/…/AuthorizationRules/…": parsing segment "staticAuthorizationRules": expected the segment "AuthorizationRules" to be "authorizationRules"
|
bug regression service/service-bus
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.2.5
### AzureRM Provider Version
3.15.0
### Affected Resource(s)/Data Source(s)
azurerm_servicebus_namespace_authorization_rule
### Terraform Configuration Files
```hcl
resource "azurerm_servicebus_namespace_authorization_rule" "…" {
name = local.name
namespace_id = azurerm_servicebus_namespace.….id
listen = true
send = true
}
```
### Debug Output/Panic Output
```shell
Error: parsing "/subscriptions/…/resourceGroups/…/providers/Microsoft.ServiceBus/namespaces/…/AuthorizationRules/…": parsing segment "staticAuthorizationRules": expected the segment "AuthorizationRules" to be "authorizationRules"
with module.service_bus[0].azurerm_servicebus_namespace_authorization_rule.…,
on modules/service_bus/service_bus.tf line 13, in resource "azurerm_servicebus_namespace_authorization_rule" "…":
13: resource "azurerm_servicebus_namespace_authorization_rule" "…" {
```
### Expected Behaviour
No error. Same code works fine with AzureRM provider 3.14.0.
### Actual Behaviour
Doesn't work.
### Steps to Reproduce
…
### Important Factoids
_No response_
### References
_No response_
|
2.0
|
azurerm >= 3.15.0: Error: parsing "/subscriptions/…/resourceGroups/…/providers/Microsoft.ServiceBus/namespaces/…/AuthorizationRules/…": parsing segment "staticAuthorizationRules": expected the segment "AuthorizationRules" to be "authorizationRules" - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.2.5
### AzureRM Provider Version
3.15.0
### Affected Resource(s)/Data Source(s)
azurerm_servicebus_namespace_authorization_rule
### Terraform Configuration Files
```hcl
resource "azurerm_servicebus_namespace_authorization_rule" "…" {
name = local.name
namespace_id = azurerm_servicebus_namespace.….id
listen = true
send = true
}
```
### Debug Output/Panic Output
```shell
Error: parsing "/subscriptions/…/resourceGroups/…/providers/Microsoft.ServiceBus/namespaces/…/AuthorizationRules/…": parsing segment "staticAuthorizationRules": expected the segment "AuthorizationRules" to be "authorizationRules"
with module.service_bus[0].azurerm_servicebus_namespace_authorization_rule.…,
on modules/service_bus/service_bus.tf line 13, in resource "azurerm_servicebus_namespace_authorization_rule" "…":
13: resource "azurerm_servicebus_namespace_authorization_rule" "…" {
```
### Expected Behaviour
No error. Same code works fine with AzureRM provider 3.14.0.
### Actual Behaviour
Doesn't work.
### Steps to Reproduce
…
### Important Factoids
_No response_
### References
_No response_
|
non_process
|
azurerm error parsing subscriptions … resourcegroups … providers microsoft servicebus namespaces … authorizationrules … parsing segment staticauthorizationrules expected the segment authorizationrules to be authorizationrules is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version azurerm provider version affected resource s data source s azurerm servicebus namespace authorization rule terraform configuration files hcl resource azurerm servicebus namespace authorization rule … name local name namespace id azurerm servicebus namespace … id listen true send true debug output panic output shell error parsing subscriptions … resourcegroups … providers microsoft servicebus namespaces … authorizationrules … parsing segment staticauthorizationrules expected the segment authorizationrules to be authorizationrules with module service bus azurerm servicebus namespace authorization rule … on modules service bus service bus tf line in resource azurerm servicebus namespace authorization rule … resource azurerm servicebus namespace authorization rule … expected behaviour no error same code works fine with azurerm provider actual behaviour doesn t work steps to reproduce … important factoids no response references no response
| 0
|
18,054
| 24,066,712,971
|
IssuesEvent
|
2022-09-17 15:46:36
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Docs CI test is broken with latest nightly
|
bug development-process
|
**Describe the bug**
The docs CI test is failing on master
**To Reproduce**
https://github.com/apache/arrow-rs/actions/runs/3062071013/jobs/4942618618
```
error: internal compiler error: no errors encountered even though `delay_span_bug` issued
error: internal compiler error: broken MIR in DefId(
...
```
**Expected behavior**
Tests should pass
**Additional context**
@tustvold notes in https://github.com/apache/arrow-rs/pull/2693#issuecomment-1248120845 that this is due to https://github.com/rust-lang/rust/issues/101844
|
1.0
|
Docs CI test is broken with latest nightly - **Describe the bug**
The docs CI test is failing on master
**To Reproduce**
https://github.com/apache/arrow-rs/actions/runs/3062071013/jobs/4942618618
```
error: internal compiler error: no errors encountered even though `delay_span_bug` issued
error: internal compiler error: broken MIR in DefId(
...
```
**Expected behavior**
Tests should pass
**Additional context**
@tustvold notes in https://github.com/apache/arrow-rs/pull/2693#issuecomment-1248120845 that this is due to https://github.com/rust-lang/rust/issues/101844
|
process
|
docs ci test is broken with latest nightly describe the bug the docs ci test is failing on master to reproduce error internal compiler error no errors encountered even though delay span bug issued error internal compiler error broken mir in defid expected behavior tests should pass additional context tustvold notes in that this is due to
| 1
|
540,877
| 15,818,827,846
|
IssuesEvent
|
2021-04-05 16:35:40
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio-ui] Don't reload if there is no change in ICE
|
enhancement priority: medium
|
Turning ICE on/off causes the guest page to reload. Only reload if there has been a change to the content during the ICE session.
|
1.0
|
[studio-ui] Don't reload if there is no change in ICE - Turning ICE on/off causes the guest page to reload. Only reload if there has been a change to the content during the ICE session.
|
non_process
|
don t reload if there is no change in ice turning ice on off causes the guest page to reload only reload if there has been a change to the content during the ice session
| 0
|
6,980
| 10,131,444,050
|
IssuesEvent
|
2019-08-01 19:33:24
|
qri-io/desktop
|
https://api.github.com/repos/qri-io/desktop
|
closed
|
[3] Qri Desktop should launch Qri backend
|
main process
|
Migrate code from Qri frontend to launch the Qri backend when desktop is launched and shutdown on quit
|
1.0
|
[3] Qri Desktop should launch Qri backend - Migrate code from Qri frontend to launch the Qri backend when desktop is launched and shutdown on quit
|
process
|
qri desktop should launch qri backend migrate code from qri frontend to launch the qri backend when desktop is launched and shutdown on quit
| 1
|
11,898
| 14,689,863,470
|
IssuesEvent
|
2021-01-02 12:17:50
|
MrPeterJin/MrPeterJin.github.io
|
https://api.github.com/repos/MrPeterJin/MrPeterJin.github.io
|
closed
|
Organizing the Mathematical Principles of Image Processing under Traditional Methods (1) - A Certain P's Memo
|
/post/conventional-imaging-process-review-1/ Gitalk
|
https://www.601b.codes/post/conventional-imaging-process-review-1/
Cover image credit: From Digital Image Processing 4E, Global Edition
As is well known, in the field of image processing, before neural networks were applied at scale, various mathematical principles held up half of this field (although mathematics is still very important today). This...
|
1.0
|
Organizing the Mathematical Principles of Image Processing under Traditional Methods (1) - A Certain P's Memo - https://www.601b.codes/post/conventional-imaging-process-review-1/
Cover image credit: From Digital Image Processing 4E, Global Edition
As is well known, in the field of image processing, before neural networks were applied at scale, various mathematical principles held up half of this field (although mathematics is still very important today). This...
|
process
|
organizing the mathematical principles of image processing under traditional methods a certain p s memo cover image credit from digital image processing global edition as is well known in the field of image processing before neural networks were applied at scale various mathematical principles held up half of this field although mathematics is still very important today this
| 1
|
422,014
| 28,369,699,839
|
IssuesEvent
|
2023-04-12 16:03:47
|
Analog-Devices-MSDK/msdk
|
https://api.github.com/repos/Analog-Devices-MSDK/msdk
|
closed
|
MXC_SYS Missing EXT_CLK Support
|
bug documentation
|
Hi, is there a bug in the MXC_SYS implementation for the MAX32670?
In the [UG](https://www.analog.com/media/en/technical-documentation/user-guides/max32670max32671-user-guide.pdf), the clock tree (page 38) and GCR_CLKCTRL register description (page 66-67) both document an external clock as an available system clock option.
However, the implementation in sys_me15.c has support for EXTCLK removed:
In MXC_SYS_ClockSourceEnable (returns an error)
```C
case MXC_SYS_CLOCK_EXTCLK:
// MXC_GCR->clkctrl |= MXC_F_GCR_CLKCTRL_EXTCLK_EN;
// return MXC_SYS_Clock_Timeout(MXC_F_GCR_CLKCTRL_EXTCLK_RDY);
return E_NOT_SUPPORTED;
break;
```
In MXC_SYS_ClockSourceDisable (silent failure)
```C
case MXC_SYS_CLOCK_EXTCLK:
// MXC_GCR->clkctrl &= ~MXC_F_GCR_CLKCTRL_EXTCLK_EN;
break;
```
In MXC_SYS_ClockSelect (silent failure)
```C
case MXC_SYS_CLOCK_EXTCLK:
// Enable HIRC clock
// if(!(MXC_GCR->clkctrl & MXC_F_GCR_CLKCTRL_EXTCLK_EN)) {
// MXC_GCR->clkctrl |=MXC_F_GCR_CLKCTRL_EXTCLK_EN;
// // Check if HIRC clock is ready
// if (MXC_SYS_Clock_Timeout(MXC_F_GCR_CLKCTRL_EXTCLK_RDY) != E_NO_ERROR) {
// return E_TIME_OUT;
// }
// }
// Set HIRC clock as System Clock
// MXC_SETFIELD(MXC_GCR->clkctrl, MXC_F_GCR_CLKCTRL_SYSCLK_SEL, MXC_S_GCR_CLKCTRL_SYSCLK_SEL_EXTCLK);
break;
```
I will be populating the EXT_CLK footprint on my EV kit to test this disabled code.
(Also, MXC_SYS does not appear in the [peripheral documentation](https://analog-devices-msdk.github.io/msdk/Libraries/PeriphDrivers/Documentation/MAX32670/modules.html).)
|
1.0
|
MXC_SYS Missing EXT_CLK Support - Hi, is there a bug in the MXC_SYS implementation for the MAX32670?
In the [UG](https://www.analog.com/media/en/technical-documentation/user-guides/max32670max32671-user-guide.pdf), the clock tree (page 38) and GCR_CLKCTRL register description (page 66-67) both document an external clock as an available system clock option.
However, the implementation in sys_me15.c has support for EXTCLK removed:
In MXC_SYS_ClockSourceEnable (returns an error)
```C
case MXC_SYS_CLOCK_EXTCLK:
// MXC_GCR->clkctrl |= MXC_F_GCR_CLKCTRL_EXTCLK_EN;
// return MXC_SYS_Clock_Timeout(MXC_F_GCR_CLKCTRL_EXTCLK_RDY);
return E_NOT_SUPPORTED;
break;
```
In MXC_SYS_ClockSourceDisable (silent failure)
```C
case MXC_SYS_CLOCK_EXTCLK:
// MXC_GCR->clkctrl &= ~MXC_F_GCR_CLKCTRL_EXTCLK_EN;
break;
```
In MXC_SYS_ClockSelect (silent failure)
```C
case MXC_SYS_CLOCK_EXTCLK:
// Enable HIRC clock
// if(!(MXC_GCR->clkctrl & MXC_F_GCR_CLKCTRL_EXTCLK_EN)) {
// MXC_GCR->clkctrl |=MXC_F_GCR_CLKCTRL_EXTCLK_EN;
// // Check if HIRC clock is ready
// if (MXC_SYS_Clock_Timeout(MXC_F_GCR_CLKCTRL_EXTCLK_RDY) != E_NO_ERROR) {
// return E_TIME_OUT;
// }
// }
// Set HIRC clock as System Clock
// MXC_SETFIELD(MXC_GCR->clkctrl, MXC_F_GCR_CLKCTRL_SYSCLK_SEL, MXC_S_GCR_CLKCTRL_SYSCLK_SEL_EXTCLK);
break;
```
I will be populating the EXT_CLK footprint on my EV kit to test this disabled code.
(Also, MXC_SYS does not appear in the [peripheral documentation](https://analog-devices-msdk.github.io/msdk/Libraries/PeriphDrivers/Documentation/MAX32670/modules.html).)
|
non_process
|
mxc sys missing ext clk support hi is there a bug in the mxc sys implementation for the in the the clock tree page and gcr clkctrl register description page both document an external clock as an available system clock option however the implementation in sys c has support for extclk removed in mxc sys clocksourceenable returns an error c case mxc sys clock extclk mxc gcr clkctrl mxc f gcr clkctrl extclk en return mxc sys clock timeout mxc f gcr clkctrl extclk rdy return e not supported break in mxc sys clocksourcedisable silent failure c case mxc sys clock extclk mxc gcr clkctrl mxc f gcr clkctrl extclk en break in mxc sys clockselect silent failure c case mxc sys clock extclk enable hirc clock if mxc gcr clkctrl mxc f gcr clkctrl extclk en mxc gcr clkctrl mxc f gcr clkctrl extclk en check if hirc clock is ready if mxc sys clock timeout mxc f gcr clkctrl extclk rdy e no error return e time out set hirc clock as system clock mxc setfield mxc gcr clkctrl mxc f gcr clkctrl sysclk sel mxc s gcr clkctrl sysclk sel extclk break i will be populating the ext clk footprint on my ev kit to test this disabled code also mxc sys does not appear in the
| 0
|
45,377
| 7,179,986,871
|
IssuesEvent
|
2018-01-31 21:35:01
|
brunobuzzi/BpmFlow
|
https://api.github.com/repos/brunobuzzi/BpmFlow
|
opened
|
Check search procedure in WAOrbeonProcessBrowser
|
critical bug critical enhancement documentation frontoffice
|
Checks:
* When searching by app and process name --> only search in the current user's assignments.
* When searching by field value --> search in all processes.
* When searching by process id --> search in the whole matrix.
The criteria must be unified, or add a component to select the boundary. (user/system/???)
Other check:
When doing any search, if the current user sees aBpmTaskAssignment of another user --> be sure that they can NOT perform any action on it.
|
1.0
|
Check search procedure in WAOrbeonProcessBrowser - Checks:
* When searching by app and process name --> only search in the current user's assignments.
* When searching by field value --> search in all processes.
* When searching by process id --> search in the whole matrix.
The criteria must be unified, or add a component to select the boundary. (user/system/???)
Other check:
When doing any search, if the current user sees aBpmTaskAssignment of another user --> be sure that they can NOT perform any action on it.
|
non_process
|
check search procedure in waorbeonprocessbrowser checks when searching by app and process name only search in the current user s assignments when searching by field value search in all processes when searching by process id search in the whole matrix the criteria must be unified or add a component to select the boundary user system other check when doing any search if the current user sees abpmtaskassignment of another user be sure that they can not perform any action on it
| 0
|
160,746
| 25,225,378,302
|
IssuesEvent
|
2022-11-14 15:37:06
|
DeveloperAcademy-POSTECH/MacC-Team-Spacer
|
https://api.github.com/repos/DeveloperAcademy-POSTECH/MacC-Team-Spacer
|
closed
|
[FEAT] SearchListView에서 cell 선택 시 CafeDetailView로 연결
|
🐯 오션 🎨 Design ⭐️ Feature
|
# ISSUE
## 종류
ISSUE 종류를 선택하세요
- [ ] Code Review
- [x] New Feature
- [ ] Bug Fix
- [ ] Setup
## 제목
- SearchListView에서 cell 선택 시 CafeDetailView로 연결
## 내용
- SearchListView에서 cell 선택 시 CafeDetailView로 navigation으로 연결됨
## 체크리스트
- [x] SearchListView에서 cell 선택 시 CafeDetailView로 navigation으로 연결됨
|
1.0
|
[FEAT] SearchListView에서 cell 선택 시 CafeDetailView로 연결 - # ISSUE
## 종류
ISSUE 종류를 선택하세요
- [ ] Code Review
- [x] New Feature
- [ ] Bug Fix
- [ ] Setup
## 제목
- SearchListView에서 cell 선택 시 CafeDetailView로 연결
## 내용
- SearchListView에서 cell 선택 시 CafeDetailView로 navigation으로 연결됨
## 체크리스트
- [x] SearchListView에서 cell 선택 시 CafeDetailView로 navigation으로 연결됨
|
non_process
|
searchlistview에서 cell 선택 시 cafedetailview로 연결 issue 종류 issue 종류를 선택하세요 code review new feature bug fix setup 제목 searchlistview에서 cell 선택 시 cafedetailview로 연결 내용 searchlistview에서 cell 선택 시 cafedetailview로 navigation으로 연결됨 체크리스트 searchlistview에서 cell 선택 시 cafedetailview로 navigation으로 연결됨
| 0
|
3,412
| 6,523,907,534
|
IssuesEvent
|
2017-08-29 10:30:13
|
w3c/w3process
|
https://api.github.com/repos/w3c/w3process
|
closed
|
Consistency issue: is it 28 days or four weeks?
|
Editorial improvements Process2018Candidate
|
The time limits for candidate recommendations is set to four weeks, as in:
> must specify the deadline for further comments, which *MUST* be **at least** four weeks after publication,
In section 6.4.1. Just a few lines below (in 6.5), it says:
> deadline for Advisory Committee review, which _MUST_ be **at least** 28 days after the publication of the Proposed Recommendation...
Can we try to be consistent and speak either of four weeks or of 28 days? Actually, neither of the two are optimal, if there are, say, Xmas vacations in between, so I believe the ideal would be to say “**at least** 20 business days”. (This, I believe, is actually the current practice used by the Working Groups.)
|
1.0
|
Consistency issue: is it 28 days or four weeks? - The time limits for candidate recommendations is set to four weeks, as in:
> must specify the deadline for further comments, which *MUST* be **at least** four weeks after publication,
In section 6.4.1. Just a few lines below (in 6.5), it says:
> deadline for Advisory Committee review, which _MUST_ be **at least** 28 days after the publication of the Proposed Recommendation...
Can we try to be consistent and speak either of four weeks or of 28 days? Actually, neither of the two are optimal, if there are, say, Xmas vacations in between, so I believe the ideal would be to say “**at least** 20 business days”. (This, I believe, is actually the current practice used by the Working Groups.)
|
process
|
consistency issue is it days or four weeks the time limits for candidate recommendations is set to four weeks as in must specify the deadline for further comments which must be at least four weeks after publication in section just a few lines below in it says deadline for advisory committee review which must be at least days after the publication of the proposed recommendation can we try to be consistent and speak either of four weeks or of days actually neither of the two are optimal if there are say xmas vacations in between so i believe the ideal would be to say “ at least business days” this i believe is actually the current practice used by the working groups
| 1
|
5,871
| 8,691,574,393
|
IssuesEvent
|
2018-12-04 01:58:55
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
doc: process.stdout.fd is mentioned but undocumented
|
doc process question
|
`process.stdout.fd` is mentioned in these 2 sections:
https://nodejs.org/api/async_hooks.html#async_hooks_printing_in_asynchooks_callbacks
https://nodejs.org/api/async_hooks.html#async_hooks_asynchronous_context_example
But it is not documented in [`process.stdout`](https://nodejs.org/api/process.html#process_process_stdout) or in a section of any mentioned prototype (`net.Socket`, Duplex or Writable stream).
Should it be documented with `process.stdin.fd` and `process.stderr.fd`?
|
1.0
|
doc: process.stdout.fd is mentioned but undocumented - `process.stdout.fd` is mentioned in these 2 sections:
https://nodejs.org/api/async_hooks.html#async_hooks_printing_in_asynchooks_callbacks
https://nodejs.org/api/async_hooks.html#async_hooks_asynchronous_context_example
But it is not documented in [`process.stdout`](https://nodejs.org/api/process.html#process_process_stdout) or in a section of any mentioned prototype (`net.Socket`, Duplex or Writable stream).
Should it be documented with `process.stdin.fd` and `process.stderr.fd`?
|
process
|
doc process stdout fd is mentioned but undocumented process stdout fd is mentioned in these sections but it is not documented in or in a section of any mentioned prototype net socket duplex or writable stream should it be documented with process stdin fd and process stderr fd
| 1
|
14,639
| 17,770,737,345
|
IssuesEvent
|
2021-08-30 13:23:13
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
reopened
|
Dependency Dashboard
|
api: bigquery type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/all -->[chore(deps): update all dependencies](../pull/914) (`google-api-core`, `google-auth`, `google-cloud-bigquery`, `google-resumable-media`, `importlib-metadata`, `pyproj`, `typing-extensions`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/all -->[chore(deps): update all dependencies](../pull/914) (`google-api-core`, `google-auth`, `google-cloud-bigquery`, `google-resumable-media`, `importlib-metadata`, `pyproj`, `typing-extensions`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull google api core google auth google cloud bigquery google resumable media importlib metadata pyproj typing extensions check this box to trigger a request for renovate to run again on this repository
| 1
|
21,135
| 28,106,563,076
|
IssuesEvent
|
2023-03-31 01:32:42
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Remove --genrule_strategy
|
P3 type: process team-Local-Exec stale
|
I'm planning to deprecate and remove the --genrule_strategy flag in early 2019.
As of e6263c3b0b9a467d315b91eac14162cf7915dcfd, it is no longer necessary to set --genrule_strategy in addition to --spawn_strategy if both are set to the same value. If --genrule_strategy is not set, it now defaults to --spawn_strategy. This makes --genrule_strategy superfluous - it can already be replaced with --strategy=Genrule=<value>, and now the existence of the flag no longer implies a default strategy setting for genrules.
|
1.0
|
Remove --genrule_strategy - I'm planning to deprecate and remove the --genrule_strategy flag in early 2019.
As of e6263c3b0b9a467d315b91eac14162cf7915dcfd, it is no longer necessary to set --genrule_strategy in addition to --spawn_strategy if both are set to the same value. If --genrule_strategy is not set, it now defaults to --spawn_strategy. This makes --genrule_strategy superfluous - it can already be replaced with --strategy=Genrule=<value>, and now the existence of the flag no longer implies a default strategy setting for genrules.
|
process
|
remove genrule strategy i m planning to deprecate and remove the genrule strategy flag in early as of it is no longer necessary to set genrule strategy in addition to spawn strategy if both are set to the same value if genrule strategy is not set it now defaults to spawn strategy this makes genrule strategy superfluous it can already be replaced with strategy genrule lt value gt and now the existence of the flag no longer implies a default strategy setting for genrules
| 1
|
47,707
| 13,248,507,680
|
IssuesEvent
|
2020-08-19 19:05:51
|
kenferrara/cbp-theme
|
https://api.github.com/repos/kenferrara/cbp-theme
|
opened
|
WS-2020-0091 (High) detected in http-proxy-1.15.2.tgz
|
security vulnerability
|
## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.15.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/cbp-theme/cbp-theme/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/cbp-theme/cbp-theme/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- :x: **http-proxy-1.15.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kenferrara/cbp-theme/commit/00f1482f5efa0120a277f069fffcee0de8e6adec">00f1482f5efa0120a277f069fffcee0de8e6adec</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"http-proxy","packageVersion":"1.15.2","isTransitiveDependency":true,"dependencyTree":"browser-sync:2.26.3;http-proxy:1.15.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"http-proxy - 1.18.1 "}],"vulnerabilityIdentifier":"WS-2020-0091","vulnerabilityDetails":"Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.","vulnerabilityUrl":"https://github.com/http-party/node-http-proxy/pull/1447","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2020-0091 (High) detected in http-proxy-1.15.2.tgz - ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.15.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/cbp-theme/cbp-theme/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/cbp-theme/cbp-theme/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- :x: **http-proxy-1.15.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kenferrara/cbp-theme/commit/00f1482f5efa0120a277f069fffcee0de8e6adec">00f1482f5efa0120a277f069fffcee0de8e6adec</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"http-proxy","packageVersion":"1.15.2","isTransitiveDependency":true,"dependencyTree":"browser-sync:2.26.3;http-proxy:1.15.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"http-proxy - 1.18.1 "}],"vulnerabilityIdentifier":"WS-2020-0091","vulnerabilityDetails":"Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.","vulnerabilityUrl":"https://github.com/http-party/node-http-proxy/pull/1447","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file tmp ws scm cbp theme cbp theme package json path to vulnerable library tmp ws scm cbp theme cbp theme node modules http proxy package json dependency hierarchy browser sync tgz root library x http proxy tgz vulnerable library found in head commit a href vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function vulnerabilityurl
| 0
|
115,042
| 9,779,417,344
|
IssuesEvent
|
2019-06-07 14:28:04
|
SatelliteQE/robottelo
|
https://api.github.com/repos/SatelliteQE/robottelo
|
closed
|
UI - ActiveDirectoryUserGroupTestCase failing due to missed cherry-pick
|
6.2 High Priority UI test-failure
|
we're missing this fix: https://github.com/SatelliteQE/robottelo/pull/5116
also, there's one other issue with locating the 'search' bar on the hostgroups page - if we're looking for a HG and there are no Hostgorups, the search bar is ~~empty~~ hidden and we fail to locate it.
- also, we do some unnecessary searches in the test `tearDown`:
```python
def tearDown(self):
with Session(self) as session:
set_context(session, org=ANY_CONTEXT['org'])
if self.user.search(self.ldap_user_name):
self.user.delete(self.ldap_user_name)
if self.usergroup.search(self.usergroup_name): # <= first search
self.usergroup.delete(self.usergroup_name) # <= 2nd search, delete and 3rd search
super(ActiveDirectoryUserGroupTestCase, self).tearDown()
```
the first search is completely unnecessary as it is implemented in the delete call.
the 3rd one is alright, however it is implemented in a way that it goes through all the navigating again.
Let's use api to do the teardown, there is no point to waste time deleting stuff using ui.
|
1.0
|
UI - ActiveDirectoryUserGroupTestCase failing due to missed cherry-pick - we're missing this fix: https://github.com/SatelliteQE/robottelo/pull/5116
also, there's one other issue with locating the 'search' bar on the hostgroups page - if we're looking for a HG and there are no Hostgorups, the search bar is ~~empty~~ hidden and we fail to locate it.
- also, we do some unnecessary searches in the test `tearDown`:
```python
def tearDown(self):
with Session(self) as session:
set_context(session, org=ANY_CONTEXT['org'])
if self.user.search(self.ldap_user_name):
self.user.delete(self.ldap_user_name)
if self.usergroup.search(self.usergroup_name): # <= first search
self.usergroup.delete(self.usergroup_name) # <= 2nd search, delete and 3rd search
super(ActiveDirectoryUserGroupTestCase, self).tearDown()
```
the first search is completely unnecessary as it is implemented in the delete call.
the 3rd one is alright, however it is implemented in a way that it goes through all the navigating again.
Let's use api to do the teardown, there is no point to waste time deleting stuff using ui.
|
non_process
|
ui activedirectoryusergrouptestcase failing due to missed cherry pick we re missing this fix also there s one other issue with locating the search bar on the hostgroups page if we re looking for a hg and there are no hostgorups the search bar is empty hidden and we fail to locate it also we do some unnecessary searches in the test teardown python def teardown self with session self as session set context session org any context if self user search self ldap user name self user delete self ldap user name if self usergroup search self usergroup name first search self usergroup delete self usergroup name search delete and search super activedirectoryusergrouptestcase self teardown the first search is completely unnecessary as it is implemented in the delete call the one is alright however it is implemented in a way that it goes through all the navigating again let s use api to do the teardown there is no point to waste time deleting stuff using ui
| 0
|
9,351
| 12,365,586,412
|
IssuesEvent
|
2020-05-18 09:04:07
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
Migration for garden mutation
|
Angels Bio Processing Bug
|
For garden mutation, we could research the new tech and unlocking the mutations as well, it should be no issue as this is only early game, hence why I didn't do it in #223, however, when playing with tech overhaul, having to build an old tech tier for that might be tedious.
|
1.0
|
Migration for garden mutation - For garden mutation, we could research the new tech and unlocking the mutations as well, it should be no issue as this is only early game, hence why I didn't do it in #223, however, when playing with tech overhaul, having to build an old tech tier for that might be tedious.
|
process
|
migration for garden mutation for garden mutation we could research the new tech and unlocking the mutations as well it should be no issue as this is only early game hence why i didn t do it in however when playing with tech overhaul having to build an old tech tier for that might be tedious
| 1
|
768,750
| 26,978,853,853
|
IssuesEvent
|
2023-02-09 11:34:55
|
openghg/openghg
|
https://api.github.com/repos/openghg/openghg
|
opened
|
Update compression for current object store - move to Zarr
|
question low-priority
|
This might be low priority but if we have a large object store with the current NetCDF write system, could we include the functionality to read the current data, compress it and replace the existing NetCDFs with Zarr stores?
|
1.0
|
Update compression for current object store - move to Zarr - This might be low priority but if we have a large object store with the current NetCDF write system, could we include the functionality to read the current data, compress it and replace the existing NetCDFs with Zarr stores?
|
non_process
|
update compression for current object store move to zarr this might be low priority but if we have a large object store with the current netcdf write system could we include the functionality to read the current data compress it and replace the existing netcdfs with zarr stores
| 0
|
16,389
| 21,155,678,581
|
IssuesEvent
|
2022-04-07 02:46:35
|
yupix/Mi.py
|
https://api.github.com/repos/yupix/Mi.py
|
closed
|
potential future feature?
|
kind/feature Medium process/candidate priority/medium
|
I'd love for the ability to provide images via local directory and have the bot upload this image along with the text.
or perhaps have an object that can contain both image and text to be sent and then i can manually load the image into the object before sending it.
I'd attempt this myself if i was perhaps a little more familiar with the code base and misskey's api but i just begun tinkering with it.
|
1.0
|
potential future feature? -
I'd love for the ability to provide images via local directory and have the bot upload this image along with the text.
or perhaps have an object that can contain both image and text to be sent and then i can manually load the image into the object before sending it.
I'd attempt this myself if i was perhaps a little more familiar with the code base and misskey's api but i just begun tinkering with it.
|
process
|
potential future feature i d love for the ability to provide images via local directory and have the bot upload this image along with the text or perhaps have an object that can contain both image and text to be sent and then i can manually load the image into the object before sending it i d attempt this myself if i was perhaps a little more familiar with the code base and misskey s api but i just begun tinkering with it
| 1
|
807,466
| 30,004,463,253
|
IssuesEvent
|
2023-06-26 11:30:49
|
War-Brokers/War-Brokers
|
https://api.github.com/repos/War-Brokers/War-Brokers
|
opened
|
Add vehicle speedometer
|
priority:3 - low type:suggestion area:UI/UX
|
> What if both air and land vehicles had a speedometer. both in km and miles and that it was in a corner and if you want you can remove it.
- [Original Report](https://discordapp.com/channels/324984733102768128/393643849785802753/722484918256402432) - War Brokers Discord Server
|
1.0
|
Add vehicle speedometer - > What if both air and land vehicles had a speedometer. both in km and miles and that it was in a corner and if you want you can remove it.
- [Original Report](https://discordapp.com/channels/324984733102768128/393643849785802753/722484918256402432) - War Brokers Discord Server
|
non_process
|
add vehicle speedometer what if both air and land vehicles had a speedometer both in km and miles and that it was in a corner and if you want you can remove it war brokers discord server
| 0
|
11,497
| 14,370,282,555
|
IssuesEvent
|
2020-12-01 10:53:24
|
syncfusion/ej2-javascript-ui-controls
|
https://api.github.com/repos/syncfusion/ej2-javascript-ui-controls
|
closed
|
simultaneous saveAsBlob('Docx') trigger an error
|
word-processor
|
I noticed when I have 2 `saveAsBlob('Docx')` happening almost simultaneously, the first one fails with the following error :
```
Error: Uncaught (in promise): TypeError: Cannot read property 'length' of undefined
TypeError: Cannot read property 'length' of undefined
at ZipArchive.push../node_modules/@syncfusion/ej2-compression/src/zip-archive.js.ZipArchive.writeHeader (zip-archive.js:264)
at ZipArchive.push../node_modules/@syncfusion/ej2-compression/src/zip-archive.js.ZipArchive.constructZippedObject (zip-archive.js:249)
```
Is it a known problem ?
|
1.0
|
simultaneous saveAsBlob('Docx') trigger an error - I noticed when I have 2 `saveAsBlob('Docx')` happening almost simultaneously, the first one fails with the following error :
```
Error: Uncaught (in promise): TypeError: Cannot read property 'length' of undefined
TypeError: Cannot read property 'length' of undefined
at ZipArchive.push../node_modules/@syncfusion/ej2-compression/src/zip-archive.js.ZipArchive.writeHeader (zip-archive.js:264)
at ZipArchive.push../node_modules/@syncfusion/ej2-compression/src/zip-archive.js.ZipArchive.constructZippedObject (zip-archive.js:249)
```
Is it a known problem ?
|
process
|
simultaneous saveasblob docx trigger an error i noticed when i have saveasblob docx happening almost simultaneously the first one fails with the following error error uncaught in promise typeerror cannot read property length of undefined typeerror cannot read property length of undefined at ziparchive push node modules syncfusion compression src zip archive js ziparchive writeheader zip archive js at ziparchive push node modules syncfusion compression src zip archive js ziparchive constructzippedobject zip archive js is it a known problem
| 1
|
11,727
| 14,567,536,680
|
IssuesEvent
|
2020-12-17 10:23:05
|
MarcElrick/level-4-individual-project
|
https://api.github.com/repos/MarcElrick/level-4-individual-project
|
closed
|
Tie loading bar to progress of data analysis
|
data processing gui
|
Not entirely sure how we can know how long processing will take yet
|
1.0
|
Tie loading bar to progress of data analysis - Not entirely sure how we can know how long processing will take yet
|
process
|
tie loading bar to progress of data analysis not entirely sure how we can know how long processing will take yet
| 1
|
3,196
| 6,261,684,731
|
IssuesEvent
|
2017-07-15 02:15:36
|
gaocegege/Processing.R
|
https://api.github.com/repos/gaocegege/Processing.R
|
closed
|
howto.md: Document the new way
|
community/processing difficulty/low for-new-contributors priority/p0 size/medium status/to-be-claimed type/enhancement
|
Now we could download the mode from release page, so there is no need to build the mode from source code.
My suggestion is to split two files, one is for users, other is for developers. We could recommend users to download the mode from release page. And move the instructions about building from source code to developer's manual.
|
1.0
|
howto.md: Document the new way - Now we could download the mode from release page, so there is no need to build the mode from source code.
My suggestion is to split two files, one is for users, other is for developers. We could recommend users to download the mode from release page. And move the instructions about building from source code to developer's manual.
|
process
|
howto md document the new way now we could download the mode from release page so there is no need to build the mode from source code my suggestion is to split two files one is for users other is for developers we could recommend users to download the mode from release page and move the instructions about building from source code to developer s manual
| 1
|
12,337
| 14,882,738,426
|
IssuesEvent
|
2021-01-20 12:20:36
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile] [Dev] Unable to rejoin into Open study once withdrawn
|
Blocker Bug P0 Process: Dev Process: Fixed Process: Tested dev Unknown backend
|
Unable to rejoin into Open study once withdrawn for Android and iOS

|
3.0
|
[Mobile] [Dev] Unable to rejoin into Open study once withdrawn - Unable to rejoin into Open study once withdrawn for Android and iOS

|
process
|
unable to rejoin into open study once withdrawn unable to rejoin into open study once withdrawn for android and ios
| 1
|
8,326
| 11,490,029,689
|
IssuesEvent
|
2020-02-11 16:25:19
|
xatkit-bot-platform/xatkit-runtime
|
https://api.github.com/repos/xatkit-bot-platform/xatkit-runtime
|
opened
|
Pre/Post processor to detect the input language
|
Enhancement Processors
|
In some use cases the user may ask a question in a language not supported by the bot. It would be nice to at least catch this language to return an appropriate reply.
|
1.0
|
Pre/Post processor to detect the input language - In some use cases the user may ask a question in a language not supported by the bot. It would be nice to at least catch this language to return an appropriate reply.
|
process
|
pre post processor to detect the input language in some use cases the user may ask a question in a language not supported by the bot it would be nice to at least catch this language to return an appropriate reply
| 1
|
274,265
| 20,829,849,941
|
IssuesEvent
|
2022-03-19 08:38:01
|
intel/dffml
|
https://api.github.com/repos/intel/dffml
|
opened
|
docs: missing screenshot of grading rubric
|
documentation
|
The screenshot of the grading rubric seems to be missing in https://intel.github.io/dffml/master/contributing/gsoc/rubric.html and is nowhere to be found in the repository.
|
1.0
|
docs: missing screenshot of grading rubric - The screenshot of the grading rubric seems to be missing in https://intel.github.io/dffml/master/contributing/gsoc/rubric.html and is nowhere to be found in the repository.
|
non_process
|
docs missing screenshot of grading rubric the screenshot of the grading rubric seems to be missing in and is nowhere to be found in the repository
| 0
|
723
| 3,211,165,664
|
IssuesEvent
|
2015-10-06 09:12:10
|
rofrischmann/react-look
|
https://api.github.com/repos/rofrischmann/react-look
|
opened
|
Mixins as plain functions
|
improvement processor
|
Instead of having MixinTypes and all that stuff we could just have a method that gets called on every key,value pair (This even improves performance).
e.g.
```
let mixin = (property, value, {newProps}) => {
if (property === 'css') {
newProps.className = newProps.className ? newProps.className : '' + value
}
}
|
1.0
|
Mixins as plain functions - Instead of having MixinTypes and all that stuff we could just have a method that gets called on every key,value pair (This even improves performance).
e.g.
```
let mixin = (property, value, {newProps}) => {
if (property === 'css') {
newProps.className = newProps.className ? newProps.className : '' + value
}
}
|
process
|
mixins as plain functions instead of having mixintypes and all that stuff we could just have a method that gets called on every key value pair this even improves performance e g let mixin property value newprops if property css newprops classname newprops classname newprops classname value
| 1
|
16,226
| 20,762,488,015
|
IssuesEvent
|
2022-03-15 17:23:01
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Layer could not be generated
|
Feedback Processing Bug
|
### What is the bug or the crash?
I am getting this error when using the Saga Thiessen polygon command. Output layer is not created.
I am getting this error on all SAGA commands. I saved output layer to different fields but it is not created. (save to file)
Qgis 3.16.0
QGIS code revision: 43b64b13f3
I use win10
I did following link but It is not created.
https://github.com/qgis/QGIS/commit/286fd207c42b11b4b8cc980f446088201ec1a5fa
This error;
The following layers were not correctly generated.
• C:/Users/özgür/AppData/Local/Temp/processing_VwvPoV/ad67039ce33b44e0970b4ade2dac6174/POLYGONS.shp
You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
### Steps to reproduce the issue

### Versions
QGIS version
3.16.0-Hannover
QGIS code revision
43b64b13f3
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
profiletool;
quick_map_services;
SRTM-Downloader;
db_manager;
MetaSearch;
processing
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Layer could not be generated - ### What is the bug or the crash?
I am getting this error when using the Saga Thiessen polygon command. Output layer is not created.
I am getting this error on all SAGA commands. I saved output layer to different fields but it is not created. (save to file)
Qgis 3.16.0
QGIS code revision: 43b64b13f3
I use win10
I did following link but It is not created.
https://github.com/qgis/QGIS/commit/286fd207c42b11b4b8cc980f446088201ec1a5fa
This error;
The following layers were not correctly generated.
• C:/Users/özgür/AppData/Local/Temp/processing_VwvPoV/ad67039ce33b44e0970b4ade2dac6174/POLYGONS.shp
You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
### Steps to reproduce the issue

### Versions
QGIS version
3.16.0-Hannover
QGIS code revision
43b64b13f3
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
profiletool;
quick_map_services;
SRTM-Downloader;
db_manager;
MetaSearch;
processing
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
layer could not be generated what is the bug or the crash i am getting this error when using the saga thiessen polygon command output layer is not created i am getting this error on all saga commands i saved output layer to different fields but it is not created save to file qgis qgis code revision i use i did following link but it is not created this error the following layers were not correctly generated • c users özgür appdata local temp processing vwvpov polygons shp you can check the log messages panel in qgis main window to find more information about the execution of the algorithm steps to reproduce the issue versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins profiletool quick map services srtm downloader db manager metasearch processing supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
11,026
| 13,822,490,026
|
IssuesEvent
|
2020-10-13 05:12:40
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
ntr: CMG & MCM complex processes
|
New term request PomBase cell cycle and DNA processes
|
id: GO:new1
name: CMG complex assembly
def: The aggregation, arrangement and bonding together of a set of components to form the CMG complex, a protein complex that contains the GINS complex, Cdc45p, and the heterohexameric MCM complex, and that is involved in unwinding DNA during replication. The process begins when additional proteins (e.g. Cdc45 and Sld3) join the loaded, inactive double MCM hexamer at replication origins, and ends when Mcm10 triggers the separation of the Mcm2-7 double hexamers, forming two active CMG complexes. [PMID:28501329, PMID:22718908]
is_a: GO:0022402 ! cell cycle process
is_a: GO:0065004 ! protein-DNA complex assembly
GO:1902315 ! nuclear cell cycle DNA replication initiation
id: GO:new2
name: MCM complex loading
def: The protein localization process in which two MCM complexes become associated with chromatin at replication origins. MCM loading begins when origin-bound ORC and Cdc6 (Cdc18 in fission yeast) recruit one MCM2-7/Cdt1 complex to the origin, includes formation of a succession of intermediate complexes and ATP hydrolysis-dependent Mcm2-7 ring closure, and ends when two MCM hexamers fully encircle DNA, and are oriented head-to-head. The double hexamer is inactive for DNA unwinding. MCM loading takes place during G1 phase, and precedes CMG complex assembly."
[MPDI:23603117, PMID:28191893, PMID:28191894, PMID:28501329]
synonym: "MCM complex loading at replication origin" EXACT [GOC:mah]
synonym: "MCM double hexamer formation at replication origin" EXACT [GOC:mah]
is_a: GO:0022402 ! cell cycle process
is_a: GO:0065004 ! protein-DNA complex assembly
GO:1902315 ! nuclear cell cycle DNA replication initiation
I don't know whether it's worth also adding terms for the various complexes formed as precursors or intermediates in either of these processes (e.g. OCCM, pre-LC; see figs 1 & 2 of PMID:28191893). It might spawn a mini-project that I don't have time to tackle!
|
1.0
|
ntr: CMG & MCM complex processes - id: GO:new1
name: CMG complex assembly
def: The aggregation, arrangement and bonding together of a set of components to form the CMG complex, a protein complex that contains the GINS complex, Cdc45p, and the heterohexameric MCM complex, and that is involved in unwinding DNA during replication. The process begins when additional proteins (e.g. Cdc45 and Sld3) join the loaded, inactive double MCM hexamer at replication origins, and ends when Mcm10 triggers the separation of the Mcm2-7 double hexamers, forming two active CMG complexes. [PMID:28501329, PMID:22718908]
is_a: GO:0022402 ! cell cycle process
is_a: GO:0065004 ! protein-DNA complex assembly
GO:1902315 ! nuclear cell cycle DNA replication initiation
id: GO:new2
name: MCM complex loading
def: The protein localization process in which two MCM complexes become associated with chromatin at replication origins. MCM loading begins when origin-bound ORC and Cdc6 (Cdc18 in fission yeast) recruit one MCM2-7/Cdt1 complex to the origin, includes formation of a succession of intermediate complexes and ATP hydrolysis-dependent Mcm2-7 ring closure, and ends when two MCM hexamers fully encircle DNA, and are oriented head-to-head. The double hexamer is inactive for DNA unwinding. MCM loading takes place during G1 phase, and precedes CMG complex assembly."
[MPDI:23603117, PMID:28191893, PMID:28191894, PMID:28501329]
synonym: "MCM complex loading at replication origin" EXACT [GOC:mah]
synonym: "MCM double hexamer formation at replication origin" EXACT [GOC:mah]
is_a: GO:0022402 ! cell cycle process
is_a: GO:0065004 ! protein-DNA complex assembly
GO:1902315 ! nuclear cell cycle DNA replication initiation
I don't know whether it's worth also adding terms for the various complexes formed as precursors or intermediates in either of these processes (e.g. OCCM, pre-LC; see figs 1 & 2 of PMID:28191893). It might spawn a mini-project that I don't have time to tackle!
|
process
|
ntr cmg mcm complex processes id go name cmg complex assembly def the aggregation arrangement and bonding together of a set of components to form the cmg complex a protein complex that contains the gins complex and the heterohexameric mcm complex and that is involved in unwinding dna during replication the process begins when additional proteins e g and join the loaded inactive double mcm hexamer at replication origins and ends when triggers the separation of the double hexamers forming two active cmg complexes is a go cell cycle process is a go protein dna complex assembly go nuclear cell cycle dna replication initiation id go name mcm complex loading def the protein localization process in which two mcm complexes become associated with chromatin at replication origins mcm loading begins when origin bound orc and in fission yeast recruit one complex to the origin includes formation of a succession of intermediate complexes and atp hydrolysis dependent ring closure and ends when two mcm hexamers fully encircle dna and are oriented head to head the double hexamer is inactive for dna unwinding mcm loading takes place during phase and precedes cmg complex assembly synonym mcm complex loading at replication origin exact synonym mcm double hexamer formation at replication origin exact is a go cell cycle process is a go protein dna complex assembly go nuclear cell cycle dna replication initiation i don t know whether it s worth also adding terms for the various complexes formed as precursors or intermediates in either of these processes e g occm pre lc see figs of pmid it might spawn a mini project that i don t have time to tackle
| 1
|
408,053
| 11,941,048,920
|
IssuesEvent
|
2020-04-02 17:45:09
|
DevotedMC/JukeAlert
|
https://api.github.com/repos/DevotedMC/JukeAlert
|
closed
|
From Gjum: Some snitch entries do not have item representation in /ja GUI
|
Priority: Low Type: Bug
|
I'll direct Gjum here to add more details
|
1.0
|
From Gjum: Some snitch entries do not have item representation in /ja GUI - I'll direct Gjum here to add more details
|
non_process
|
from gjum some snitch entries do not have item representation in ja gui i ll direct gjum here to add more details
| 0
|
22,344
| 31,020,764,387
|
IssuesEvent
|
2023-08-10 05:03:32
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release Checklist 0.85
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.85.0)
- [x] GitHub checks for branch are passing
- [x] No pre-release or snapshot dependencies present in build files
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Manual Submission for GCP Marketplace verification by google
- [x] Publish marketplace release
- [x] Publish release
## Performance
- [x] Deployed
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
## Previewnet
- [x] Deployed
## Staging
- [x] Deployed
## Testnet
- [x] Deployed
## Mainnet
- [x] Deployed to public
- [x] Deployed to private
### Alternatives
_No response_
|
1.0
|
Release Checklist 0.85 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.85.0)
- [x] GitHub checks for branch are passing
- [x] No pre-release or snapshot dependencies present in build files
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Manual Submission for GCP Marketplace verification by google
- [x] Publish marketplace release
- [x] Publish release
## Performance
- [x] Deployed
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
## Previewnet
- [x] Deployed
## Staging
- [x] Deployed
## Testnet
- [x] Deployed
## Mainnet
- [x] Deployed to public
- [x] Deployed to private
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing no pre release or snapshot dependencies present in build files automated kubernetes deployment successful tag release upload release artifacts manual submission for gcp marketplace verification by google publish marketplace release publish release performance deployed grpc api performance tests importer performance tests rest api performance tests previewnet deployed staging deployed testnet deployed mainnet deployed to public deployed to private alternatives no response
| 1
|
11,687
| 14,542,868,831
|
IssuesEvent
|
2020-12-15 16:12:30
|
Blazebit/blaze-persistence
|
https://api.github.com/repos/Blazebit/blaze-persistence
|
closed
|
Optional parameter detection in annotation processor doesn't handle constant concatenation
|
component: entity-view component: entity-view-annotation-processor kind: bug worth: high
|
When using something like `":" + CONSTANT` in a view filter provider we determine an empty alias in the annotation processor. This should be improved. We should either detect the name or simply don't generate the apply method as it results in a compiler error anyway.
|
1.0
|
Optional parameter detection in annotation processor doesn't handle constant concatenation - When using something like `":" + CONSTANT` in a view filter provider we determine an empty alias in the annotation processor. This should be improved. We should either detect the name or simply don't generate the apply method as it results in a compiler error anyway.
|
process
|
optional parameter detection in annotation processor doesn t handle constant concatenation when using something like constant in a view filter provider we determine an empty alias in the annotation processor this should be improved we should either detect the name or simply don t generate the apply method as it results in a compiler error anyway
| 1
|
7,081
| 10,229,683,903
|
IssuesEvent
|
2019-08-17 14:49:58
|
ION28/BLUESPAWN
|
https://api.github.com/repos/ION28/BLUESPAWN
|
opened
|
Code Execution and Lateral Movement Detection Opportunities
|
basic enhancement epic logs processes services
|
“Offensive Lateral Movement” by Ryan Hausknecht https://link.medium.com/XjbyLXzfeZ
This covers a number of techniques in attack and contains some basic iocs we can implement in the first implementation of our detection functions.
|
1.0
|
Code Execution and Lateral Movement Detection Opportunities - “Offensive Lateral Movement” by Ryan Hausknecht https://link.medium.com/XjbyLXzfeZ
This covers a number of techniques in attack and contains some basic iocs we can implement in the first implementation of our detection functions.
|
process
|
code execution and lateral movement detection opportunities “offensive lateral movement” by ryan hausknecht this covers a number of techniques in attack and contains some basic iocs we can implement in the first implementation of our detection functions
| 1
|
263,446
| 23,058,806,475
|
IssuesEvent
|
2022-07-25 08:03:24
|
elastic/e2e-testing
|
https://api.github.com/repos/elastic/e2e-testing
|
closed
|
Support reading environment variables from ".env"
|
Team:Automation area:test priority:low size:S triaged
|
There are plenty of libraries reading configs from env vars from a well-known location (i.e. the `.env` dir), adding a chain of priority for multiple files (production, testing, local, etc)
We could leverage this kind of tools to simplify the execution of the tests.
|
1.0
|
Support reading environment variables from ".env" - There are plenty of libraries reading configs from env vars from a well-known location (i.e. the `.env` dir), adding a chain of priority for multiple files (production, testing, local, etc)
We could leverage this kind of tools to simplify the execution of the tests.
|
non_process
|
support reading environment variables from env there are plenty of libraries reading configs from env vars from a well known location i e the env dir adding a chain of priority for multiple files production testing local etc we could leverage this kind of tools to simplify the execution of the tests
| 0
|
8,729
| 11,863,293,689
|
IssuesEvent
|
2020-03-25 19:27:07
|
prisma/vscode
|
https://api.github.com/repos/prisma/vscode
|
opened
|
Add tag/release on publishing
|
kind/feature process/candidate
|
When a new version is released, the commit should be tagged so we can understand what exactly got released and the releases are listed in the "Releases" tab.
|
1.0
|
Add tag/release on publishing - When a new version is released, the commit should be tagged so we can understand what exactly got released and the releases are listed in the "Releases" tab.
|
process
|
add tag release on publishing when a new version is released the commit should be tagged so we can understand what exactly got released and the releases are listed in the releases tab
| 1
|
417,329
| 12,158,436,169
|
IssuesEvent
|
2020-04-26 03:53:04
|
radareorg/radare2
|
https://api.github.com/repos/radareorg/radare2
|
opened
|
[XX] db/cmd/write wen 3 writable bin - happens from time to time on some PRs
|
high-priority r2r
|
```
[XX] db/cmd/write wen 3 writable bin
R2_NOPLUGINS=1 radare2 -escr.utf8=0 -escr.color=0 -escr.interactive=0 -N -Qc 'mkdir .tmp
cp bins/mach0/mac-ls2 .tmp/ls-wen
o .tmp/ls-wen
r
wen 3
r
oo+
r
wen 3
r
rm .tmp/ls-wen
' --
-- stdout
--- .a 2020-04-25 22:11:09.364560136 +0000
+++ .b 2020-04-25 22:11:09.364560136 +0000
@@ -1,4 +1,4 @@
38704
38704
38704
-38707
+38704
-- stderr
r_io_extend failed
r_io_extend failed
```
|
1.0
|
[XX] db/cmd/write wen 3 writable bin - happens from time to time on some PRs - ```
[XX] db/cmd/write wen 3 writable bin
R2_NOPLUGINS=1 radare2 -escr.utf8=0 -escr.color=0 -escr.interactive=0 -N -Qc 'mkdir .tmp
cp bins/mach0/mac-ls2 .tmp/ls-wen
o .tmp/ls-wen
r
wen 3
r
oo+
r
wen 3
r
rm .tmp/ls-wen
' --
-- stdout
--- .a 2020-04-25 22:11:09.364560136 +0000
+++ .b 2020-04-25 22:11:09.364560136 +0000
@@ -1,4 +1,4 @@
38704
38704
38704
-38707
+38704
-- stderr
r_io_extend failed
r_io_extend failed
```
|
non_process
|
db cmd write wen writable bin happens from time to time on some prs db cmd write wen writable bin noplugins escr escr color escr interactive n qc mkdir tmp cp bins mac tmp ls wen o tmp ls wen r wen r oo r wen r rm tmp ls wen stdout a b stderr r io extend failed r io extend failed
| 0
|
57,805
| 14,219,792,008
|
IssuesEvent
|
2020-11-17 13:45:51
|
LalithK90/nandanaMotors
|
https://api.github.com/repos/LalithK90/nandanaMotors
|
opened
|
CVE-2020-10693 (Medium) detected in hibernate-validator-6.0.18.Final.jar
|
security vulnerability
|
## CVE-2020-10693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-validator-6.0.18.Final.jar</b></p></summary>
<p>Hibernate's Bean Validation (JSR-380) reference implementation.</p>
<p>Library home page: <a href="http://hibernate.org/validator">http://hibernate.org/validator</a></p>
<p>Path to dependency file: nandanaMotors/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.hibernate.validator/hibernate-validator/6.0.18.Final/7fd00bcd87e14b6ba66279282ef15efa30dd2492/hibernate-validator-6.0.18.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-validation-2.2.4.RELEASE.jar
- :x: **hibernate-validator-6.0.18.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/nandanaMotors/commit/9ebf8ba435ba9756d5a6f1ff78590adcf1ee8487">9ebf8ba435ba9756d5a6f1ff78590adcf1ee8487</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.
<p>Publish Date: 2020-05-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10693>CVE-2020-10693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://hibernate.atlassian.net/projects/HV/issues/HV-1774">https://hibernate.atlassian.net/projects/HV/issues/HV-1774</a></p>
<p>Release Date: 2020-05-06</p>
<p>Fix Resolution: org.hibernate.validator:hibernate-validator:6.0.20.Final,org.hibernate.validator:hibernate-validator:6.1.5.Final,org.hibernate.validator:hibernate-validator:7.0.0.Alpha2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10693 (Medium) detected in hibernate-validator-6.0.18.Final.jar - ## CVE-2020-10693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-validator-6.0.18.Final.jar</b></p></summary>
<p>Hibernate's Bean Validation (JSR-380) reference implementation.</p>
<p>Library home page: <a href="http://hibernate.org/validator">http://hibernate.org/validator</a></p>
<p>Path to dependency file: nandanaMotors/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.hibernate.validator/hibernate-validator/6.0.18.Final/7fd00bcd87e14b6ba66279282ef15efa30dd2492/hibernate-validator-6.0.18.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-validation-2.2.4.RELEASE.jar
- :x: **hibernate-validator-6.0.18.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/nandanaMotors/commit/9ebf8ba435ba9756d5a6f1ff78590adcf1ee8487">9ebf8ba435ba9756d5a6f1ff78590adcf1ee8487</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.
<p>Publish Date: 2020-05-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10693>CVE-2020-10693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://hibernate.atlassian.net/projects/HV/issues/HV-1774">https://hibernate.atlassian.net/projects/HV/issues/HV-1774</a></p>
<p>Release Date: 2020-05-06</p>
<p>Fix Resolution: org.hibernate.validator:hibernate-validator:6.0.20.Final,org.hibernate.validator:hibernate-validator:6.1.5.Final,org.hibernate.validator:hibernate-validator:7.0.0.Alpha2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in hibernate validator final jar cve medium severity vulnerability vulnerable library hibernate validator final jar hibernate s bean validation jsr reference implementation library home page a href path to dependency file nandanamotors build gradle path to vulnerable library home wss scanner gradle caches modules files org hibernate validator hibernate validator final hibernate validator final jar dependency hierarchy spring boot starter web release jar root library spring boot starter validation release jar x hibernate validator final jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in hibernate validator version final a bug in the message interpolation processor enables invalid el expressions to be evaluated as if they were valid this flaw allows attackers to bypass input sanitation escaping stripping controls that developers may have put in place when handling user controlled data in error messages publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org hibernate validator hibernate validator final org hibernate validator hibernate validator final org hibernate validator hibernate validator step up your open source security game with whitesource
| 0
|
243,190
| 20,369,039,617
|
IssuesEvent
|
2022-02-21 09:25:53
|
hydrocode-de/RUINSapp
|
https://api.github.com/repos/hydrocode-de/RUINSapp
|
closed
|
build e2e test, if somehow possible
|
tests
|
This is a bit of a challenge. There seems to be no native way, how a streamlit app can be unit-tested. This makes maintenance way more complicated.
However, few resources I found:
- https://discuss.streamlit.io/t/access-app-errors-in-pytest/8015/3
Idea is to at least start streamlit apps and check that no `streamlit.exception` output is visible, using selenium.
Other pathway would be to build a thin wrapper around each streamlit app that catches any output on stderr and catches unhandled exceptions. Additionally, any exception handling within the apps needs to be configuable to pass exceptions in *test mode*. Silimar to:
```python
def main_app(debug=False):
try:
st.function_could_error()
except Exception as e:
if debug:
raise e
else:
st.exception(e)
```
|
1.0
|
build e2e test, if somehow possible - This is a bit of a challenge. There seems to be no native way, how a streamlit app can be unit-tested. This makes maintenance way more complicated.
However, few resources I found:
- https://discuss.streamlit.io/t/access-app-errors-in-pytest/8015/3
Idea is to at least start streamlit apps and check that no `streamlit.exception` output is visible, using selenium.
Other pathway would be to build a thin wrapper around each streamlit app that catches any output on stderr and catches unhandled exceptions. Additionally, any exception handling within the apps needs to be configuable to pass exceptions in *test mode*. Silimar to:
```python
def main_app(debug=False):
try:
st.function_could_error()
except Exception as e:
if debug:
raise e
else:
st.exception(e)
```
|
non_process
|
build test if somehow possible this is a bit of a challenge there seems to be no native way how a streamlit app can be unit tested this makes maintenance way more complicated however few resources i found idea is to at least start streamlit apps and check that no streamlit exception output is visible using selenium other pathway would be to build a thin wrapper around each streamlit app that catches any output on stderr and catches unhandled exceptions additionally any exception handling within the apps needs to be configuable to pass exceptions in test mode silimar to python def main app debug false try st function could error except exception as e if debug raise e else st exception e
| 0
|
43,200
| 5,583,656,219
|
IssuesEvent
|
2017-03-29 01:18:37
|
SEED-platform/seed
|
https://api.github.com/repos/SEED-platform/seed
|
closed
|
Move Filtering and Sorting to Front End
|
Filtering Needs Design
|
Any time you sort or filter a column it calls a functioncalled search_buildings within the search service. This should be handled on the front end using Angular.
The program should be sorting and filtering on the full data set.
Look at NYC data that has 12,000 records -- will there be a performance hit by putting the filtering and sorting in the front end?
|
1.0
|
Move Filtering and Sorting to Front End - Any time you sort or filter a column it calls a functioncalled search_buildings within the search service. This should be handled on the front end using Angular.
The program should be sorting and filtering on the full data set.
Look at NYC data that has 12,000 records -- will there be a performance hit by putting the filtering and sorting in the front end?
|
non_process
|
move filtering and sorting to front end any time you sort or filter a column it calls a functioncalled search buildings within the search service this should be handled on the front end using angular the program should be sorting and filtering on the full data set look at nyc data that has records will there be a performance hit by putting the filtering and sorting in the front end
| 0
|
25,060
| 12,216,960,402
|
IssuesEvent
|
2020-05-01 16:12:13
|
microsoft/BotFramework-WebChat
|
https://api.github.com/repos/microsoft/BotFramework-WebChat
|
opened
|
WebChat OAuth SSO doesn't continue after Login
|
Bot Services Bug customer-reported
|
Hey,
The issue we're having right now is that once we click on the `Login` button in the OAuthPrompt from the webchat, the SSO takes over and the Sign in Happens but when the flow returns to the webchat, nothing happens and the bot just hangs.
In the Microsoft Bot Framework (v4) bot that we're building, we have implemented the new SSO OAuth features that were recommended in this blog [here](https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-authentication?view=azure-bot-service-3.0&tabs=aadv2&viewFallbackFrom=azure-bot-service-4.0) and [here](https://blog.botframework.com/2018/09/01/using-webchat-with-azure-bot-services-authentication/).
1) We initially had an `<iframe>` setup which prompted for the Magic code.
2) We then changed the `<iframe>` setup and migrated to a DirectLine channel by changing the webchat's source code to exchange the bot secret for a token (we also pass a unique userId in the format - `dl_guid()`
3) We pass that token down to `window.WebChat.createDirectLine` method sourced from the CDN - `https://cdn.botframework.com/botframework-webchat/latest/webchat.js`
4) We have AADV2 Setup with the right scopes and we also have the bot configured for this AAD.
5) We also have `Enhanced Authentication options` enabled for the DirectLine channel and have the `localhost` dev environment & the hosted server environment added to the **Trusted Origin** list
6) We've also enabled 3rd party cookies in the browser
## Screenshots

## Bot Source Code
Here's a snippet from the AuthDialog that we are using (TypeScript)
```
export class AuthDialog extends BaseDialog {
constructor(
private dialogContextUtils: DialogContextUtils,
private userManager: UserManager,
appConfig: AppConfig
) {
super(AUTH_DIALOG_ID, AUTH_WATERFALL_DIALOG, [
step => this.promptStep(step),
step => this.loginStep(step)
]);
this.addDialog(
new OAuthPrompt(OAUTH_PROMPT, {
connectionName: appConfig.connectionName,
text: 'Please login',
title: 'Login',
timeout: 300000
})
);
}
private async promptStep(step: WaterfallStepContext) {
return await step.beginDialog(OAUTH_PROMPT);
}
private async loginStep(step: WaterfallStepContext) {
const tokenResponse = step.result;
if (tokenResponse) {
await step.context.sendActivity(`Hi`);
}
return await step.endDialog(tokenResponse);
}
```
If we take a look at the code, the bot should essentially enter into `loginStep` but it doesn't (Tried it by setting breakpoints)
Would really appreciate some help with this issue.
|
1.0
|
WebChat OAuth SSO doesn't continue after Login - Hey,
The issue we're having right now is that once we click on the `Login` button in the OAuthPrompt from the webchat, the SSO takes over and the Sign in Happens but when the flow returns to the webchat, nothing happens and the bot just hangs.
In the Microsoft Bot Framework (v4) bot that we're building, we have implemented the new SSO OAuth features that were recommended in this blog [here](https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-authentication?view=azure-bot-service-3.0&tabs=aadv2&viewFallbackFrom=azure-bot-service-4.0) and [here](https://blog.botframework.com/2018/09/01/using-webchat-with-azure-bot-services-authentication/).
1) We initially had an `<iframe>` setup which prompted for the Magic code.
2) We then changed the `<iframe>` setup and migrated to a DirectLine channel by changing the webchat's source code to exchange the bot secret for a token (we also pass a unique userId in the format - `dl_guid()`
3) We pass that token down to `window.WebChat.createDirectLine` method sourced from the CDN - `https://cdn.botframework.com/botframework-webchat/latest/webchat.js`
4) We have AADV2 Setup with the right scopes and we also have the bot configured for this AAD.
5) We also have `Enhanced Authentication options` enabled for the DirectLine channel and have the `localhost` dev environment & the hosted server environment added to the **Trusted Origin** list
6) We've also enabled 3rd party cookies in the browser
## Screenshots

## Bot Source Code
Here's a snippet from the AuthDialog that we are using (TypeScript)
```
export class AuthDialog extends BaseDialog {
constructor(
private dialogContextUtils: DialogContextUtils,
private userManager: UserManager,
appConfig: AppConfig
) {
super(AUTH_DIALOG_ID, AUTH_WATERFALL_DIALOG, [
step => this.promptStep(step),
step => this.loginStep(step)
]);
this.addDialog(
new OAuthPrompt(OAUTH_PROMPT, {
connectionName: appConfig.connectionName,
text: 'Please login',
title: 'Login',
timeout: 300000
})
);
}
private async promptStep(step: WaterfallStepContext) {
return await step.beginDialog(OAUTH_PROMPT);
}
private async loginStep(step: WaterfallStepContext) {
const tokenResponse = step.result;
if (tokenResponse) {
await step.context.sendActivity(`Hi`);
}
return await step.endDialog(tokenResponse);
}
```
If we take a look at the code, the bot should essentially enter into `loginStep` but it doesn't (Tried it by setting breakpoints)
Would really appreciate some help with this issue.
|
non_process
|
webchat oauth sso doesn t continue after login hey the issue we re having right now is that once we click on the login button in the oauthprompt from the webchat the sso takes over and the sign in happens but when the flow returns to the webchat nothing happens and the bot just hangs in the microsoft bot framework bot that we re building we have implemented the new sso oauth features that were recommended in this blog and we initially had an setup which prompted for the magic code we then changed the setup and migrated to a directline channel by changing the webchat s source code to exchange the bot secret for a token we also pass a unique userid in the format dl guid we pass that token down to window webchat createdirectline method sourced from the cdn we have setup with the right scopes and we also have the bot configured for this aad we also have enhanced authentication options enabled for the directline channel and have the localhost dev environment the hosted server environment added to the trusted origin list we ve also enabled party cookies in the browser screenshots bot source code here s a snippet from the authdialog that we are using typescript export class authdialog extends basedialog constructor private dialogcontextutils dialogcontextutils private usermanager usermanager appconfig appconfig super auth dialog id auth waterfall dialog step this promptstep step step this loginstep step this adddialog new oauthprompt oauth prompt connectionname appconfig connectionname text please login title login timeout private async promptstep step waterfallstepcontext return await step begindialog oauth prompt private async loginstep step waterfallstepcontext const tokenresponse step result if tokenresponse await step context sendactivity hi return await step enddialog tokenresponse if we take a look at the code the bot should essentially enter into loginstep but it doesn t tried it by setting breakpoints would really appreciate some help with this issue
| 0
|
33,984
| 28,062,634,507
|
IssuesEvent
|
2023-03-29 13:33:37
|
nf-core/tools
|
https://api.github.com/repos/nf-core/tools
|
closed
|
Implement tests for 'nf-core subworkflows create-test-yml' command
|
enhancement infrastructure
|
### Description of feature
Linked to #1883
|
1.0
|
Implement tests for 'nf-core subworkflows create-test-yml' command - ### Description of feature
Linked to #1883
|
non_process
|
implement tests for nf core subworkflows create test yml command description of feature linked to
| 0
|
2,108
| 4,940,433,163
|
IssuesEvent
|
2016-11-29 16:48:16
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
Street fallback
|
epic glorious future processed
|
In cases where no street numbers exist for a certain street, or there are few numbers on that street it would be ideal to simply return the name of the street with a centroid of the polyline.
An example would be if for 'Main Street' we only had numbers 1,2 and 42. A user should still be able to type 'Main Street' and get the central point for that street, while also being able to search for "1 Main Street" etc.
Street segments may need to be re-assembled to accurately compute the centroid, or alternatively we could try to import the roads as a geoJSON polyline type.
reported by: @randymeech
|
1.0
|
Street fallback - In cases where no street numbers exist for a certain street, or there are few numbers on that street it would be ideal to simply return the name of the street with a centroid of the polyline.
An example would be if for 'Main Street' we only had numbers 1,2 and 42. A user should still be able to type 'Main Street' and get the central point for that street, while also being able to search for "1 Main Street" etc.
Street segments may need to be re-assembled to accurately compute the centroid, or alternatively we could try to import the roads as a geoJSON polyline type.
reported by: @randymeech
|
process
|
street fallback in cases where no street numbers exist for a certain street or there are few numbers on that street it would be ideal to simply return the name of the street with a centroid of the polyline an example would be if for main street we only had numbers and a user should still be able to type main street and get the central point for that street while also being able to search for main street etc street segments may need to be re assembled to accurately compute the centroid or alternatively we could try to import the roads as a geojson polyline type reported by randymeech
| 1
|
379,143
| 11,216,378,736
|
IssuesEvent
|
2020-01-07 06:07:54
|
apache/incubator-echarts
|
https://api.github.com/repos/apache/incubator-echarts
|
closed
|
group support dragging in graphic component
|
difficulty: normal enhancement priority: high
|
<!--
为了方便我们能够复现和修复 bug,请遵从下面的规范描述您的问题。
-->
draggable: 'true',包含几个children,似乎没办法整体拖拽?如何解决呢?
### One-line summary [问题简述]
### Version & Environment [版本及环境]
+ ECharts version [ECharts 版本]: 最新版本
+ Browser version [浏览器类型和版本]:
+ OS Version [操作系统类型和版本]:
### Expected behaviour [期望结果]
### ECharts option [ECharts配置项]
<!-- Copy and paste your 'echarts option' here. -->
<!-- [下方贴你的option,注意不要删掉下方 ```javascript 和 尾部的 ``` 字样。最好是我们能够直接运行的 option。如何得到能运行的 option 参见上方的 guidelines for contributing] -->
```javascript
option = {
graphic: {
type: 'group',
draggable: 'true',
children: [...]
}
}
```
### Other comments [其他信息]
<!-- For example: Screenshot or Online demo -->
<!-- [例如,截图或线上实例 (JSFiddle/JSBin/Codepen)] -->
|
1.0
|
group support dragging in graphic component - <!--
为了方便我们能够复现和修复 bug,请遵从下面的规范描述您的问题。
-->
draggable: 'true',包含几个children,似乎没办法整体拖拽?如何解决呢?
### One-line summary [问题简述]
### Version & Environment [版本及环境]
+ ECharts version [ECharts 版本]: 最新版本
+ Browser version [浏览器类型和版本]:
+ OS Version [操作系统类型和版本]:
### Expected behaviour [期望结果]
### ECharts option [ECharts配置项]
<!-- Copy and paste your 'echarts option' here. -->
<!-- [下方贴你的option,注意不要删掉下方 ```javascript 和 尾部的 ``` 字样。最好是我们能够直接运行的 option。如何得到能运行的 option 参见上方的 guidelines for contributing] -->
```javascript
option = {
graphic: {
type: 'group',
draggable: 'true',
children: [...]
}
}
```
### Other comments [其他信息]
<!-- For example: Screenshot or Online demo -->
<!-- [例如,截图或线上实例 (JSFiddle/JSBin/Codepen)] -->
|
non_process
|
group support dragging in graphic component 为了方便我们能够复现和修复 bug,请遵从下面的规范描述您的问题。 draggable true ,包含几个children,似乎没办法整体拖拽?如何解决呢? one line summary version environment echarts version 最新版本 browser version os version expected behaviour echarts option javascript option graphic type group draggable true children other comments
| 0
|
21,990
| 30,485,525,583
|
IssuesEvent
|
2023-07-18 01:38:19
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
Coprocessor cache evict monitor always shows 0 in TiDB
|
type/bug component/coprocessor severity/moderate
|
## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
<!-- a step by step guide for reproducing the bug. -->
### 2. What did you expect to see? (Required)
The coprocessor cache should be properly evicted, and the monitor should show non-zero values when cache evict happens.
### 3. What did you see instead (Required)
The monitor always shows 0, indicating that the cache is not being properly evicted.
### 4. What is your TiDB version? (Required)
<!-- Paste the output of SELECT tidb_version() -->
|
1.0
|
Coprocessor cache evict monitor always shows 0 in TiDB - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
<!-- a step by step guide for reproducing the bug. -->
### 2. What did you expect to see? (Required)
The coprocessor cache should be properly evicted, and the monitor should show non-zero values when cache evict happens.
### 3. What did you see instead (Required)
The monitor always shows 0, indicating that the cache is not being properly evicted.
### 4. What is your TiDB version? (Required)
<!-- Paste the output of SELECT tidb_version() -->
|
process
|
coprocessor cache evict monitor always shows in tidb bug report please answer these questions before submitting your issue thanks minimal reproduce step required what did you expect to see required the coprocessor cache should be properly evicted and the monitor should show non zero values when cache evict happens what did you see instead required the monitor always shows indicating that the cache is not being properly evicted what is your tidb version required
| 1
|
341,004
| 10,281,411,839
|
IssuesEvent
|
2019-08-26 08:26:57
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
verified.capitalone.com - see bug description
|
browser-fenix engine-gecko priority-important
|
<!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://verified.capitalone.com/auth/signin
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Can't paste into empty field
**Steps to Reproduce**:
On the login form for capitalone.com I am unable to paste into blank fields. If I first type a few letters, then I can highlight those letters and then paste. I've noticed this behavior on many but not all websites when using Firefox Preview. The form I am filling out right now does allow me to past into empty fields
The non-preview version of Firefox mobile and Firefox Focus also misbehave. The mobile version of Chrome behaves as I expect.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
verified.capitalone.com - see bug description - <!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://verified.capitalone.com/auth/signin
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Can't paste into empty field
**Steps to Reproduce**:
On the login form for capitalone.com I am unable to paste into blank fields. If I first type a few letters, then I can highlight those letters and then paste. I've noticed this behavior on many but not all websites when using Firefox Preview. The form I am filling out right now does allow me to past into empty fields
The non-preview version of Firefox mobile and Firefox Focus also misbehave. The mobile version of Chrome behaves as I expect.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
verified capitalone com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description can t paste into empty field steps to reproduce on the login form for capitalone com i am unable to paste into blank fields if i first type a few letters then i can highlight those letters and then paste i ve noticed this behavior on many but not all websites when using firefox preview the form i am filling out right now does allow me to past into empty fields the non preview version of firefox mobile and firefox focus also misbehave the mobile version of chrome behaves as i expect browser configuration none from with ❤️
| 0
|
3,362
| 13,033,007,647
|
IssuesEvent
|
2020-07-28 05:53:15
|
arcticicestudio/nord
|
https://api.github.com/repos/arcticicestudio/nord
|
closed
|
Repo label system
|
context-workflow resolution-answered scope-maintainability type-question
|
This has nothing to do with the wonderful Nord theme (that I've been using everywhere I can), but more about the label system you have in place for the repo.
I find it more useful than the usual (default) label system that has Github and I want to implement this system (and colors) for my own repos. Is there any kind of documentation about your workflow that I can read?
Also, is there any way to easily configure the labels in new repos? Doing this by hand is long and tedious!
|
True
|
Repo label system - This has nothing to do with the wonderful Nord theme (that I've been using everywhere I can), but more about the label system you have in place for the repo.
I find it more useful than the usual (default) label system that has Github and I want to implement this system (and colors) for my own repos. Is there any kind of documentation about your workflow that I can read?
Also, is there any way to easily configure the labels in new repos? Doing this by hand is long and tedious!
|
non_process
|
repo label system this has nothing to do with the wonderful nord theme that i ve been using everywhere i can but more about the label system you have in place for the repo i find it more useful than the usual default label system that has github and i want to implement this system and colors for my own repos is there any kind of documentation about your workflow that i can read also is there any way to easily configure the labels in new repos doing this by hand is long and tedious
| 0
|
131,669
| 18,248,623,541
|
IssuesEvent
|
2021-10-01 22:42:50
|
ghc-dev/Natalie-Smith
|
https://api.github.com/repos/ghc-dev/Natalie-Smith
|
opened
|
CVE-2020-7656 (Medium) detected in jquery-1.8.1.min.js
|
security vulnerability
|
## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: Natalie-Smith/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Natalie-Smith/commit/a6baa6be0beed1a68ccb3d4022d00f20152a5f8b">a6baa6be0beed1a68ccb3d4022d00f20152a5f8b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-q4m3-2j7h-f7xw">https://github.com/advisories/GHSA-q4m3-2j7h-f7xw</a></p>
<p>Release Date: 2020-05-28</p>
<p>Fix Resolution: jquery - 1.9.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.8.1","packageFilePaths":["/node_modules/redeyed/examples/browser/index.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:1.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 1.9.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7656","vulnerabilityDetails":"jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove \"\u003cscript\u003e\" HTML tags that contain a whitespace character, i.e: \"\u003c/script \u003e\", which results in the enclosed script logic to be executed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-7656 (Medium) detected in jquery-1.8.1.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: Natalie-Smith/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Natalie-Smith/commit/a6baa6be0beed1a68ccb3d4022d00f20152a5f8b">a6baa6be0beed1a68ccb3d4022d00f20152a5f8b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-q4m3-2j7h-f7xw">https://github.com/advisories/GHSA-q4m3-2j7h-f7xw</a></p>
<p>Release Date: 2020-05-28</p>
<p>Fix Resolution: jquery - 1.9.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.8.1","packageFilePaths":["/node_modules/redeyed/examples/browser/index.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:1.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 1.9.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7656","vulnerabilityDetails":"jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove \"\u003cscript\u003e\" HTML tags that contain a whitespace character, i.e: \"\u003c/script \u003e\", which results in the enclosed script logic to be executed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file natalie smith node modules redeyed examples browser index html path to vulnerable library node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery basebranches vulnerabilityidentifier cve vulnerabilitydetails jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e script which results in the enclosed script logic to be executed vulnerabilityurl
| 0
|
5,208
| 7,979,112,861
|
IssuesEvent
|
2018-07-17 20:35:11
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
getTraceCount
|
libs-etherlib status-inprocess type-enhancement
|
Some code uses getTraceCount to decide whether or not to process traces, thus:
if (getTraceCount() < 250) {
CTraceArray traces;
getTraces(traces, trans->hash);
....
}
I.E. only get traces if there are not that many of them.
This is a bug since it silently ignores internal transactions that may have accounting effects.
This is related to the bug of `lightTracing` documented elsewhere.
|
1.0
|
getTraceCount - Some code uses getTraceCount to decide whether or not to process traces, thus:
if (getTraceCount() < 250) {
CTraceArray traces;
getTraces(traces, trans->hash);
....
}
I.E. only get traces if there are not that many of them.
This is a bug since it silently ignores internal transactions that may have accounting effects.
This is related to the bug of `lightTracing` documented elsewhere.
|
process
|
gettracecount some code uses gettracecount to decide whether or not to process traces thus if gettracecount ctracearray traces gettraces traces trans hash i e only get traces if there are not that many of them this is a bug since it silently ignores internal transactions that may have accounting effects this is related to the bug of lighttracing documented elsewhere
| 1
|
130,208
| 12,425,130,924
|
IssuesEvent
|
2020-05-24 14:58:48
|
weso/hercules-ontology
|
https://api.github.com/repos/weso/hercules-ontology
|
closed
|
[HOI-0190] Analysis: Use of evOWLuator
|
affects: documentation affects: ontology status: awaiting-triage
|
Version v0.1.1 of evOWLuator - a cross-platform, energy aware evaluation tool for OWL reasoners - was released a few days ago. We should try this tool and analyse whether it could be added to our generated documentation or not.
Github repository: https://github.com/sisinflab-swot/evowluator
Documentation: http://sisinflab.poliba.it/swottools/evowluator/
|
1.0
|
[HOI-0190] Analysis: Use of evOWLuator - Version v0.1.1 of evOWLuator - a cross-platform, energy aware evaluation tool for OWL reasoners - was released a few days ago. We should try this tool and analyse whether it could be added to our generated documentation or not.
Github repository: https://github.com/sisinflab-swot/evowluator
Documentation: http://sisinflab.poliba.it/swottools/evowluator/
|
non_process
|
analysis use of evowluator version of evowluator a cross platform energy aware evaluation tool for owl reasoners was released a few days ago we should try this tool and analyse whether it could be added to our generated documentation or not github repository documentation
| 0
|
292,054
| 25,196,089,066
|
IssuesEvent
|
2022-11-12 14:33:07
|
Test-Automation-Crash-Course-24-10-22/team_04
|
https://api.github.com/repos/Test-Automation-Crash-Course-24-10-22/team_04
|
opened
|
Check the function of comparison
|
Test Case
|
**Descriptions:**
Check out the multi-product comparison feature
**Preconditions:**
Open https://rozetka.com.ua/ua/
Log in to your account
In the wishlist must be saved several products of the same group (for example, mobile phones)
**Test steps**
| Step | Test Data | Expected result |
| ------------- | ------------- | ------------- |
|1. Click on the wishlist icon on the header | | Wishlist opened |
|2. In the upper right corner of all products in the wishlist, click on the scale icon | | A green checkmark appears on the scale icon. A message appears: "Товар додано до порівняння". The counter of products for comparison in the header increases by the quantity of products wich were added |
|3. Click on the comparison list icon on the header | |A window with a list of comparisons opened |
|4. Click on the desired list | | A page with products for comparison opened with all parameters by default |
|5. Click on the button "Тільки відмінності" | | Only different parameters remained on the page |
|6. Remove the first product by clicking on the three vertical buttons next to the product name | | The selected product has disappeared from the comparison list. The counter of products for comparison in the header decreases by 1 |
|7. Click on " + Додати ще модель" | | Page with all products of group opened |
|8. Click on the scale icon of any product | | A green checkmark appears on the scale icon. A message appears: "Товар додано до порівняння". The counter of products for comparison in the header increases by 1 |
|
1.0
|
Check the function of comparison - **Descriptions:**
Check out the multi-product comparison feature
**Preconditions:**
Open https://rozetka.com.ua/ua/
Log in to your account
In the wishlist must be saved several products of the same group (for example, mobile phones)
**Test steps**
| Step | Test Data | Expected result |
| ------------- | ------------- | ------------- |
|1. Click on the wishlist icon on the header | | Wishlist opened |
|2. In the upper right corner of all products in the wishlist, click on the scale icon | | A green checkmark appears on the scale icon. A message appears: "Товар додано до порівняння". The counter of products for comparison in the header increases by the quantity of products which were added |
|3. Click on the comparison list icon on the header | |A window with a list of comparisons opened |
|4. Click on the desired list | | A page with products for comparison opened with all parameters by default |
|5. Click on the button "Тільки відмінності" | | Only different parameters remained on the page |
|6. Remove the first product by clicking on the three vertical buttons next to the product name | | The selected product has disappeared from the comparison list. The counter of products for comparison in the header decreases by 1 |
|7. Click on " + Додати ще модель" | | Page with all products of group opened |
|8. Click on the scale icon of any product | | A green checkmark appears on the scale icon. A message appears: "Товар додано до порівняння". The counter of products for comparison in the header increases by 1 |
|
non_process
|
check the function of comparison descriptions check out the multi product comparison feature preconditions open log in to your account in the wishlist must be saved several products of the same group for example mobile phones test steps step test data expected result click on the wishlist icon on the header wishlist opened in the upper right corner of all products in the wishlist click on the scale icon a green checkmark appears on the scale icon a message appears товар додано до порівняння the counter of products for comparison in the header increases by the quantity of products wich were added click on the comparison list icon on the header a window with a list of comparisons opened click on the desired list a page with products for comparison opened with all parameters by default click on the button тільки відмінності only different parameters remained on the page remove the first product by clicking on the three vertical buttons next to the product name the selected product has disappeared from the comparison list the counter of products for comparison in the header decreases by click on додати ще модель page with all products of group opened click on the scale icon of any product a green checkmark appears on the scale icon a message appears товар додано до порівняння the counter of products for comparison in the header increases by
| 0
|
7,811
| 10,964,369,491
|
IssuesEvent
|
2019-11-27 22:22:04
|
codeuniversity/smag-mvp
|
https://api.github.com/repos/codeuniversity/smag-mvp
|
closed
|
Create face recognition endpoint and worker
|
Image Processing
|
- read internal_image_url, post_id and boundaries from kafka topic
- call face recognition GRPC endpoint with image url
- take internal_image_url and boundaries to crop image, creating image_crop_url for image proxy
- write array of faces with boundaries and encoding to kafka topic
|
1.0
|
Create face recognition endpoint and worker - - read internal_image_url, post_id and boundaries from kafka topic
- call face recognition GRPC endpoint with image url
- take internal_image_url and boundaries to crop image, creating image_crop_url for image proxy
- write array of faces with boundaries and encoding to kafka topic
|
process
|
create face recognition endpoint and worker read internal image url post id and boundaries from kafka topic call face recognition grpc endpoint with image url take internal image url and boundaries to crop image creating image crop url for image proxy write array of faces with boundaries and encoding to kafka topic
| 1
|
373,882
| 26,089,706,750
|
IssuesEvent
|
2022-12-26 09:30:14
|
mentors-service/mentors-service-FE
|
https://api.github.com/repos/mentors-service/mentors-service-FE
|
closed
|
[FE] 디자인 시스템 작업
|
documentation frontend feat
|
프로젝트에 사용할 디자인 시스템 작업
고려할 요소들: 폰트, 여백, 너비, 컬러, 기타 등등...
ex) 색상
```
export const colors = {
$primary: '#FF055C',
$secondary: '#0BBFAD',
$black: '#0D0D0D',
$white: '#F2F2F2',
};
```
ex) 폰트
```
export const fonts = {
$xs: '14px',
$sm: '16px',
$base: '18px',
$lg: '20px',
$xl: '22px',
};
```
구글 폰트 - 영어: Open Sans, 한국어: Noto Sans Korean
Icon은 SVG 컴포넌트로 처리
default - width: 24, height: 24
large - width: 34, height: 34
button 관련
- 아이콘을 사용해서 어쩔 수 없이 높이가 34px로 맞춰져서 참고
참고 문서
[styled-components 공통 스타일 설정](https://styled-components.com/docs/api#create-a-declarations-file)
|
1.0
|
[FE] 디자인 시스템 작업 - 프로젝트에 사용할 디자인 시스템 작업
고려할 요소들: 폰트, 여백, 너비, 컬러, 기타 등등...
ex) 색상
```
export const colors = {
$primary: '#FF055C',
$secondary: '#0BBFAD',
$black: '#0D0D0D',
$white: '#F2F2F2',
};
```
ex) 폰트
```
export const fonts = {
$xs: '14px',
$sm: '16px',
$base: '18px',
$lg: '20px',
$xl: '22px',
};
```
구글 폰트 - 영어: Open Sans, 한국어: Noto Sans Korean
Icon은 SVG 컴포넌트로 처리
default - width: 24, height: 24
large - width: 34, height: 34
button 관련
- 아이콘을 사용해서 어쩔 수 없이 높이가 34px로 맞춰져서 참고
참고 문서
[styled-components 공통 스타일 설정](https://styled-components.com/docs/api#create-a-declarations-file)
|
non_process
|
디자인 시스템 작업 프로젝트에 사용할 디자인 시스템 작업 고려할 요소들 폰트 여백 너비 컬러 기타 등등 ex 색상 export const colors primary secondary black white ex 폰트 export const fonts xs sm base lg xl 구글 폰트 영어 open sans 한국어 noto sans korean icon은 svg 컴포넌트로 처리 default width height large width height button 관련 아이콘을 사용해서 어쩔 수 없이 높이가 맞춰져서 참고 참고 문서
| 0
|
13,589
| 16,162,950,032
|
IssuesEvent
|
2021-05-01 01:26:53
|
tdwg/chrono
|
https://api.github.com/repos/tdwg/chrono
|
closed
|
Add IRI-value terms section to term list document
|
Process - prepare for Executive review
|
I see the statement "Once ratified, the Darwin Core RDF Guide will be updated to include the IRI versions of appropriate terms from this vocabulary, in the chronoiri: namespace, for use in RDF (e.g., as Linked Open Data)." in [the issue above](https://github.com/tdwg/chrono/issues/18), however, this feels like something that should be an explicit part of the chrono vocabulary, not relegated to the RDF guide, and some possible future action on the RDF guide.
_Originally posted by @chicoreus in https://github.com/tdwg/chrono/issues/15#issuecomment-732207968_
|
1.0
|
Add IRI-value terms section to term list document - I see the statement "Once ratified, the Darwin Core RDF Guide will be updated to include the IRI versions of appropriate terms from this vocabulary, in the chronoiri: namespace, for use in RDF (e.g., as Linked Open Data)." in [the issue above](https://github.com/tdwg/chrono/issues/18), however, this feels like something that should be an explicit part of the chrono vocabulary, not relegated to the RDF guide, and some possible future action on the RDF guide.
_Originally posted by @chicoreus in https://github.com/tdwg/chrono/issues/15#issuecomment-732207968_
|
process
|
add iri value terms section to term list document i see the statement once ratified the darwin core rdf guide will be updated to include the iri versions of appropriate terms from this vocabulary in the chronoiri namespace for use in rdf e g as linked open data in however this feels like something that should be an explicit part of the chrono vocabulary not relegated to the rdf guide and some possible future action on the rdf guide originally posted by chicoreus in
| 1
|
5,258
| 8,051,791,503
|
IssuesEvent
|
2018-08-01 17:12:42
|
dealii/dealii
|
https://api.github.com/repos/dealii/dealii
|
closed
|
Address comments in #6994
|
Post-processing
|
It would be good for us to attend to the few remaining comments listed in #6994.
|
1.0
|
Address comments in #6994 - It would be good for us to attend to the few remaining comments listed in #6994.
|
process
|
address comments in it would be good for us to attend to the few remaining comments listed in
| 1
|
22,322
| 30,884,968,413
|
IssuesEvent
|
2023-08-03 20:52:15
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Release 6.3.2 - Aug 2023
|
P1 type: process release team-OSS
|
# Status of Bazel 6.3.2
- Expected first release candidate date: 2023-08-03
- Expected release date: 2023-08-07
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/59)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 6.3.2, simply send a PR against the `release-6.3.2` branch.
**Task list:**
<!-- The first item is only needed for major releases (X.0.0) -->
- [ ] Create release candidate: 6.3.2
- [ ] Check downstream projects
- [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [ ] Push the blog post: [link to blog post] <!-- Only for major releases. -->
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 6.3.2 - Aug 2023 - # Status of Bazel 6.3.2
- Expected first release candidate date: 2023-08-03
- Expected release date: 2023-08-07
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/59)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 6.3.2, simply send a PR against the `release-6.3.2` branch.
**Task list:**
<!-- The first item is only needed for major releases (X.0.0) -->
- [ ] Create release candidate: 6.3.2
- [ ] Check downstream projects
- [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [ ] Push the blog post: [link to blog post] <!-- Only for major releases. -->
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release aug status of bazel expected first release candidate date expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list create release candidate check downstream projects create push the blog post update the
| 1
|
7,603
| 10,720,284,492
|
IssuesEvent
|
2019-10-26 16:34:12
|
kavics/SnTraceViewer
|
https://api.github.com/repos/kavics/SnTraceViewer
|
opened
|
JOIN command
|
PROCESSOR
|
Write a command that can join trace files by any reader. Options:
- All
- Per session
|
1.0
|
JOIN command - Write a command that can join trace files by any reader. Options:
- All
- Per session
|
process
|
join command write a command that can join trace files by any reader options all per session
| 1
|
3,771
| 6,742,012,028
|
IssuesEvent
|
2017-10-20 05:03:33
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Vagrant post processor changes disk size
|
need-more-info post-processor/vagrant question
|
Using the following JSON file the virtual box is built with the correct disk space of approx 11GB.
However, when it passes through the vagrant post processor the size is reset to 4GB, which is the default value.
The preseed.cfg file is below too
````
{
"builders": [{
"type": "qemu",
"iso_url": "{{user `mirror`}}/14.04/ubuntu-14.04.4-server-amd64.iso",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"output_directory": "output-ubuntu-14.04-amd64-{{build_type}}",
"vm_name": "packer-ubuntu-14.04-amd64",
"disk_size": "{{user `disk_size`}}",
"headless": "{{user `headless`}}",
"http_directory": "http",
"boot_wait": "5s",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz ",
"initrd=/install/initrd.gz ",
"biosdevname=0 ",
"auto-install/enable=true ",
"debconf/priority=critical ",
"preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/ubuntu-14.04/preseed.cfg ",
"<enter>"
],
"ssh_timeout": "{{user `ssh_timeout`}}",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"shutdown_command": "sudo poweroff",
"qemuargs": [
["-m", "{{user `memory`}}"],
["-smp", "{{user `cpus`}}"]
]
}, {
"type": "virtualbox-iso",
"guest_os_type": "Ubuntu_64",
"iso_url": "{{user `mirror`}}/14.04/ubuntu-14.04.4-server-amd64.iso",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"output_directory": "output-ubuntu-14.04-amd64-{{build_type}}",
"vm_name": "packer-ubuntu-14.04-amd64",
"format": "ova",
"disk_size": "{{user `disk_size`}}",
"headless": "{{user `headless`}}",
"http_directory": "http",
"boot_wait": "5s",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz ",
"initrd=/install/initrd.gz ",
"biosdevname=0 ",
"auto-install/enable=true ",
"debconf/priority=critical ",
"preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/ubuntu-14.04/preseed.cfg ",
"<enter>"
],
"ssh_timeout": "{{user `ssh_timeout`}}",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"shutdown_command": "sudo poweroff",
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "{{user `memory`}}"],
["modifyvm", "{{.Name}}", "--cpus", "{{user `cpus`}}"]
]
}, {
"type": "vmware-iso",
"guest_os_type": "ubuntu-64",
"iso_url": "{{user `mirror`}}/14.04/ubuntu-14.04.4-server-amd64.iso",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"output_directory": "output-ubuntu-14.04-amd64-{{build_type}}",
"vm_name": "packer-ubuntu-14.04-amd64",
"disk_size": "{{user `disk_size`}}",
"headless": "{{user `headless`}}",
"http_directory": "http",
"boot_wait": "5s",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz ",
"initrd=/install/initrd.gz ",
"biosdevname=0 ",
"auto-install/enable=true ",
"debconf/priority=critical ",
"preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/ubuntu-14.04/preseed.cfg ",
"<enter>"
],
"ssh_timeout": "{{user `ssh_timeout`}}",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"tools_upload_flavor": "linux",
"shutdown_command": "sudo poweroff",
"vmx_data": {
"memsize": "{{user `memory`}}",
"numvcpus": "{{user `cpus`}}"
}
}],
"provisioners": [{
"type": "shell",
"scripts": [
"scripts/ubuntu/apt.sh",
"scripts/ubuntu/virtualbox.sh",
"scripts/ubuntu-14.04/vmware.sh",
"scripts/ubuntu/init.sh",
"scripts/common/vagrant.sh",
"scripts/common/sshd.sh",
"scripts/ubuntu/cleanup.sh",
"scripts/common/minimize.sh"
]
}],
"post-processors": [
{
"type": "vagrant",
"compression_level": "{{user `compression_level`}}",
"output": "ubuntu-14.04-amd64-{{.Provider}}.box",
"keep_input_artifact": true
}
],
"variables": {
"compression_level": "6",
"cpus": "1",
"disk_size": "11264",
"headless": "false",
"iso_checksum": "07e4bb5569814eab41fafac882ba127893e3ff0bdb7ec931c9b2d040e3e94e7a",
"iso_checksum_type": "sha256",
"memory": "512",
"mirror": "http://releases.ubuntu.com",
"ssh_timeout": "60m"
}
}
````
preseed.cfg:
````
d-i debian-installer/locale string en_US
d-i time/zone string UTC
d-i keyboard-configuration/xkb-keymap select us
d-i partman-auto/method string regular
d-i partman-auto/expert_recipe string \
scheme :: \
200 2048 200 ext4 \
$primary{ } \
$bootable{ } \
method{ format } \
format{ } \
use_filesystem{ } \
filesystem{ ext4 } \
mountpoint{ /boot } . \
512 512 512 linux-swap \
$primary{ } \
method{ swap } \
format{ } . \
10240 10240 1000000000 ext4 \
$primary{ } \
method{ format } \
format{ } \
use_filesystem{ } \
filesystem{ ext4 } \
mountpoint{ / } .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i base-installer/excludes string laptop-detect
d-i passwd/root-password-again password vagrant
d-i passwd/root-password password vagrant
d-i passwd/user-fullname string vagrant
d-i passwd/username string vagrant
d-i passwd/user-password password vagrant
d-i passwd/user-password-again password vagrant
d-i user-setup/allow-password-weak boolean true
d-i pkgsel/include string curl openssh-server sudo
d-i pkgsel/language-packs multiselect
d-i finish-install/reboot_in_progress note
d-i preseed/early_command string \
mkdir -p /usr/lib/post-base-installer.d && \
echo "sed -i -e 's/^in-target.*tasksel.*/#\\0/' /var/lib/dpkg/info/pkgsel.postinst" > /usr/lib/post-base-installer.d/90skip-tasksel && \
chmod +x /usr/lib/post-base-installer.d/90skip-tasksel
````
|
1.0
|
Vagrant post processor changes disk size - Using the following JSON file the virtual box is built with the correct disk space of approx 11GB.
However, when it passes through the vagrant post processor the size is reset to 4GB, which is the default value.
The preseed.cfg file is below too
````
{
"builders": [{
"type": "qemu",
"iso_url": "{{user `mirror`}}/14.04/ubuntu-14.04.4-server-amd64.iso",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"output_directory": "output-ubuntu-14.04-amd64-{{build_type}}",
"vm_name": "packer-ubuntu-14.04-amd64",
"disk_size": "{{user `disk_size`}}",
"headless": "{{user `headless`}}",
"http_directory": "http",
"boot_wait": "5s",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz ",
"initrd=/install/initrd.gz ",
"biosdevname=0 ",
"auto-install/enable=true ",
"debconf/priority=critical ",
"preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/ubuntu-14.04/preseed.cfg ",
"<enter>"
],
"ssh_timeout": "{{user `ssh_timeout`}}",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"shutdown_command": "sudo poweroff",
"qemuargs": [
["-m", "{{user `memory`}}"],
["-smp", "{{user `cpus`}}"]
]
}, {
"type": "virtualbox-iso",
"guest_os_type": "Ubuntu_64",
"iso_url": "{{user `mirror`}}/14.04/ubuntu-14.04.4-server-amd64.iso",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"output_directory": "output-ubuntu-14.04-amd64-{{build_type}}",
"vm_name": "packer-ubuntu-14.04-amd64",
"format": "ova",
"disk_size": "{{user `disk_size`}}",
"headless": "{{user `headless`}}",
"http_directory": "http",
"boot_wait": "5s",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz ",
"initrd=/install/initrd.gz ",
"biosdevname=0 ",
"auto-install/enable=true ",
"debconf/priority=critical ",
"preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/ubuntu-14.04/preseed.cfg ",
"<enter>"
],
"ssh_timeout": "{{user `ssh_timeout`}}",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"shutdown_command": "sudo poweroff",
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "{{user `memory`}}"],
["modifyvm", "{{.Name}}", "--cpus", "{{user `cpus`}}"]
]
}, {
"type": "vmware-iso",
"guest_os_type": "ubuntu-64",
"iso_url": "{{user `mirror`}}/14.04/ubuntu-14.04.4-server-amd64.iso",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"output_directory": "output-ubuntu-14.04-amd64-{{build_type}}",
"vm_name": "packer-ubuntu-14.04-amd64",
"disk_size": "{{user `disk_size`}}",
"headless": "{{user `headless`}}",
"http_directory": "http",
"boot_wait": "5s",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz ",
"initrd=/install/initrd.gz ",
"biosdevname=0 ",
"auto-install/enable=true ",
"debconf/priority=critical ",
"preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/ubuntu-14.04/preseed.cfg ",
"<enter>"
],
"ssh_timeout": "{{user `ssh_timeout`}}",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"tools_upload_flavor": "linux",
"shutdown_command": "sudo poweroff",
"vmx_data": {
"memsize": "{{user `memory`}}",
"numvcpus": "{{user `cpus`}}"
}
}],
"provisioners": [{
"type": "shell",
"scripts": [
"scripts/ubuntu/apt.sh",
"scripts/ubuntu/virtualbox.sh",
"scripts/ubuntu-14.04/vmware.sh",
"scripts/ubuntu/init.sh",
"scripts/common/vagrant.sh",
"scripts/common/sshd.sh",
"scripts/ubuntu/cleanup.sh",
"scripts/common/minimize.sh"
]
}],
"post-processors": [
{
"type": "vagrant",
"compression_level": "{{user `compression_level`}}",
"output": "ubuntu-14.04-amd64-{{.Provider}}.box",
"keep_input_artifact": true
}
],
"variables": {
"compression_level": "6",
"cpus": "1",
"disk_size": "11264",
"headless": "false",
"iso_checksum": "07e4bb5569814eab41fafac882ba127893e3ff0bdb7ec931c9b2d040e3e94e7a",
"iso_checksum_type": "sha256",
"memory": "512",
"mirror": "http://releases.ubuntu.com",
"ssh_timeout": "60m"
}
}
````
preseed.cfg:
````
d-i debian-installer/locale string en_US
d-i time/zone string UTC
d-i keyboard-configuration/xkb-keymap select us
d-i partman-auto/method string regular
d-i partman-auto/expert_recipe string \
scheme :: \
200 2048 200 ext4 \
$primary{ } \
$bootable{ } \
method{ format } \
format{ } \
use_filesystem{ } \
filesystem{ ext4 } \
mountpoint{ /boot } . \
512 512 512 linux-swap \
$primary{ } \
method{ swap } \
format{ } . \
10240 10240 1000000000 ext4 \
$primary{ } \
method{ format } \
format{ } \
use_filesystem{ } \
filesystem{ ext4 } \
mountpoint{ / } .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i base-installer/excludes string laptop-detect
d-i passwd/root-password-again password vagrant
d-i passwd/root-password password vagrant
d-i passwd/user-fullname string vagrant
d-i passwd/username string vagrant
d-i passwd/user-password password vagrant
d-i passwd/user-password-again password vagrant
d-i user-setup/allow-password-weak boolean true
d-i pkgsel/include string curl openssh-server sudo
d-i pkgsel/language-packs multiselect
d-i finish-install/reboot_in_progress note
d-i preseed/early_command string \
mkdir -p /usr/lib/post-base-installer.d && \
echo "sed -i -e 's/^in-target.*tasksel.*/#\\0/' /var/lib/dpkg/info/pkgsel.postinst" > /usr/lib/post-base-installer.d/90skip-tasksel && \
chmod +x /usr/lib/post-base-installer.d/90skip-tasksel
````
|
process
|
vagrant post processor changes disk size using the following json file the virtual box is built with the correct disk space of approx however when is passes through the vagrant post processor the size is reset to which is a default value the preseed cfg file is below too builders type qemu iso url user mirror ubuntu server iso iso checksum user iso checksum iso checksum type user iso checksum type output directory output ubuntu build type vm name packer ubuntu disk size user disk size headless user headless http directory http boot wait boot command install vmlinuz initrd install initrd gz biosdevname auto install enable true debconf priority critical preseed url ssh timeout user ssh timeout ssh username vagrant ssh password vagrant shutdown command sudo poweroff qemuargs type virtualbox iso guest os type ubuntu iso url user mirror ubuntu server iso iso checksum user iso checksum iso checksum type user iso checksum type output directory output ubuntu build type vm name packer ubuntu format ova disk size user disk size headless user headless http directory http boot wait boot command install vmlinuz initrd install initrd gz biosdevname auto install enable true debconf priority critical preseed url ssh timeout user ssh timeout ssh username vagrant ssh password vagrant shutdown command sudo poweroff vboxmanage type vmware iso guest os type ubuntu iso url user mirror ubuntu server iso iso checksum user iso checksum iso checksum type user iso checksum type output directory output ubuntu build type vm name packer ubuntu disk size user disk size headless user headless http directory http boot wait boot command install vmlinuz initrd install initrd gz biosdevname auto install enable true debconf priority critical preseed url ssh timeout user ssh timeout ssh username vagrant ssh password vagrant tools upload flavor linux shutdown command sudo poweroff vmx data memsize user memory numvcpus user cpus provisioners type shell scripts scripts ubuntu apt sh scripts ubuntu 
virtualbox sh scripts ubuntu vmware sh scripts ubuntu init sh scripts common vagrant sh scripts common sshd sh scripts ubuntu cleanup sh scripts common minimize sh post processors type vagrant compression level user compression level output ubuntu provider box keep input artifact true variables compression level cpus disk size headless false iso checksum iso checksum type memory mirror ssh timeout preseed cfg d i debian installer locale string en us d i time zone string utc d i keyboard configuration xkb keymap select us d i partman auto method string regular d i partman auto expert recipe string scheme primary bootable method format format use filesystem filesystem mountpoint boot linux swap primary method swap format primary method format format use filesystem filesystem mountpoint d i partman partitioning confirm write new label boolean true d i partman choose partition select finish d i partman confirm boolean true d i partman confirm nooverwrite boolean true d i base installer excludes string laptop detect d i passwd root password again password vagrant d i passwd root password password vagrant d i passwd user fullname string vagrant d i passwd username string vagrant d i passwd user password password vagrant d i passwd user password again password vagrant d i user setup allow password weak boolean true d i pkgsel include string curl openssh server sudo d i pkgsel language packs multiselect d i finish install reboot in progress note d i preseed early command string mkdir p usr lib post base installer d echo sed i e s in target tasksel var lib dpkg info pkgsel postinst usr lib post base installer d tasksel chmod x usr lib post base installer d tasksel
| 1
|
17,240
| 22,969,303,996
|
IssuesEvent
|
2022-07-20 00:21:02
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
opened
|
storage: propagate retry config to grpc methods
|
api: storage type: process
|
While doing #6370 I noticed that `ShouldRetry` is referenced directly from some methods in grpc_client.go. This means that these methods do not honor the user-configured WithErrorFunc RetryOption, if it has been passed. Filing an issue to fix this up.
cc @noahdietz
|
1.0
|
storage: propagate retry config to grpc methods - While doing #6370 I noticed that `ShouldRetry` is referenced directly from some methods in grpc_client.go. This means that these methods do not honor the user-configured WithErrorFunc RetryOption, if it has been passed. Filing an issue to fix this up.
cc @noahdietz
|
process
|
storage propagate retry config to grpc methods while doing i noticed that shouldretry is referenced directly from some methods in grpc client go this means that these methods do not honor the user configured witherrorfunc retryoption if it has been passed filing an issue to fix this up cc noahdietz
| 1
|
309,459
| 26,662,736,539
|
IssuesEvent
|
2023-01-25 22:53:24
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Despesas com diárias - Despesas com diárias - Itabirito
|
generalization test development template - ABO (21) tag - Despesas com diárias subtag - Despesas com diárias
|
DoD: Realizar o teste de Generalização do validador da tag Despesas com diárias - Despesas com diárias para o Município de Itabirito.
|
1.0
|
Teste de generalizacao para a tag Despesas com diárias - Despesas com diárias - Itabirito - DoD: Realizar o teste de Generalização do validador da tag Despesas com diárias - Despesas com diárias para o Município de Itabirito.
|
non_process
|
teste de generalizacao para a tag despesas com diárias despesas com diárias itabirito dod realizar o teste de generalização do validador da tag despesas com diárias despesas com diárias para o município de itabirito
| 0
|
330,727
| 28,484,909,306
|
IssuesEvent
|
2023-04-18 07:08:58
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix miscellaneous.test_numpy_copysign
|
NumPy Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix miscellaneous.test_numpy_copysign - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4728598063/jobs/8390290760" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|
non_process
|
fix miscellaneous test numpy copysign tensorflow img src torch img src numpy img src jax img src
| 0
|
32,861
| 7,611,173,768
|
IssuesEvent
|
2018-05-01 12:43:39
|
chrisblakley/Nebula
|
https://api.github.com/repos/chrisblakley/Nebula
|
opened
|
"Next Nebula Version" data is stuck
|
:beetle: Bug :fire: High Priority Backend (Server) WP Admin / Shortcode / Widget
|
The Nebula theme updater does not appear to be updating the remote Nebula theme version from Github.

It's stuck at `5.11.25.8631` even after manually updating the theme.
|
1.0
|
"Next Nebula Version" data is stuck - The Nebula theme updater does not appear to be updating the remote Nebula theme version from Github.

It's stuck at `5.11.25.8631` even after manually updating the theme.
|
non_process
|
next nebula version data is stuck the nebula theme updater does not appear to be updating the remote nebula theme version from github it s stuck at even after manually updating the theme
| 0
|
20,690
| 27,361,353,764
|
IssuesEvent
|
2023-02-27 16:04:02
|
helmholtz-analytics/heat
|
https://api.github.com/repos/helmholtz-analytics/heat
|
closed
|
[Bug]: convolve with distributed kernel on multiple GPUs
|
bug :bug: signal processing communication signal
|
### What happened?
convolve does not work if the kernel is distributed when more than one GPU is available.
### Code snippet triggering the error
```python
import heat as ht
dis_signal = ht.arange(0, 16, split=0, device='gpu', dtype=ht.int)
dis_kernel_odd = ht.ones(3, split=0, dtype=ht.int, device='gpu')
conv = ht.convolve(dis_signal, dis_kernel_odd, mode='full')
```
### Error message or erroneous outcome
```shell
$ CUDA_VISIBLE_DEVICES=0,1,2,3 srun --ntasks=2 -l python test.py
1:Traceback (most recent call last):
1: File ".../test.py", line 7, in <module>
1: conv = ht.convolve(dis_signal, dis_kernel_odd, mode='full')
1: File ".../heat-venv_2023/lib/python3.10/site-packages/heat/core/signal.py", line 161, in convolve
1: local_signal_filtered = fc.conv1d(signal, t_v1)
1: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper__cudnn_convolution)
```
### Version
main (development branch)
### Python version
3.10
### PyTorch version
1.12
### MPI version
```shell
OpenMPI 4.1.4
```
|
1.0
|
[Bug]: convolve with distributed kernel on multiple GPUs - ### What happened?
convolve does not work if the kernel is distributed when more than one GPU is available.
### Code snippet triggering the error
```python
import heat as ht
dis_signal = ht.arange(0, 16, split=0, device='gpu', dtype=ht.int)
dis_kernel_odd = ht.ones(3, split=0, dtype=ht.int, device='gpu')
conv = ht.convolve(dis_signal, dis_kernel_odd, mode='full')
```
### Error message or erroneous outcome
```shell
$ CUDA_VISIBLE_DEVICES=0,1,2,3 srun --ntasks=2 -l python test.py
1:Traceback (most recent call last):
1: File ".../test.py", line 7, in <module>
1: conv = ht.convolve(dis_signal, dis_kernel_odd, mode='full')
1: File ".../heat-venv_2023/lib/python3.10/site-packages/heat/core/signal.py", line 161, in convolve
1: local_signal_filtered = fc.conv1d(signal, t_v1)
1: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper__cudnn_convolution)
```
### Version
main (development branch)
### Python version
3.10
### PyTorch version
1.12
### MPI version
```shell
OpenMPI 4.1.4
```
|
process
|
convolve with distributed kernel on multiple gpus what happened convolve does not work if the kernel is distributed when more than one gpu is available code snippet triggering the error python import heat as ht dis signal ht arange split device gpu dtype ht int dis kernel odd ht ones split dtype ht int device gpu conv ht convolve dis signal dis kernel odd mode full error message or erroneous outcome shell cuda visible devices srun ntasks l python test py traceback most recent call last file test py line in conv ht convolve dis signal dis kernel odd mode full file heat venv lib site packages heat core signal py line in convolve local signal filtered fc signal t runtimeerror expected all tensors to be on the same device but found at least two devices cuda and cuda when checking argument for argument weight in method wrapper cudnn convolution version main development branch python version pytorch version mpi version shell openmpi
| 1
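The device-mismatch traceback in the record above arises because the gathered kernel slice lives on a different GPU than the local signal chunk. The expected result of `mode='full'` itself can be checked against plain NumPy — a reference sketch under the assumption that Heat matches NumPy's convolution semantics, not Heat's distributed code:

```python
import numpy as np

# mode='full' returns len(signal) + len(kernel) - 1 samples.
signal = np.arange(0, 16)
kernel = np.ones(3, dtype=int)
full = np.convolve(signal, kernel, mode='full')
print(full.shape)  # (18,)
```

A plausible fix on the Heat side would be to move the received kernel tensor onto the local signal's device before calling `fc.conv1d` (an assumption about the eventual patch, not the merged change).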
|
14,255
| 17,190,889,688
|
IssuesEvent
|
2021-07-16 10:46:10
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Status of Bazel 5.0.0-pre.20210708.4
|
P1 release team-XProduct type: process
|
- Expected release date: 2021-07-16
Task list:
- [ ] Pick release baseline: [ca1d20fd](https://github.com/bazelbuild/bazel/commit/ca1d20fdfa95dad533c64aba08ba9d7d98be41b7) with cherrypicks [802901e6](https://github.com/bazelbuild/bazel/commit/802901e697015ee6a56ac36cd0000c1079207d12) [aa768ada](https://github.com/bazelbuild/bazel/commit/aa768ada9ef6bcd8de878a5ca2dbd9932f0868fc) [4bcf2e83](https://github.com/bazelbuild/bazel/commit/4bcf2e83c5cb4f459aae815b38f1edd823286a29) [b27fd22f](https://github.com/bazelbuild/bazel/commit/b27fd22f1bd1e29ec2475a3935c9004cc14713bf) [5d926349](https://github.com/bazelbuild/bazel/commit/5d926349949060c5a3e6699550fa1ac64761901e)
- [ ] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4rc1/index.html
- [ ] Post-submit: https://buildkite.com/bazel/bazel-bazel
- [ ] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4/index.html
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Status of Bazel 5.0.0-pre.20210708.4 -
- Expected release date: 2021-07-16
Task list:
- [ ] Pick release baseline: [ca1d20fd](https://github.com/bazelbuild/bazel/commit/ca1d20fdfa95dad533c64aba08ba9d7d98be41b7) with cherrypicks [802901e6](https://github.com/bazelbuild/bazel/commit/802901e697015ee6a56ac36cd0000c1079207d12) [aa768ada](https://github.com/bazelbuild/bazel/commit/aa768ada9ef6bcd8de878a5ca2dbd9932f0868fc) [4bcf2e83](https://github.com/bazelbuild/bazel/commit/4bcf2e83c5cb4f459aae815b38f1edd823286a29) [b27fd22f](https://github.com/bazelbuild/bazel/commit/b27fd22f1bd1e29ec2475a3935c9004cc14713bf) [5d926349](https://github.com/bazelbuild/bazel/commit/5d926349949060c5a3e6699550fa1ac64761901e)
- [ ] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4rc1/index.html
- [ ] Post-submit: https://buildkite.com/bazel/bazel-bazel
- [ ] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4/index.html
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
status of bazel pre expected release date task list pick release baseline with cherrypicks create release candidate post submit push the release update the
| 1
|
699,823
| 24,033,702,350
|
IssuesEvent
|
2022-09-15 17:06:16
|
mlibrary/heliotrope
|
https://api.github.com/repos/mlibrary/heliotrope
|
closed
|
Set a default press for a user
|
dashboard roles low priority
|
**STORY**
As a user with rights to multiple presses, I would like to be able to set a default press for myself, so that I don't constantly have to select a press for each action I perform.
**DETAILS**
This could be a session-level selection that I make when I log in, or a setting that I could change in my user profile - i.e. 'set "Indiana University" as my default press ' or 'Always prompt me for the press to use'.
**ACCEPTANCE**
- [ ] There is some kind of mapping that associates each press administrator or editor with a specific press
- [ ] Platform admins can update or set the mappings as new users are created
- [ ] When I create a new monograph or collection, my default press is selected
|
1.0
|
Set a default press for a user - **STORY**
As a user with rights to multiple presses, I would like to be able to set a default press for myself, so that I don't constantly have to select a press for each action I perform.
**DETAILS**
This could be a session-level selection that I make when I log in, or a setting that I could change in my user profile - i.e. 'set "Indiana University" as my default press ' or 'Always prompt me for the press to use'.
**ACCEPTANCE**
- [ ] There is some kind of mapping that associates each press administrator or editor with a specific press
- [ ] Platform admins can update or set the mappings as new users are created
- [ ] When I create a new monograph or collection, my default press is selected
|
non_process
|
set a default press for a user story as a user with rights to multiple presses i would like to be able to set a default press for myself so that i don t constantly have to select a press for each action i perform details this could be a session level selection that i make when i log in or a setting that i could change in my user profile i e set indiana university as my default press or always prompt me for the press to use acceptance there is some kind of mapping that associates each press administrator or editor with a specific press platform admins can update or set the mappings as new users are created when i create a new monograph or collection my default press is selected
| 0
|
5,559
| 2,791,781,975
|
IssuesEvent
|
2015-05-10 12:45:14
|
fsr-itse/1327
|
https://api.github.com/repos/fsr-itse/1327
|
closed
|
TOC in sidebar
|
design functionality minor
|
In view mode the sidebar of information pages should have a TOC of the main headlines of the page linking to the respective position on the page.
|
1.0
|
TOC in sidebar - In view mode the sidebar of information pages should have a TOC of the main headlines of the page linking to the respective position on the page.
|
non_process
|
toc in sidebar in view mode the sidebar of information pages should have a toc of the main headlines of the page linking to the respective position on the page
| 0
|
49,978
| 6,289,340,522
|
IssuesEvent
|
2017-07-19 18:59:21
|
CentralProgramming/CentralProgramming.github.io
|
https://api.github.com/repos/CentralProgramming/CentralProgramming.github.io
|
closed
|
Team Section Layout
|
design help wanted
|
The way it's setup makes it spread out and very long. Would it be better to set it up in a 2x2 table instead?
|
1.0
|
Team Section Layout - The way it's setup makes it spread out and very long. Would it be better to set it up in a 2x2 table instead?
|
non_process
|
team section layout the way it s setup makes it spread out and very long would it be better to set it up in a table instead
| 0
|
21,161
| 3,466,418,383
|
IssuesEvent
|
2015-12-22 03:19:18
|
netty/netty
|
https://api.github.com/repos/netty/netty
|
closed
|
Http2ConnectionHandler builder API design issues
|
defect
|
I see the following design issues with the builder API of `Http2ConnectionHandler`.
- A user might want to extend `Http2ConnectionHandler` and define his/her own static inner `Builder` class that extends `Http2ConnectionHandler.BuilderBase`. This introduces potential confusion because there's already `Http2ConnectionHandler.Builder`. Your IDE will warn about this name duplication as well.
- `BuilderBase` exposes all setters with `public` modifier. A user's `Builder` might not want to expose them to enforce it to certain configuration. There's no way to hide them because it's public already and they are final.
- `BuilderBase.build(Http2ConnectionDecoder, Http2ConnectionEncoder)` ignores most properties exposed by `BuilderBase`, such as `validateHeaders`, `frameLogger` and `encoderEnforceMaxConcurrentStreams`. If any `build()` method ignores the properties exposed by the builder, there's something wrong.
- A user's `Builder` that extends `BuilderBase` might want to require more parameters in `build()`. There's no way to do that cleanly because `build()` is public and final already.
|
1.0
|
Http2ConnectionHandler builder API design issues - I see the following design issues with the builder API of `Http2ConnectionHandler`.
- A user might want to extend `Http2ConnectionHandler` and define his/her own static inner `Builder` class that extends `Http2ConnectionHandler.BuilderBase`. This introduces potential confusion because there's already `Http2ConnectionHandler.Builder`. Your IDE will warn about this name duplication as well.
- `BuilderBase` exposes all setters with `public` modifier. A user's `Builder` might not want to expose them to enforce it to certain configuration. There's no way to hide them because it's public already and they are final.
- `BuilderBase.build(Http2ConnectionDecoder, Http2ConnectionEncoder)` ignores most properties exposed by `BuilderBase`, such as `validateHeaders`, `frameLogger` and `encoderEnforceMaxConcurrentStreams`. If any `build()` method ignores the properties exposed by the builder, there's something wrong.
- A user's `Builder` that extends `BuilderBase` might want to require more parameters in `build()`. There's no way to do that cleanly because `build()` is public and final already.
|
non_process
|
builder api design issues i see the following design issues with the builder api of a user might want to extend and define his her own static inner builder class that extends builderbase this introduces potential confusion because there s already builder your ide will warn about this name duplication as well builderbase exposes all setters with public modifier a user s builder might not want to expose them to enforce it to certain configuration there s no way to hide them because it s public already and they are final builderbase build ignores most properties exposed by builderbase such as validateheaders framelogger and encoderenforcemaxconcurrentstreams if any build method ignores the properties exposed by the builder there s something wrong a user s builder that extends builderbase might want to require more parameters in build there s no way to do that cleanly because build is public and final already
| 0
|
6,092
| 8,951,510,326
|
IssuesEvent
|
2019-01-25 14:10:18
|
jasonblais/mattermost-community
|
https://api.github.com/repos/jasonblais/mattermost-community
|
opened
|
Update CONTRIBUTING.md file for Mattermost repos
|
Contributor Journey Process
|
Reviewing https://github.com/mattermost/mattermost-server/blob/master/CONTRIBUTING.md is overwhelming for contributors, and not standard for what's included in a contribution file.
We should instead link to a contribution checklist (https://developers.mattermost.com/contribute/getting-started/contribution-checklist/) and have a separate document for describing the PR review process and the different labels we have.
|
1.0
|
Update CONTRIBUTING.md file for Mattermost repos - Reviewing https://github.com/mattermost/mattermost-server/blob/master/CONTRIBUTING.md is overwhelming for contributors, and not standard for what's included in a contribution file.
We should instead link to a contribution checklist (https://developers.mattermost.com/contribute/getting-started/contribution-checklist/) and have a separate document for describing the PR review process and the different labels we have.
|
process
|
update contributing md file for mattermost repos reviewing is overwhelming for contributors and not standard for what s included in a contribution file we should instead link to a contribution checklist and have a separate document for describing the pr review process and the different labels we have
| 1
|
5,625
| 8,481,800,181
|
IssuesEvent
|
2018-10-25 16:40:53
|
googleapis/google-cloud-node
|
https://api.github.com/repos/googleapis/google-cloud-node
|
closed
|
Investigate remaining Kokoro failures
|
type: process
|
- [x] https://github.com/googleapis/nodejs-translate/pull/127 - translate requires an `API_KEY` in its env var config.
- [x] https://github.com/googleapis/nodejs-os-login/pull/76 - all of the kokoro tests seemed to hang
- [x] https://github.com/googleapis/nodejs-dns/pull/110 - system tests are failing. I think there's an env var in circleCI we need to bring over.
- [x] https://github.com/googleapis/nodejs-dlp/pull/144 - not sure why, the samples tests are failing
- [ ] https://github.com/googleapis/nodejs-storage/pull/406 - the system tests are failing here
- [x] https://github.com/googleapis/nodejs-dialogflow/pull/171 - no backend config
- [ ] https://github.com/googleapis/nodejs-logging/pull/225 - needs an updated synth file
|
1.0
|
Investigate remaining Kokoro failures - - [x] https://github.com/googleapis/nodejs-translate/pull/127 - translate requires an `API_KEY` in its env var config.
- [x] https://github.com/googleapis/nodejs-os-login/pull/76 - all of the kokoro tests seemed to hang
- [x] https://github.com/googleapis/nodejs-dns/pull/110 - system tests are failing. I think there's an env var in circleCI we need to bring over.
- [x] https://github.com/googleapis/nodejs-dlp/pull/144 - not sure why, the samples tests are failing
- [ ] https://github.com/googleapis/nodejs-storage/pull/406 - the system tests are failing here
- [x] https://github.com/googleapis/nodejs-dialogflow/pull/171 - no backend config
- [ ] https://github.com/googleapis/nodejs-logging/pull/225 - needs an updated synth file
|
process
|
investigate remaining kokoro failures translate requires an api key in it s env var config all of the kokoro tests seemed to hang system tests are failing i think there s an env var in circleci we need to bring over not sure why the samples tests are failing the system tests are failing here no backend config needs an updated synth file
| 1
|
4,974
| 7,807,793,636
|
IssuesEvent
|
2018-06-11 18:02:50
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Add possibility to weight-order Processes and Assemblies
|
space: assemblies space: processes stale-issue wontfix
|
# This is a Feature Proposal
#### :tophat: Description
* When multiple processes are active, on top of starring a process, it is convenient for technicians to be able to weight/order the processes and control the way they are displayed on the front page and process sections.
* The same goes for Assemblies.
|
1.0
|
Add possibility to weight-order Processes and Assemblies -
# This is a Feature Proposal
#### :tophat: Description
* When multiple processes are active, on top of starring a process, it is convenient for technicians to be able to weight/order the processes and control the way they are displayed on the front page and process sections.
* The same goes for Assemblies.
|
process
|
add possibility to weight order processes and assemblies this is a feature proposal tophat description when multiple processes are active on top of starring a process it is convenient for technicians to be able to weight order the processes and control the way they are displayed on the front page and process sections the same goes for assemblies
| 1
|
9,310
| 12,322,832,681
|
IssuesEvent
|
2020-05-13 11:03:26
|
threebotserver/publishingtools
|
https://api.github.com/repos/threebotserver/publishingtools
|
closed
|
Blog: Logo not working above the date in blog articles
|
process_wontfix
|
<img width="788" alt="Screen Shot 2020-05-12 at 3 05 56 PM" src="https://user-images.githubusercontent.com/50134479/81694827-1b109980-9462-11ea-87f2-5bc3c1009ec5.png">
|
1.0
|
Blog: Logo not working above the date in blog articles - <img width="788" alt="Screen Shot 2020-05-12 at 3 05 56 PM" src="https://user-images.githubusercontent.com/50134479/81694827-1b109980-9462-11ea-87f2-5bc3c1009ec5.png">
|
process
|
blog logo not working above the date in blog articles img width alt screen shot at pm src
| 1
|
12,850
| 15,238,459,315
|
IssuesEvent
|
2021-02-19 01:58:53
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Fix Geometries be topologically aware/compliant
|
Feature Request Feedback Processing stale
|
Author Name: **Brett Carlock** (@Saijin-Naib)
Original Redmine Issue: [21708](https://issues.qgis.org/issues/21708)
Redmine category:processing/core
---
I'd love to see the Fix Geometries tool be made a bit more robust to help repair topological issues, such as pseudo-nodes as identified by the Topology Checker tool.
I had to turn to ArcMap to remove these topological issues as I was not able to in QGIS.
|
1.0
|
Fix Geometries be topologically aware/compliant - Author Name: **Brett Carlock** (@Saijin-Naib)
Original Redmine Issue: [21708](https://issues.qgis.org/issues/21708)
Redmine category:processing/core
---
I'd love to see the Fix Geometries tool be made a bit more robust to help repair topological issues, such as pseudo-nodes as identified by the Topology Checker tool.
I had to turn to ArcMap to remove these topological issues as I was not able to in QGIS.
|
process
|
fix geometries be topologically aware compliant author name brett carlock saijin naib original redmine issue redmine category processing core i d love to see the fix geometries tool be made a bit more robust to help repair topological issues such as pseudo nodes as identified by the topology checker tool i had to turn to arcmap to remove these topological issues as i was not able to in qgis
| 1
|
12,206
| 14,742,762,673
|
IssuesEvent
|
2021-01-07 12:51:28
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Memphis SD Import
|
anc-process anp-1 ant-support has attachment
|
In GitLab by @kdjstudios on Jun 5, 2019, 13:43
Laura Duckworth wrote:
> All of the accounts listed are before my time, so the information I am providing below is just based on what I have access to currently.
>
> All accounts in green have contracts that were located and stated a deposit was paid
>
> All accounts in Orange have contracts that were located but no mention of the deposit on the contract
>
> All account blank , had contracts I could not locate
NOTE: I have attached the full email, as the table would not copy paste with highlighted background colors. [original_message__5_.html](/uploads/3ff44060a7585831d52ff6c645221b9f/original_message__5_.html)
|
1.0
|
Memphis SD Import - In GitLab by @kdjstudios on Jun 5, 2019, 13:43
Laura Duckworth wrote:
> All of the accounts listed are before my time, so the information I am providing below is just based on what I have access to currently.
>
> All accounts in green have contracts that were located and stated a deposit was paid
>
> All accounts in Orange have contracts that were located but no mention of the deposit on the contract
>
> All account blank , had contracts I could not locate
NOTE: I have attached the full email, as the table would not copy paste with highlighted background colors. [original_message__5_.html](/uploads/3ff44060a7585831d52ff6c645221b9f/original_message__5_.html)
|
process
|
memphis sd import in gitlab by kdjstudios on jun laura duckworth wrote all of the accounts listed are before my time so the information i am providing below is just based on what i have access to currently all accounts in green have contracts that were located and stated a deposit was paid all accounts in orange have contracts that were located but no mention of the deposit on the contract all account blank had contracts i could not locate note i have attached the full email as the table would not copy paste with highlighted background colors uploads original message html
| 1
|
3,654
| 6,691,433,457
|
IssuesEvent
|
2017-10-09 13:09:15
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Task List / Process List define columns in the HTML and not in the JS
|
comp: activiti-processList comp: activiti-taskList New Feature
|
<!--
PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER.
REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT:
Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components
-->
**Type of issue:** (check with "[x]")
```
- [ x] New feature request
- [ ] Bug
- [ ] Support request
- [ ] Documentation
```
**Current behavior:**
<!-- Describe the current behavior. -->
For Process and Task Lists you need to define the columns by using JS
```ts
this.dataTasks = new ObjectDataTableAdapter(
[],
[
{type: 'text', key: 'id', title: 'Id', sortable: true},
{type: 'text', key: 'name', title: 'Name', cssClass: 'name-column', sortable: true}
]
);
```
**Expected behavior:**
<!-- Describe the expected behavior. -->
You should be able to define the columns in the HTML following the Document List approach:
```html
<alfresco-document-list ...>
<content-columns>
<content-column key="$thumbnail" type="image"></content-column>
<content-column
title="Name"
key="name"
sortable="true"
class="full-width ellipsis-cell">
</content-column>
```
Which opens the possibility to add custom columns with more data.
**Steps to reproduce the issue:**
<!-- Describe the steps to reproduce the issue. -->
**Component name and version:**
<!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version -->
**Browser and version:**
<!-- [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] -->
**Node version (for build issues):**
<!-- To check the version: node --version -->
**New feature request:**
<!-- Describe the feature, motivation and the concrete use case (only in case of new feature request) -->
|
1.0
|
Task List / Process List define columns in the HTML and not in the JS - <!--
PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER.
REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT:
Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components
-->
**Type of issue:** (check with "[x]")
```
- [ x] New feature request
- [ ] Bug
- [ ] Support request
- [ ] Documentation
```
**Current behavior:**
<!-- Describe the current behavior. -->
For Process and Task Lists you need to define the columns by using JS
```ts
this.dataTasks = new ObjectDataTableAdapter(
[],
[
{type: 'text', key: 'id', title: 'Id', sortable: true},
{type: 'text', key: 'name', title: 'Name', cssClass: 'name-column', sortable: true}
]
);
```
**Expected behavior:**
<!-- Describe the expected behavior. -->
You should be able to define the columns in the HTML following the Document List approach:
```html
<alfresco-document-list ...>
<content-columns>
<content-column key="$thumbnail" type="image"></content-column>
<content-column
title="Name"
key="name"
sortable="true"
class="full-width ellipsis-cell">
</content-column>
```
Which opens the possibility to add custom columns with more data.
**Steps to reproduce the issue:**
<!-- Describe the steps to reproduce the issue. -->
**Component name and version:**
<!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version -->
**Browser and version:**
<!-- [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] -->
**Node version (for build issues):**
<!-- To check the version: node --version -->
**New feature request:**
<!-- Describe the feature, motivation and the concrete use case (only in case of new feature request) -->
|
process
|
task list process list define columns in the html and not in the js please fill out the following information this will help us to resolve your problem faster remember for support requests you can also ask on our gitter chat please ask before on our gitter channel type of issue check with new feature request bug support request documentation current behavior for process and task lists you need to define the columns by using js ts this datatasks new objectdatatableadapter type text key id title id sortable true type text key name title name cssclass name column sortable true expected behavior you should be able to define the columns in the html following the document list approach html content column title name key name sortable true class full width ellipsis cell which opens the possibility to add custom columns with more data steps to reproduce the issue component name and version browser and version node version for build issues new feature request
| 1
|
16,083
| 20,253,843,116
|
IssuesEvent
|
2022-02-14 20:46:15
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Oracle queries don't work when aliases (auto-generated or otherwise) contain double quotes or null characters
|
Type:Bug Priority:P2 Querying/Processor Database/Oracle .Backend
|
Given an MBQL query like this
```clj
(mt/mbql-query venues
{:joins [{:source-table $$categories
:alias "My \"Stuff\""
:condition [:= $id $id]
:fields [[:field %categories.id {:join-alias "My \"Stuff\""}]]}]
:limit 1})
```
We generate SQL like this
```sql
SELECT
*
FROM
(
SELECT
"CAM_2"."test_data_venues"."id" AS "id",
"CAM_2"."test_data_venues"."name" AS "name",
"CAM_2"."test_data_venues"."category_id" AS "category_id",
"CAM_2"."test_data_venues"."latitude" AS "latitude",
"CAM_2"."test_data_venues"."longitude" AS "longitude",
"CAM_2"."test_data_venues"."price" AS "price",
"My ""Stuff"""."id" AS "My ""Stuff""__id"
FROM
"CAM_2"."test_data_venues"
LEFT JOIN "CAM_2"."test_data_categories" "My ""Stuff"""
ON "CAM_2"."test_data_venues"."id" = "CAM_2"."test_data_venues"."id"
)
WHERE
rownum <= 1
```
Which results in the following error:
```
clojure.lang.ExceptionInfo: Error executing query: ORA-03001: unimplemented feature
```
Even though we are correctly escaping the double quotes to prevent SQL injection, double quotes are STILL not allowed in Oracle identifiers (https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements008.htm):
> Quoted identifiers can contain any characters and punctuations marks as well as spaces. However, neither quoted nor nonquoted identifiers can contain double quotation marks or the null character (\0).
|
1.0
|
Oracle queries don't work when aliases (auto-generated or otherwise) contain double quotes or null characters - Given an MBQL query like this
```clj
(mt/mbql-query venues
{:joins [{:source-table $$categories
:alias "My \"Stuff\""
:condition [:= $id $id]
:fields [[:field %categories.id {:join-alias "My \"Stuff\""}]]}]
:limit 1})
```
We generate SQL like this
```sql
SELECT
*
FROM
(
SELECT
"CAM_2"."test_data_venues"."id" AS "id",
"CAM_2"."test_data_venues"."name" AS "name",
"CAM_2"."test_data_venues"."category_id" AS "category_id",
"CAM_2"."test_data_venues"."latitude" AS "latitude",
"CAM_2"."test_data_venues"."longitude" AS "longitude",
"CAM_2"."test_data_venues"."price" AS "price",
"My ""Stuff"""."id" AS "My ""Stuff""__id"
FROM
"CAM_2"."test_data_venues"
LEFT JOIN "CAM_2"."test_data_categories" "My ""Stuff"""
ON "CAM_2"."test_data_venues"."id" = "CAM_2"."test_data_venues"."id"
)
WHERE
rownum <= 1
```
Which results in the following error:
```
clojure.lang.ExceptionInfo: Error executing query: ORA-03001: unimplemented feature
```
Even though we are correctly escaping the double quotes to prevent SQL injection, double quotes are STILL not allowed in Oracle identifiers (https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements008.htm):
> Quoted identifiers can contain any characters and punctuations marks as well as spaces. However, neither quoted nor nonquoted identifiers can contain double quotation marks or the null character (\0).
|
process
|
oracle queries don t work when aliases auto generated or otherwise contain double quotes or null characters given an mbql query like this clj mt mbql query venues joins source table categories alias my stuff condition fields limit we generate sql like this sql select from select cam test data venues id as id cam test data venues name as name cam test data venues category id as category id cam test data venues latitude as latitude cam test data venues longitude as longitude cam test data venues price as price my stuff id as my stuff id from cam test data venues left join cam test data categories my stuff on cam test data venues id cam test data venues id where rownum which results in the following error clojure lang exceptioninfo error executing query ora unimplemented feature even tho we are correctly escaping the double quotes to prevent sql injection double quotes are still not allowed in oracle identifiers quoted identifiers can contain any characters and punctuations marks as well as spaces however neither quoted nor nonquoted identifiers can contain double quotation marks or the null character
| 1
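Given the Oracle rule quoted in the record above (even quoted identifiers may not contain double quotation marks or the NUL character), one defensive option is to sanitize generated aliases before quoting them. A minimal sketch in plain Python — a hypothetical helper for illustration, not Metabase's actual fix:

```python
def safe_oracle_alias(alias: str, max_len: int = 128) -> str:
    """Drop characters Oracle rejects even inside quoted identifiers."""
    # Double quotes and NUL are forbidden outright; truncate as a safety net.
    cleaned = alias.replace('"', '').replace('\x00', '')
    return cleaned[:max_len]

print(safe_oracle_alias('My "Stuff"'))  # My Stuff
```

An alternative design would be to map each user-supplied alias to a generated safe identifier (e.g. `alias_1`) and keep the original only for display, which sidesteps the character rules entirely.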
|
15,978
| 20,188,184,652
|
IssuesEvent
|
2022-02-11 01:16:09
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Establish a detection and response strategy for identity risks
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Health Modeling & Monitoring Application Level Monitoring
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-identity-network#review-identity-risks">Establish a detection and response strategy for identity risks</a>
<p><b>Why Consider This?</b></p>
Ensure that your organization is prepared to respond to an identity theft event. Most security incidents take place after an attacker gains initial access using a stolen identity.
<p><b>Context</b></p>
<p><span>These identities can often start with low privileges, but the attackers then use that identity to traverse laterally and gain access to more privileged identities. This repeats as needed until the attacker controls access to the ultimate target data or systems. Reported risk events for Azure AD can be viewed in Azure AD reporting, or Azure AD Identity Protection. Additionally, the Identity Protection risk events API can be used to programmatically access identity related security detections using Microsoft Graph.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Build strategy to monitor for identity risks and establish processes for responding to identity risk alerts.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/governance#monitor-identity-risk" target="_blank"><span>Monitor identity Risk</span></a><span /></p>
|
1.0
|
Establish a detection and response strategy for identity risks - <a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-identity-network#review-identity-risks">Establish a detection and response strategy for identity risks</a>
<p><b>Why Consider This?</b></p>
Ensure that your organization is prepared to respond to an identity theft event. Most security incidents take place after an attacker gains initial access using a stolen identity.
<p><b>Context</b></p>
<p><span>These identities can often start with low privileges, but the attackers then use that identity to traverse laterally and gain access to more privileged identities. This repeats as needed until the attacker controls access to the ultimate target data or systems. Reported risk events for Azure AD can be viewed in Azure AD reporting, or Azure AD Identity Protection. Additionally, the Identity Protection risk events API can be used to programmatically access identity related security detections using Microsoft Graph.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Build strategy to monitor for identity risks and establish processes for responding to identity risk alerts.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/governance#monitor-identity-risk" target="_blank"><span>Monitor identity Risk</span></a><span /></p>
|
process
|
establish a detection and response strategy for identity risks why consider this ensure that your organization is prepared to respond to an identity theft event most security incidents take place after an attacker gains initial access using a stolen identity context these identities can often start with low privileges but the attackers then use that identity to traverse laterally and gain access to more privileged identities this repeats as needed until the attacker controls access to the ultimate target data or systems reported risk events for azure ad can be viewed in azure ad reporting or azure ad identity protection additionally the identity protection risk events api can be used to programmatically access identity related security detections using microsoft graph suggested actions build strategy to monitor for identity risks and establish processes for responding to identity risk alerts learn more monitor identity risk
| 1
|
21,243
| 6,132,518,478
|
IssuesEvent
|
2017-06-25 03:14:09
|
ganeti/ganeti
|
https://api.github.com/repos/ganeti/ganeti
|
closed
|
Merge cli.py and client.py
|
imported_from_google_code Status:WontFix Type-Refactoring
|
Originally reported of Google Code with ID 458.
```
The two perform the same function.
We should move cli constants/functions to client/__init__.py, or common.py
Thanks,
Guido
```
Originally added on 2013-05-10 09:05:34 +0000 UTC.
|
1.0
|
Merge cli.py and client.py - Originally reported of Google Code with ID 458.
```
The two perform the same function.
We should move cli constants/functions to client/__init__.py, or common.py
Thanks,
Guido
```
Originally added on 2013-05-10 09:05:34 +0000 UTC.
|
non_process
|
merge cli py and client py originally reported of google code with id the two perform the same function we should move cli constants functions to client init py or common py thanks guido originally added on utc
| 0
|
2,581
| 5,343,601,390
|
IssuesEvent
|
2017-02-17 11:53:55
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [fr] Video title MÉLENCHON - Réunion publique à Strasbourg - #JLMStrasbourg
|
Language: French Process: [1] Writing in progress
|
# Video titre
MÉLENCHON - Réunion publique à Strasbourg - #JLMStrasbourg
# Date de mise en ligne :
15/02/2017
# URL
https://youtu.be/_9Grnn1f24k
# Youtube sous-titre langue
France
# Durée
2:07:00
# Sous-tire URL
https://www.youtube.com/timedtext_editor?bl=watch&lang=fr&action_mde_edit_form=1&ref=wt&v=_9Grnn1f24k&ui=hd&tab=captions
|
1.0
|
[subtitles] [fr] Video title MÉLENCHON - Réunion publique à Strasbourg - #JLMStrasbourg - # Video titre
MÉLENCHON - Réunion publique à Strasbourg - #JLMStrasbourg
# Date de mise en ligne :
15/02/2017
# URL
https://youtu.be/_9Grnn1f24k
# Youtube sous-titre langue
France
# Durée
2:07:00
# Sous-tire URL
https://www.youtube.com/timedtext_editor?bl=watch&lang=fr&action_mde_edit_form=1&ref=wt&v=_9Grnn1f24k&ui=hd&tab=captions
|
process
|
video title mélenchon réunion publique à strasbourg jlmstrasbourg video titre mélenchon réunion publique à strasbourg jlmstrasbourg date de mise en ligne url youtube sous titre langue france durée sous tire url
| 1
|
284,737
| 30,913,679,307
|
IssuesEvent
|
2023-08-05 02:35:26
|
Nivaskumark/kernel_v4.19.72_old
|
https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old
|
reopened
|
CVE-2019-12380 (Medium) detected in linux-yoctov5.4.51, linux-yoctov5.4.51
|
Mend: dependency security vulnerability
|
## CVE-2019-12380 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
**DISPUTED** An issue was discovered in the efi subsystem in the Linux kernel through 5.1.5. phys_efi_set_virtual_address_map in arch/x86/platform/efi/efi.c and efi_call_phys_prolog in arch/x86/platform/efi/efi_64.c mishandle memory allocation failures. NOTE: This id is disputed as not being an issue because “All the code touched by the referenced commit runs only at boot, before any user processes are started. Therefore, there is no possibility for an unprivileged user to control it.”.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-12380>CVE-2019-12380</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-12380">https://www.linuxkernelcves.com/cves/CVE-2019-12380</a></p>
<p>Release Date: 2020-08-03</p>
<p>Fix Resolution: v5.2-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-12380 (Medium) detected in linux-yoctov5.4.51, linux-yoctov5.4.51 - ## CVE-2019-12380 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
**DISPUTED** An issue was discovered in the efi subsystem in the Linux kernel through 5.1.5. phys_efi_set_virtual_address_map in arch/x86/platform/efi/efi.c and efi_call_phys_prolog in arch/x86/platform/efi/efi_64.c mishandle memory allocation failures. NOTE: This id is disputed as not being an issue because “All the code touched by the referenced commit runs only at boot, before any user processes are started. Therefore, there is no possibility for an unprivileged user to control it.”.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-12380>CVE-2019-12380</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-12380">https://www.linuxkernelcves.com/cves/CVE-2019-12380</a></p>
<p>Release Date: 2020-08-03</p>
<p>Fix Resolution: v5.2-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux linux cve medium severity vulnerability vulnerable libraries linux linux vulnerability details disputed an issue was discovered in the efi subsystem in the linux kernel through phys efi set virtual address map in arch platform efi efi c and efi call phys prolog in arch platform efi efi c mishandle memory allocation failures note this id is disputed as not being an issue because “all the code touched by the referenced commit runs only at boot before any user processes are started therefore there is no possibility for an unprivileged user to control it ” publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
19,894
| 26,340,408,594
|
IssuesEvent
|
2023-01-10 17:13:48
|
temporalio/sdk-typescript
|
https://api.github.com/repos/temporalio/sdk-typescript
|
closed
|
Automated release to `next` tag on merge to `main`
|
CICD processes
|
### Is your feature request related to a problem? Please describe.
Updates to the SDK are hard to test without a release.
We'd like to be able to sample some of the new features and fixes for improving the samples and docs.
### Describe the solution you'd like
Automatically publish to npm with the `next` tag (`npm install temporalio@next`) on merge to `main`.
### Additional context
This is a common practice in the node ecosystem.
|
1.0
|
Automated release to `next` tag on merge to `main` - ### Is your feature request related to a problem? Please describe.
Updates to the SDK are hard to test without a release.
We'd like to be able to sample some of the new features and fixes for improving the samples and docs.
### Describe the solution you'd like
Automatically publish to npm with the `next` tag (`npm install temporalio@next`) on merge to `main`.
### Additional context
This is a common practice in the node ecosystem.
|
process
|
automated release to next tag on merge to main is your feature request related to a problem please describe updates to the sdk are hard to test without a release we d like to be able to sample some of the new features and fixes for improving the samples and docs describe the solution you d like automatically publish to npm with the next tag npm install temporalio next on merge to main additional context this is a common practice in the node ecosystem
| 1
|
4,687
| 2,610,140,769
|
IssuesEvent
|
2015-02-26 18:44:22
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Playing with 48 hedgehogs and Per Hedgehog Ammo is not possible
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Select game mode with Per Hedgehog Ammo.
2. Add 6 team with 8 player each.
3. Run fight.
What is the expected output? What do you see instead?
The fight don't run, I get a error window:
"Last two engine messages:
Establishing IPC connection... ok
Ammo stores overflow"
What version of the product are you using? On what operating system?
0.9.14.1 On Windows XP SP2
Please provide any additional information below.
```
-----
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 21 Nov 2010 at 3:11
|
1.0
|
Playing with 48 hedgehogs and Per Hedgehog Ammo is not possible - ```
What steps will reproduce the problem?
1. Select game mode with Per Hedgehog Ammo.
2. Add 6 team with 8 player each.
3. Run fight.
What is the expected output? What do you see instead?
The fight don't run, I get a error window:
"Last two engine messages:
Establishing IPC connection... ok
Ammo stores overflow"
What version of the product are you using? On what operating system?
0.9.14.1 On Windows XP SP2
Please provide any additional information below.
```
-----
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 21 Nov 2010 at 3:11
|
non_process
|
playing with hedgehogs and per hedgehog ammo is not possible what steps will reproduce the problem select game mode with per hedgehog ammo add team with player each run fight what is the expected output what do you see instead the fight don t run i get a error window last two engine messages establishing ipc connection ok ammo stores overflow what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by adibiaz gmail com on nov at
| 0
|
11,383
| 14,222,923,448
|
IssuesEvent
|
2020-11-17 17:30:13
|
unicode-org/icu4x
|
https://api.github.com/repos/unicode-org/icu4x
|
closed
|
Follow-up: Codecov vs. Coveralls
|
C-process T-task
|
We gathered feedback on the two tools. I suggested that we enable both tools for a little while and see which tool we like better.
|
1.0
|
Follow-up: Codecov vs. Coveralls - We gathered feedback on the two tools. I suggested that we enable both tools for a little while and see which tool we like better.
|
process
|
follow up codecov vs coveralls we gathered feedback on the two tools i suggested that we enable both tools for a little while and see which tool we like better
| 1
|
125,274
| 10,339,652,266
|
IssuesEvent
|
2019-09-03 19:52:56
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: X-Pack Jest Tests.x-pack/plugins/rollup/__jest__/client_integration - Create Rollup Job, step 5: Metrics save() should call the "create" Api server endpoint
|
Feature:Rollups Team:Elasticsearch UI failed-test
|
A test failed on a tracked branch
```
TypeError: Cannot destructure property `id` of 'undefined' or 'null'.
at config (/var/lib/jenkins/workspace/elastic+kibana+7.x/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/rollup/public/crud_app/services/jobs.js:81:5)
at /var/lib/jenkins/workspace/elastic+kibana+7.x/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/rollup/public/crud_app/store/actions/create_job.js:83:21
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/JOB=x-pack-intake,node=immutable/1891/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/rollup/__jest__/client_integration","test.name":"Create Rollup Job, step 5: Metrics save() should call the \"create\" Api server endpoint","test.failCount":2}} -->
|
1.0
|
Failing test: X-Pack Jest Tests.x-pack/plugins/rollup/__jest__/client_integration - Create Rollup Job, step 5: Metrics save() should call the "create" Api server endpoint - A test failed on a tracked branch
```
TypeError: Cannot destructure property `id` of 'undefined' or 'null'.
at config (/var/lib/jenkins/workspace/elastic+kibana+7.x/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/rollup/public/crud_app/services/jobs.js:81:5)
at /var/lib/jenkins/workspace/elastic+kibana+7.x/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/rollup/public/crud_app/store/actions/create_job.js:83:21
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/JOB=x-pack-intake,node=immutable/1891/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/rollup/__jest__/client_integration","test.name":"Create Rollup Job, step 5: Metrics save() should call the \"create\" Api server endpoint","test.failCount":2}} -->
|
non_process
|
failing test x pack jest tests x pack plugins rollup jest client integration create rollup job step metrics save should call the create api server endpoint a test failed on a tracked branch typeerror cannot destructure property id of undefined or null at config var lib jenkins workspace elastic kibana x job x pack intake node immutable kibana x pack plugins rollup public crud app services jobs js at var lib jenkins workspace elastic kibana x job x pack intake node immutable kibana x pack plugins rollup public crud app store actions create job js first failure
| 0
|
9,778
| 4,641,460,267
|
IssuesEvent
|
2016-09-30 04:59:10
|
debugworkbench/hydragon
|
https://api.github.com/repos/debugworkbench/hydragon
|
closed
|
Consider replacing DefinitelyTyped typings
|
build Status: Pending Type: Cleanup
|
Seems like https://github.com/typings/typings claims to work with proper external module based typings instead of just ambient external module typings. I'm not entirely sure how it manages to work with TypeScript's node-like module resolution, but that should be easy enough to test with the typings at https://github.com/typings/typed-source-map
If it works as claimed it would be nice to convert the Electron typings over to the proper external module d.ts format.
|
1.0
|
Consider replacing DefinitelyTyped typings - Seems like https://github.com/typings/typings claims to work with proper external module based typings instead of just ambient external module typings. I'm not entirely sure how it manages to work with TypeScript's node-like module resolution, but that should be easy enough to test with the typings at https://github.com/typings/typed-source-map
If it works as claimed it would be nice to convert the Electron typings over to the proper external module d.ts format.
|
non_process
|
consider replacing definitelytyped typings seems like claims to work with proper external module based typings instead of just ambient external module typings i m not entirely sure how it manages to work with typescript s node like module resolution but that should be easy enough to test with the typings at if it works as claimed it would be nice to convert the electron typings over to the proper external module d ts format
| 0
|
247,487
| 26,711,658,529
|
IssuesEvent
|
2023-01-28 01:19:50
|
panasalap/linux-4.1.15
|
https://api.github.com/repos/panasalap/linux-4.1.15
|
reopened
|
CVE-2015-6937 (Medium) detected in linuxlinux-4.1.17
|
security vulnerability
|
## CVE-2015-6937 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The __rds_conn_create function in net/rds/connection.c in the Linux kernel through 4.2.3 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by using a socket that was not properly bound.
<p>Publish Date: 2015-10-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-6937>CVE-2015-6937</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-6937">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-6937</a></p>
<p>Release Date: 2015-10-19</p>
<p>Fix Resolution: v4.3-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-6937 (Medium) detected in linuxlinux-4.1.17 - ## CVE-2015-6937 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The __rds_conn_create function in net/rds/connection.c in the Linux kernel through 4.2.3 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by using a socket that was not properly bound.
<p>Publish Date: 2015-10-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-6937>CVE-2015-6937</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-6937">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-6937</a></p>
<p>Release Date: 2015-10-19</p>
<p>Fix Resolution: v4.3-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files net rds connection c vulnerability details the rds conn create function in net rds connection c in the linux kernel through allows local users to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact by using a socket that was not properly bound publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
142,270
| 13,019,020,716
|
IssuesEvent
|
2020-07-26 20:16:44
|
aaugustin/websockets
|
https://api.github.com/repos/aaugustin/websockets
|
closed
|
Object oriented Websocket Client and Server example
|
documentation help wanted low priority
|
I'm trying to implement object oriented Websocket client and server. Since `websockets` is completely coroutine based, I'm finding it difficult to go ahead with implementation. Is it possible to wrap it around classes? Are there any examples of using it in this way?
|
1.0
|
Object oriented Websocket Client and Server example - I'm trying to implement object oriented Websocket client and server. Since `websockets` is completely coroutine based, I'm finding it difficult to go ahead with implementation. Is it possible to wrap it around classes? Are there any examples of using it in this way?
|
non_process
|
object oriented websocket client and server example i m trying to implement object oriented websocket client and server since websockets is completely coroutine based i m finding it difficult to go ahead with implementation is it possible to wrap it around classes are there any examples of using it in this way
| 0
|
309
| 2,750,570,476
|
IssuesEvent
|
2015-04-24 00:02:39
|
hammerlab/pileup.js
|
https://api.github.com/repos/hammerlab/pileup.js
|
closed
|
Don't run flow on NPM packages
|
process
|
Running `flow` takes ~14 seconds. Most of that time is spent churning through the ~20,000 files in my `node_modules` directory. I can make `flow` run significantly more quickly by excluding individual subdirs in `node_modules`, but I can't exclude `node_modules` entirely because I `require` stuff from there.
Running `flow src` is also fast, but it produces errors that I don't get otherwise.
|
1.0
|
Don't run flow on NPM packages - Running `flow` takes ~14 seconds. Most of that time is spent churning through the ~20,000 files in my `node_modules` directory. I can make `flow` run significantly more quickly by excluding individual subdirs in `node_modules`, but I can't exclude `node_modules` entirely because I `require` stuff from there.
Running `flow src` is also fast, but it produces errors that I don't get otherwise.
|
process
|
don t run flow on npm packages running flow takes seconds most of that time is spent churning through the files in my node modules directory i can make flow run significantly more quickly by excluding individual subdirs in node modules but i can t exclude node modules entirely because i require stuff from there running flow src is also fast but it produces errors that i don t get otherwise
| 1
|
48,011
| 2,990,117,000
|
IssuesEvent
|
2015-07-21 07:02:24
|
jayway/rest-assured
|
https://api.github.com/repos/jayway/rest-assured
|
closed
|
RestAssuredResponseImpl.detailedCookie(String) hasn't been implemented yet.
|
bug imported Priority-Medium
|
_From [vkra...@gmail.com](https://code.google.com/u/112718581933173099221/) on April 26, 2013 16:27:34_
What steps will reproduce the problem? assertNotNull(given().auth().none()
.params(FORM_AUTH_CONFIG.getUserInputTagName(), login,
FORM_AUTH_CONFIG.getPasswordInputTagName(), password)
.expect().statusCode(HttpStatus.SC_MOVED_TEMPORARILY)
.post(FORM_AUTH_CONFIG.getFormAction())
.sessionId());
assertNotNull(given().auth().none()
.params(FORM_AUTH_CONFIG.getUserInputTagName(), login,
FORM_AUTH_CONFIG.getPasswordInputTagName(), password)
.expect().statusCode(HttpStatus.SC_MOVED_TEMPORARILY)
.post(FORM_AUTH_CONFIG.getFormAction())
.detailedCookie(RestAssured.sessionId));
What is the expected output?
NotNull in both asserts.
What do you see instead?
Second assert fails.
What version of the product are you using?
1.8.0
On what operating system?
Windows Please provide any additional information below. com.jayway.restassured.internal.RestAssuredResponseImpl
method detailedCookie() haven't had implementation yet.
def Cookie detailedCookie(String name) {
return null
}
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=230_
|
1.0
|
RestAssuredResponseImpl.detailedCookie(String) hasn't been implemented yet. - _From [vkra...@gmail.com](https://code.google.com/u/112718581933173099221/) on April 26, 2013 16:27:34_
What steps will reproduce the problem? assertNotNull(given().auth().none()
.params(FORM_AUTH_CONFIG.getUserInputTagName(), login,
FORM_AUTH_CONFIG.getPasswordInputTagName(), password)
.expect().statusCode(HttpStatus.SC_MOVED_TEMPORARILY)
.post(FORM_AUTH_CONFIG.getFormAction())
.sessionId());
assertNotNull(given().auth().none()
.params(FORM_AUTH_CONFIG.getUserInputTagName(), login,
FORM_AUTH_CONFIG.getPasswordInputTagName(), password)
.expect().statusCode(HttpStatus.SC_MOVED_TEMPORARILY)
.post(FORM_AUTH_CONFIG.getFormAction())
.detailedCookie(RestAssured.sessionId));
What is the expected output?
NotNull in both asserts.
What do you see instead?
Second assert fails.
What version of the product are you using?
1.8.0
On what operating system?
Windows
Please provide any additional information below.
com.jayway.restassured.internal.RestAssuredResponseImpl
method detailedCookie() hasn't been implemented yet.
def Cookie detailedCookie(String name) {
return null
}
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=230_
|
non_process
|
restassuredresponseimpl detailedcookie string hasn t been implemented yet from on april what steps will reproduce the problem assertnotnull given auth none params form auth config getuserinputtagname login form auth config getpasswordinputtagname password expect statuscode httpstatus sc moved temporarily post form auth config getformaction sessionid assertnotnull given auth none params form auth config getuserinputtagname login form auth config getpasswordinputtagname password expect statuscode httpstatus sc moved temporarily post form auth config getformaction detailedcookie restassured sessionid what is the expected output notnull in both asserts what do you see instead second assert fails what version of the product are you using on what operating system windows please provide any additional information below com jayway restassured internal restassuredresponseimpl method detailedcookie haven t had implementation yet def cookie detailedcookie string name return null original issue
| 0
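The null-returning `detailedCookie(String)` stub quoted in this record can plausibly be fixed by looking the cookie up by name in the response's cookie list. A minimal illustrative sketch in Python, with a hypothetical `Cookie` type standing in for Rest Assured's detailed cookie class (names here are assumptions, not the library's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cookie:
    # Hypothetical stand-in for Rest Assured's detailed Cookie type.
    name: str
    value: str

def detailed_cookie(cookies: list[Cookie], name: str) -> Optional[Cookie]:
    """Return the first cookie matching `name`, or None if absent --
    the behavior the issue expects instead of an unconditional null."""
    return next((c for c in cookies if c.name == name), None)

session = detailed_cookie([Cookie("JSESSIONID", "abc123")], "JSESSIONID")
print(session.value)  # abc123
```

With this lookup in place, the second assert in the reproduction (fetching the session cookie by `RestAssured.sessionId`) would return the cookie rather than null.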
|
14,680
| 25,461,255,272
|
IssuesEvent
|
2022-11-24 19:25:32
|
ssibrahimbas/deno-sheet-kpi
|
https://api.github.com/repos/ssibrahimbas/deno-sheet-kpi
|
closed
|
feature: support this metric: `Avg. Revenue by Brand`
|
requirement
|
The metric named `Avg. Revenue by Brand` should be developed.
Example Request
```http
GET /metrics?id=revenue&dimensions=brand&aggregate=avg
```
Example Response
```json
{
"metric": "revenue",
"dimensions": ["brand"],
"aggregation": "avg",
"data": {
"Nike": [
{
"value": "1.23"
}
],
"Samsung": [
{
"value": "2.00"
}
],
"Apple": [
{
"value": "3.00"
}
]
}
}
```
|
1.0
|
feature: support this metric: `Avg. Revenue by Brand` - The metric named `Avg. Revenue by Brand` should be developed.
Example Request
```http
GET /metrics?id=revenue&dimensions=brand&aggregate=avg
```
Example Response
```json
{
"metric": "revenue",
"dimensions": ["brand"],
"aggregation": "avg",
"data": {
"Nike": [
{
"value": "1.23"
}
],
"Samsung": [
{
"value": "2.00"
}
],
"Apple": [
{
"value": "3.00"
}
]
}
}
```
|
non_process
|
feature support this metric avg revenue by brand the metric named avg revenue by brand should be developed example request http get metrics id revenue dimensions brand aggregate avg example response json metric revenue dimensions aggregation avg data nike value samsung value apple value
| 0
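The `avg` aggregation over a `brand` dimension described in this record can be sketched as a simple group-and-average over rows. The field names below (`brand`, `revenue`) are assumptions mirroring the example response, not the project's actual schema:

```python
from collections import defaultdict

def avg_revenue_by_brand(rows: list[dict]) -> dict:
    """Group rows by brand and average their revenue, shaped like the
    example response's `data` field (values formatted to two decimals)."""
    totals: dict = defaultdict(lambda: [0.0, 0])
    for row in rows:
        totals[row["brand"]][0] += row["revenue"]
        totals[row["brand"]][1] += 1
    return {
        brand: [{"value": f"{total / count:.2f}"}]
        for brand, (total, count) in totals.items()
    }

rows = [
    {"brand": "Nike", "revenue": 1.0},
    {"brand": "Nike", "revenue": 1.46},
    {"brand": "Samsung", "revenue": 2.0},
]
print(avg_revenue_by_brand(rows))  # Nike averages to "1.23", Samsung to "2.00"
```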
|
8,416
| 11,582,422,200
|
IssuesEvent
|
2020-02-22 03:19:57
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Dashboard Error 500
|
Database/MySQL Priority:P2 Querying/ Querying/Processor Type:Bug
|
**Describe the bug**
If you create a Dashboard and add any question against your first table (first = the first one from `metabase_table`, not necessarily with ID 1), the Dashboard crashes with Error 500. If you just use questions that fetch their data from any other table, the Dashboard works as expected.
**Logs**
`java[7364]: 09-10 13:38:18 ERROR middleware.log :: GET /api/dashboard/8 500 17.6 ms (11 DB calls)
java[7364]: [189B blob data]
java[7364]: :type java.sql.SQLSyntaxErrorException,
java[7364]: :stacktrace
java[7364]: ("org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:236)"
java[7364]: "org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:165)"
java[7364]: "org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:238)"
java[7364]: "org.mariadb.jdbc.MariaDbPreparedStatementClient.executeInternal(MariaDbPreparedStatementClient.java:232)"
java[7364]: "org.mariadb.jdbc.MariaDbPreparedStatementClient.execute(MariaDbPreparedStatementClient.java:159)"
java[7364]: "org.mariadb.jdbc.MariaDbPreparedStatementClient.executeQuery(MariaDbPreparedStatementClient.java:174)"
java[7364]: "com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:431)"
java[7364]: "clojure.java.jdbc$execute_query_with_params.invokeStatic(jdbc.clj:1072)"
java[7364]: "clojure.java.jdbc$execute_query_with_params.invoke(jdbc.clj:1066)"
java[7364]: "clojure.java.jdbc$db_query_with_resultset_STAR_.invokeStatic(jdbc.clj:1095)"
java[7364]: "clojure.java.jdbc$db_query_with_resultset_STAR_.invoke(jdbc.clj:1075)"
java[7364]: "clojure.java.jdbc$query.invokeStatic(jdbc.clj:1164)"
java[7364]: "clojure.java.jdbc$query.invoke(jdbc.clj:1126)"
java[7364]: "toucan.db$query.invokeStatic(db.clj:285)"
java[7364]: "toucan.db$query.doInvoke(db.clj:281)"
java[7364]: "clojure.lang.RestFn.invoke(RestFn.java:410)"
java[7364]: "toucan.db$simple_select.invokeStatic(db.clj:391)"
java[7364]: "toucan.db$simple_select.invoke(db.clj:380)"
java[7364]: "toucan.db$select.invokeStatic(db.clj:659)"
java[7364]: "toucan.db$select.doInvoke(db.clj:653)"
java[7364]: "clojure.lang.RestFn.applyTo(RestFn.java:139)"
java[7364]: "clojure.core$apply.invokeStatic(core.clj:667)"
java[7364]: "clojure.core$apply.invoke(core.clj:660)"
java[7364]: "toucan.db$select_field__GT_field.invokeStatic(db.clj:701)"
java[7364]: "toucan.db$select_field__GT_field.doInvoke(db.clj:694)"
java[7364]: "clojure.lang.RestFn.invoke(RestFn.java:494)"
java[7364]: "--> api.dashboard$hashes__GT_hash_vec__GT_avg_time.invokeStatic(dashboard.clj:162)"
java[7364]: "api.dashboard$hashes__GT_hash_vec__GT_avg_time.invoke(dashboard.clj:156)"
java[7364]: "api.dashboard$add_query_average_duration_to_dashcards.invokeStatic(dashboard.clj:176)"
java[7364]: "api.dashboard$add_query_average_duration_to_dashcards.invoke(dashboard.clj:173)"
java[7364]: "api.dashboard$add_query_average_durations.invokeStatic(dashboard.clj:188)"
java[7364]: "api.dashboard$add_query_average_durations.invoke(dashboard.clj:185)"
java[7364]: "api.dashboard$get_dashboard.invokeStatic(dashboard.clj:194)"
java[7364]: "api.dashboard$get_dashboard.invoke(dashboard.clj:191)"
java[7364]: "api.dashboard$fn__48680.invokeStatic(dashboard.clj:236)"
java[7364]: "api.dashboard$fn__48680.invoke(dashboard.clj:233)"
java[7364]: "middleware.auth$enforce_authentication$fn__64059.invoke(auth.clj:14)"
java[7364]: "routes$fn__65209$fn__65210.doInvoke(routes.clj:56)"
java[7364]: "middleware.exceptions$catch_uncaught_exceptions$fn__64158.invoke(exceptions.clj:104)"
java[7364]: "middleware.exceptions$catch_api_exceptions$fn__64155.invoke(exceptions.clj:92)"
java[7364]: "middleware.log$log_api_call$fn__65583$fn__65584.invoke(log.clj:170)"
java[7364]: "middleware.log$log_api_call$fn__65583.invoke(log.clj:164)"
java[7364]: "middleware.security$add_security_headers$fn__64121.invoke(security.clj:128)"
java[7364]: "middleware.json$wrap_json_body$fn__65288.invoke(json.clj:61)"
java[7364]: "middleware.json$wrap_streamed_json_response$fn__65306.invoke(json.clj:97)"
java[7364]: "middleware.session$bind_current_user$fn__62021$fn__62022.invoke(session.clj:193)"
java[7364]: "middleware.session$do_with_current_user.invokeStatic(session.clj:176)"
java[7364]: "middleware.session$do_with_current_user.invoke(session.clj:170)"
java[7364]: "middleware.session$bind_current_user$fn__62021.invoke(session.clj:192)"
java[7364]: "middleware.session$wrap_current_user_id$fn__62010.invoke(session.clj:161)"
java[7364]: "middleware.session$wrap_session_id$fn__61995.invoke(session.clj:123)"
java[7364]: "middleware.auth$wrap_api_key$fn__64067.invoke(auth.clj:27)"
java[7364]: "middleware.misc$maybe_set_site_url$fn__65610.invoke(misc.clj:56)"
java[7364]: "middleware.misc$bind_user_locale$fn__65613.invoke(misc.clj:72)"
java[7364]: "middleware.misc$add_content_type$fn__65598.invoke(misc.clj:28)"
java[7364]: "middleware.misc$disable_streaming_buffering$fn__65621.invoke(misc.clj:87)"),
java[7364]: :sql-exception-chain
java[7364]: ["SQLSyntaxErrorException:"
java[7364]: [187B blob data]
java[7364]: "SQLState: 42000"
java[7364]: "Error Code: 1064"]}
`
**To Reproduce**
Steps to reproduce the behavior:
1. Create some questions that fetch their data from any but your first table.
2. Create a Dashboard and attach those questions to it.
3. Create a question that gets its data from your first table.
4. Add this question to the Dashboard from 2.
5. You'll get `<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>500 Server Error</title> </head><body> <h1>Server Error</h1> <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p> <p>Please contact the server administrator at ... to inform them of the time this error occurred, and the actions you performed just before this error.</p> <p>More information about this error may be available in the server error log.</p> </body></html> `
You could test this directly in the metabase database in `report_dashboardcard.card_id` (for example to heal your Dashboard).
**Expected behavior**
The Dashboard should work as expected, regardless of where the questions fetch their data from.
**Information about your Metabase Installation:**
- Your operating system: CentOS 3.10.0-957.27.2.el7.x86_64
- Your databases: MariaDB ColumnStore
- Metabase version: 0.33.2
- Metabase hosting environment: jar-File, systemd
- Metabase internal database: MariaDB
**Severity**
Blocker for all Users, annoying for the Admin.
|
1.0
|
Dashboard Error 500 - **Describe the bug**
If you create a Dashboard and add any question against your first table (first = the first one from `metabase_table`, not necessarily with ID 1), the Dashboard crashes with Error 500. If you just use questions that fetch their data from any other table, the Dashboard works as expected.
**Logs**
`java[7364]: 09-10 13:38:18 ERROR middleware.log :: GET /api/dashboard/8 500 17.6 ms (11 DB calls)
java[7364]: [189B blob data]
java[7364]: :type java.sql.SQLSyntaxErrorException,
java[7364]: :stacktrace
java[7364]: ("org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:236)"
java[7364]: "org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:165)"
java[7364]: "org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:238)"
java[7364]: "org.mariadb.jdbc.MariaDbPreparedStatementClient.executeInternal(MariaDbPreparedStatementClient.java:232)"
java[7364]: "org.mariadb.jdbc.MariaDbPreparedStatementClient.execute(MariaDbPreparedStatementClient.java:159)"
java[7364]: "org.mariadb.jdbc.MariaDbPreparedStatementClient.executeQuery(MariaDbPreparedStatementClient.java:174)"
java[7364]: "com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:431)"
java[7364]: "clojure.java.jdbc$execute_query_with_params.invokeStatic(jdbc.clj:1072)"
java[7364]: "clojure.java.jdbc$execute_query_with_params.invoke(jdbc.clj:1066)"
java[7364]: "clojure.java.jdbc$db_query_with_resultset_STAR_.invokeStatic(jdbc.clj:1095)"
java[7364]: "clojure.java.jdbc$db_query_with_resultset_STAR_.invoke(jdbc.clj:1075)"
java[7364]: "clojure.java.jdbc$query.invokeStatic(jdbc.clj:1164)"
java[7364]: "clojure.java.jdbc$query.invoke(jdbc.clj:1126)"
java[7364]: "toucan.db$query.invokeStatic(db.clj:285)"
java[7364]: "toucan.db$query.doInvoke(db.clj:281)"
java[7364]: "clojure.lang.RestFn.invoke(RestFn.java:410)"
java[7364]: "toucan.db$simple_select.invokeStatic(db.clj:391)"
java[7364]: "toucan.db$simple_select.invoke(db.clj:380)"
java[7364]: "toucan.db$select.invokeStatic(db.clj:659)"
java[7364]: "toucan.db$select.doInvoke(db.clj:653)"
java[7364]: "clojure.lang.RestFn.applyTo(RestFn.java:139)"
java[7364]: "clojure.core$apply.invokeStatic(core.clj:667)"
java[7364]: "clojure.core$apply.invoke(core.clj:660)"
java[7364]: "toucan.db$select_field__GT_field.invokeStatic(db.clj:701)"
java[7364]: "toucan.db$select_field__GT_field.doInvoke(db.clj:694)"
java[7364]: "clojure.lang.RestFn.invoke(RestFn.java:494)"
java[7364]: "--> api.dashboard$hashes__GT_hash_vec__GT_avg_time.invokeStatic(dashboard.clj:162)"
java[7364]: "api.dashboard$hashes__GT_hash_vec__GT_avg_time.invoke(dashboard.clj:156)"
java[7364]: "api.dashboard$add_query_average_duration_to_dashcards.invokeStatic(dashboard.clj:176)"
java[7364]: "api.dashboard$add_query_average_duration_to_dashcards.invoke(dashboard.clj:173)"
java[7364]: "api.dashboard$add_query_average_durations.invokeStatic(dashboard.clj:188)"
java[7364]: "api.dashboard$add_query_average_durations.invoke(dashboard.clj:185)"
java[7364]: "api.dashboard$get_dashboard.invokeStatic(dashboard.clj:194)"
java[7364]: "api.dashboard$get_dashboard.invoke(dashboard.clj:191)"
java[7364]: "api.dashboard$fn__48680.invokeStatic(dashboard.clj:236)"
java[7364]: "api.dashboard$fn__48680.invoke(dashboard.clj:233)"
java[7364]: "middleware.auth$enforce_authentication$fn__64059.invoke(auth.clj:14)"
java[7364]: "routes$fn__65209$fn__65210.doInvoke(routes.clj:56)"
java[7364]: "middleware.exceptions$catch_uncaught_exceptions$fn__64158.invoke(exceptions.clj:104)"
java[7364]: "middleware.exceptions$catch_api_exceptions$fn__64155.invoke(exceptions.clj:92)"
java[7364]: "middleware.log$log_api_call$fn__65583$fn__65584.invoke(log.clj:170)"
java[7364]: "middleware.log$log_api_call$fn__65583.invoke(log.clj:164)"
java[7364]: "middleware.security$add_security_headers$fn__64121.invoke(security.clj:128)"
java[7364]: "middleware.json$wrap_json_body$fn__65288.invoke(json.clj:61)"
java[7364]: "middleware.json$wrap_streamed_json_response$fn__65306.invoke(json.clj:97)"
java[7364]: "middleware.session$bind_current_user$fn__62021$fn__62022.invoke(session.clj:193)"
java[7364]: "middleware.session$do_with_current_user.invokeStatic(session.clj:176)"
java[7364]: "middleware.session$do_with_current_user.invoke(session.clj:170)"
java[7364]: "middleware.session$bind_current_user$fn__62021.invoke(session.clj:192)"
java[7364]: "middleware.session$wrap_current_user_id$fn__62010.invoke(session.clj:161)"
java[7364]: "middleware.session$wrap_session_id$fn__61995.invoke(session.clj:123)"
java[7364]: "middleware.auth$wrap_api_key$fn__64067.invoke(auth.clj:27)"
java[7364]: "middleware.misc$maybe_set_site_url$fn__65610.invoke(misc.clj:56)"
java[7364]: "middleware.misc$bind_user_locale$fn__65613.invoke(misc.clj:72)"
java[7364]: "middleware.misc$add_content_type$fn__65598.invoke(misc.clj:28)"
java[7364]: "middleware.misc$disable_streaming_buffering$fn__65621.invoke(misc.clj:87)"),
java[7364]: :sql-exception-chain
java[7364]: ["SQLSyntaxErrorException:"
java[7364]: [187B blob data]
java[7364]: "SQLState: 42000"
java[7364]: "Error Code: 1064"]}
`
**To Reproduce**
Steps to reproduce the behavior:
1. Create some questions that fetch their data from any but your first table.
2. Create a Dashboard and attach those questions to it.
3. Create a question that gets its data from your first table.
4. Add this question to the Dashboard from 2.
5. You'll get `<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>500 Server Error</title> </head><body> <h1>Server Error</h1> <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p> <p>Please contact the server administrator at ... to inform them of the time this error occurred, and the actions you performed just before this error.</p> <p>More information about this error may be available in the server error log.</p> </body></html> `
You could test this directly in the metabase database in `report_dashboardcard.card_id` (for example to heal your Dashboard).
**Expected behavior**
The Dashboard should work as expected, regardless of where the questions fetch their data from.
**Information about your Metabase Installation:**
- Your operating system: CentOS 3.10.0-957.27.2.el7.x86_64
- Your databases: MariaDB ColumnStore
- Metabase version: 0.33.2
- Metabase hosting environment: jar-File, systemd
- Metabase internal database: MariaDB
**Severity**
Blocker for all Users, annoying for the Admin.
|
process
|
dashboard error describe the bug if you create a dashboard and add any question against your first table first the first one from metabase table not necessarily with id the dashboard crashes with error if you just use questions that fetch their data from any other table the dashboard works as expected logs java error middleware log get api dashboard ms db calls java java type java sql sqlsyntaxerrorexception java stacktrace java org mariadb jdbc internal util exceptions exceptionmapper get exceptionmapper java java org mariadb jdbc internal util exceptions exceptionmapper getexception exceptionmapper java java org mariadb jdbc mariadbstatement executeexceptionepilogue mariadbstatement java java org mariadb jdbc mariadbpreparedstatementclient executeinternal mariadbpreparedstatementclient java java org mariadb jdbc mariadbpreparedstatementclient execute mariadbpreparedstatementclient java java org mariadb jdbc mariadbpreparedstatementclient executequery mariadbpreparedstatementclient java java com mchange impl newproxypreparedstatement executequery newproxypreparedstatement java java clojure java jdbc execute query with params invokestatic jdbc clj java clojure java jdbc execute query with params invoke jdbc clj java clojure java jdbc db query with resultset star invokestatic jdbc clj java clojure java jdbc db query with resultset star invoke jdbc clj java clojure java jdbc query invokestatic jdbc clj java clojure java jdbc query invoke jdbc clj java toucan db query invokestatic db clj java toucan db query doinvoke db clj java clojure lang restfn invoke restfn java java toucan db simple select invokestatic db clj java toucan db simple select invoke db clj java toucan db select invokestatic db clj java toucan db select doinvoke db clj java clojure lang restfn applyto restfn java java clojure core apply invokestatic core clj java clojure core apply invoke core clj java toucan db select field gt field invokestatic db clj java toucan db select field gt field doinvoke db 
clj java clojure lang restfn invoke restfn java java api dashboard hashes gt hash vec gt avg time invokestatic dashboard clj java api dashboard hashes gt hash vec gt avg time invoke dashboard clj java api dashboard add query average duration to dashcards invokestatic dashboard clj java api dashboard add query average duration to dashcards invoke dashboard clj java api dashboard add query average durations invokestatic dashboard clj java api dashboard add query average durations invoke dashboard clj java api dashboard get dashboard invokestatic dashboard clj java api dashboard get dashboard invoke dashboard clj java api dashboard fn invokestatic dashboard clj java api dashboard fn invoke dashboard clj java middleware auth enforce authentication fn invoke auth clj java routes fn fn doinvoke routes clj java middleware exceptions catch uncaught exceptions fn invoke exceptions clj java middleware exceptions catch api exceptions fn invoke exceptions clj java middleware log log api call fn fn invoke log clj java middleware log log api call fn invoke log clj java middleware security add security headers fn invoke security clj java middleware json wrap json body fn invoke json clj java middleware json wrap streamed json response fn invoke json clj java middleware session bind current user fn fn invoke session clj java middleware session do with current user invokestatic session clj java middleware session do with current user invoke session clj java middleware session bind current user fn invoke session clj java middleware session wrap current user id fn invoke session clj java middleware session wrap session id fn invoke session clj java middleware auth wrap api key fn invoke auth clj java middleware misc maybe set site url fn invoke misc clj java middleware misc bind user locale fn invoke misc clj java middleware misc add content type fn invoke misc clj java middleware misc disable streaming buffering fn invoke misc clj java sql exception chain java 
sqlsyntaxerrorexception java java sqlstate java error code to reproduce steps to reproduce the behavior create some questions that fetch their data from any but your first table create a dashboard and attach those questions to it create a question that get its data from your first table add this question to the dashboard from you ll get server error server error the server encountered an internal error or misconfiguration and was unable to complete your request please contact the server administrator at to inform them of the time this error occurred and the actions you performed just before this error more information about this error may be available in the server error log you could test this directly in the metabase database in report dashboardcard card id for example to heal your dashboard expected behavior the dashboard should work as expected regardless of where the questions fetch their data from information about your metabase installation your operating system centos your databases mariadb columnstore metabase version metabase hosting environment jar file systemd metabase internal database mariadb severity blocker for all users annoying for the admin
| 1
|
16,983
| 22,343,836,652
|
IssuesEvent
|
2022-06-15 05:31:32
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
reopened
|
Unable to detect cpp toolchain in 5.1.0 on macOS M1
|
type: support / not a bug (process) team-Rules-CPP duplicate
|
### Description of the bug:
After upgrading from Bazel 5.0.0 to 5.1.0 (or 5.2.0), Bazel is not able to detect the cpp toolchain. This is on a macOS Monterey on M1 trying to compile a `java_library`. Works fine on x86 or Linux machines.
`--toolchain_resolution_debug=@bazel_tools//tools/cpp:toolchain_type` on 5.0.0
```
INFO: Build option --toolchain_resolution_debug has changed, discarding analysis cache.
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-armeabi-v7a; mismatching values: arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-armeabi-v7a; mismatching values: arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: execution @local_config_platform//:host: Selected toolchain @local_config_cc//:cc-compiler-darwin_arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_x86_64; mismatching values: x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_x86_64; mismatching values: x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64e; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64e; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_armv7; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_armv7; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_sim_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_sim_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64_32; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64_32; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_armv7k; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_armv7k; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Target platform @local_config_platform//:host: Selected execution platform @local_config_platform//:host, type @bazel_tools//tools/cpp:toolchain_type -> toolchain @local_config_cc//:cc-compiler-darwin_arm64
INFO: ToolchainResolution: Target platform @local_config_platform//:host: Selected execution platform @local_config_platform//:host, type @bazel_tools//tools/cpp:toolchain_type -> toolchain @local_config_cc//:cc-compiler-darwin_arm64, type @bazel_tools//tools/python:toolchain_type -> toolchain @bazel_tools//tools/python:_autodetecting_py_runtime_pair
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-armeabi-v7a; mismatching values: arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-armeabi-v7a; mismatching values: arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: execution @local_config_platform//:host: Selected toolchain @local_config_cc//:cc-compiler-darwin_arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_x86_64; mismatching values: x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_x86_64; mismatching values: x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64e; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64e; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_armv7; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_armv7; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_sim_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_sim_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64_32; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64_32; mismatching values: ios
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_armv7k; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_armv7k; mismatching values: ios, arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: ios, x86_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Target platform @local_config_platform//:host: Selected execution platform @local_config_platform//:host, type @bazel_tools//tools/cpp:toolchain_type -> toolchain @local_config_cc//:cc-compiler-darwin_arm64
```
`--toolchain_resolution_debug=@bazel_tools//tools/cpp:toolchain_type` on 5.1.0
```
INFO: Build option --toolchain_resolution_debug has changed, discarding analysis cache.
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-armeabi-v7a; mismatching values: arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-armeabi-v7a; mismatching values: arm
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_arm64; mismatching values: arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_arm64; mismatching values: arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_arm64e; mismatching values: arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_arm64e; mismatching values: arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_x86_64; mismatching values: x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-darwin_x86_64; mismatching values: x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64; mismatching values: ios, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64; mismatching values: ios, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64e; mismatching values: ios, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_arm64e; mismatching values: ios, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_armv7; mismatching values: ios, armv7
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_armv7; mismatching values: ios, armv7
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_i386; mismatching values: ios, i386
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_i386; mismatching values: ios, i386
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_sim_arm64; mismatching values: ios, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_sim_arm64; mismatching values: ios, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-ios_x86_64; mismatching values: ios, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_arm64; mismatching values: tvos, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_arm64; mismatching values: tvos, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_sim_arm64; mismatching values: tvos, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_sim_arm64; mismatching values: tvos, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_x86_64; mismatching values: tvos, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-tvos_x86_64; mismatching values: tvos, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64; mismatching values: watchos, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64; mismatching values: watchos, arm64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64_32; mismatching values: watchos, arm64_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_arm64_32; mismatching values: watchos, arm64_32
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_armv7k; mismatching values: watchos, armv7k
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_armv7k; mismatching values: watchos, armv7k
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: watchos, i386
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: watchos, i386
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: watchos, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: watchos, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: No toolchains found.
INFO: Repository remote_java_tools instantiated at:
/DEFAULT.WORKSPACE.SUFFIX:392:6: in <toplevel>
/private/var/tmp/.../169773900eda8a9daef572f31c9de082/external/bazel_tools/tools/build_defs/repo/utils.bzl:233:18: in maybe
Repository rule http_archive defined at:
/private/var/tmp/.../169773900eda8a9daef572f31c9de082/external/bazel_tools/tools/build_defs/repo/http.bzl:353:31: in <toplevel>
ERROR: /private/var/tmp/.../169773900eda8a9daef572f31c9de082/external/bazel_tools/tools/jdk/BUILD:425:10: While resolving toolchains for target @bazel_tools//tools/jdk:proguard_whitelister: No matching toolchains found for types @bazel_tools//tools/cpp:toolchain_type. Maybe --incompatible_use_cc_configure_from_rules_cc has been flipped and there is no default C++ toolchain added in the WORKSPACE file? See https://github.com/bazelbuild/bazel/issues/10134 for details and migration instructions.
```
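The error above points at the `--incompatible_use_cc_configure_from_rules_cc` migration described in bazelbuild/bazel#10134. As a sketch only (not confirmed to fix this report; it assumes the auto-configured toolchain merely needs to be registered explicitly), one common workaround is to re-run C++ autoconfiguration from the `WORKSPACE` file:

```python
# WORKSPACE (sketch -- see bazelbuild/bazel#10134 for the actual migration steps)
# Re-run Bazel's C++ toolchain autodetection and register the result explicitly,
# so toolchain resolution can find a cc toolchain for the host platform.
load("@bazel_tools//tools/cpp:cc_configure.bzl", "cc_configure")

cc_configure()
```

Whether this helps here is unverified; the 5.1.0 log suggests the host constraints themselves stopped matching, which explicit registration may not address.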
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which operating system are you running Bazel on?
macOS Monterey
### What is the output of `bazel info release`?
release 5.2.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
Found similar issues reported in other contexts, but their solutions didn't work in this situation.
### Any other information, logs, or outputs that you want to share?
_No response_
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: watchos, i386
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_i386; mismatching values: watchos, i386
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: watchos, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @local_config_cc//:cc-compiler-watchos_x86_64; mismatching values: watchos, x86_64
INFO: ToolchainResolution: Type @bazel_tools//tools/cpp:toolchain_type: target platform @local_config_platform//:host: No toolchains found.
INFO: Repository remote_java_tools instantiated at:
/DEFAULT.WORKSPACE.SUFFIX:392:6: in <toplevel>
/private/var/tmp/.../169773900eda8a9daef572f31c9de082/external/bazel_tools/tools/build_defs/repo/utils.bzl:233:18: in maybe
Repository rule http_archive defined at:
/private/var/tmp/.../169773900eda8a9daef572f31c9de082/external/bazel_tools/tools/build_defs/repo/http.bzl:353:31: in <toplevel>
ERROR: /private/var/tmp/.../169773900eda8a9daef572f31c9de082/external/bazel_tools/tools/jdk/BUILD:425:10: While resolving toolchains for target @bazel_tools//tools/jdk:proguard_whitelister: No matching toolchains found for types @bazel_tools//tools/cpp:toolchain_type. Maybe --incompatible_use_cc_configure_from_rules_cc has been flipped and there is no default C++ toolchain added in the WORKSPACE file? See https://github.com/bazelbuild/bazel/issues/10134 for details and migration instructions.
```
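The error at the end of the log points to Bazel issue #10134 (the `--incompatible_use_cc_configure_from_rules_cc` migration). A possible workaround — a sketch, assuming the default C++ autoconfiguration simply is not being registered, as the error message suggests — is to invoke `cc_configure()` explicitly in the `WORKSPACE` file:

```starlark
# WORKSPACE — hypothetical sketch; only relevant if the default
# C++ autoconfiguration is missing, per the error message.
load("@bazel_tools//tools/cpp:cc_configure.bzl", "cc_configure")

# Re-runs Bazel's C++ toolchain autodetection (Xcode/clang on macOS)
# and registers the resulting @local_config_cc toolchains.
cc_configure()
```

If that does not help, a common next step on macOS is `bazel clean --expunge` followed by re-installing the command-line tools (`xcode-select --install`), since `@local_config_cc` caches the detected Xcode location and can go stale across upgrades.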
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which operating system are you running Bazel on?
macOS Monterey
### What is the output of `bazel info release`?
release 5.2.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
Found similar issues in other situations, but the solutions didn't work in this one.
### Any other information, logs, or outputs that you want to share?
_No response_