| Column | Feature type | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 4 to 112 |
| repo_url | stringlengths | 33 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 1.02k |
| labels | stringlengths | 4 to 1.54k |
| body | stringlengths | 1 to 262k |
| index | stringclasses | 17 values |
| text_combine | stringlengths | 95 to 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 252k |
| binary_label | int64 | 0 to 1 |
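A quick way to sanity-check this schema is to load the export and compare it against the table above. A minimal sketch with pandas, assuming the preview corresponds to a CSV export; the file name `issues.csv` is hypothetical, and the column list is copied from the schema table:

```python
import pandas as pd

# Hypothetical file name; substitute the actual export of the dataset.
df = pd.read_csv("issues.csv")

# Confirm the columns match the schema table above.
expected = ["Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
            "action", "title", "labels", "body", "index", "text_combine",
            "label", "text", "binary_label"]
assert list(df.columns) == expected

print(df.dtypes)                               # id should be float64, binary_label int64
print(df["label"].value_counts())              # two classes: test / non_test
print(df["created_at"].str.len().describe())   # fixed-width timestamps (19 characters)
```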
Unnamed: 0: 250,018
id: 21,259,180,984
type: IssuesEvent
created_at: 2022-04-13 00:54:49
repo: RamiMustafa/WAF_Sec_Test
repo_url: https://api.github.com/repos/RamiMustafa/WAF_Sec_Test
action: opened
title: Classify your data at rest and use encryption
labels: WARP-Import WAF_Sec_Test Security Security & Compliance Encryption
body:
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-storage-encryption#data-at-rest">Classify your data at rest and use encryption</a> <p><b>Why Consider This?</b></p> All important data should be classified and encrypted with an encryption standard. Data at rest is encrypted by default in Azure, but is your critical data classified and tagged/labelled so that it can be audited? <p><b>Context</b></p> <p><span>Your most sensitive data might include business, financial, healthcare, or personal information. Discovering and classifying this data can play a pivotal role in your organization's information protection approach. It can serve as infrastructure for:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Helping to meet standards for data privacy and requirements for regulatory compliance</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Controlling access to and hardening the security of databases that contain highly sensitive data</span></li></ul> <p><b>Suggested Actions</b></p> <p><span>Classify your data. Consider using Data Discovery "amp; Classification in Azure SQL Database.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview" target="_blank"><span>https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview</span></a><span/></p><p><a href="https://docs.microsoft.com/en-us/azure/purview/overview" target="_blank"><span>Azure Purview</span></a><span /></p>
index: 1.0
text_combine:
Classify your data at rest and use encryption - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-storage-encryption#data-at-rest">Classify your data at rest and use encryption</a> <p><b>Why Consider This?</b></p> All important data should be classified and encrypted with an encryption standard. Data at rest is encrypted by default in Azure, but is your critical data classified and tagged/labelled so that it can be audited? <p><b>Context</b></p> <p><span>Your most sensitive data might include business, financial, healthcare, or personal information. Discovering and classifying this data can play a pivotal role in your organization's information protection approach. It can serve as infrastructure for:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Helping to meet standards for data privacy and requirements for regulatory compliance</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Controlling access to and hardening the security of databases that contain highly sensitive data</span></li></ul> <p><b>Suggested Actions</b></p> <p><span>Classify your data. Consider using Data Discovery "amp; Classification in Azure SQL Database.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview" target="_blank"><span>https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview</span></a><span/></p><p><a href="https://docs.microsoft.com/en-us/azure/purview/overview" target="_blank"><span>Azure Purview</span></a><span /></p>
label: test
text:
classify your data at rest and use encryption why consider this all important data should be classified and encrypted with an encryption standard data at rest is encrypted by default in azure but is your critical data classified and tagged labelled so that it can be audited context your most sensitive data might include business financial healthcare or personal information discovering and classifying this data can play a pivotal role in your organization s information protection approach it can serve as infrastructure for helping to meet standards for data privacy and requirements for regulatory compliance various security scenarios such as monitoring auditing and alerting on anomalous access to sensitive data controlling access to and hardening the security of databases that contain highly sensitive data suggested actions classify your data consider using data discovery amp classification in azure sql database learn more href target blank azure purview
binary_label: 1
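Comparing `text_combine` with `text` in this row suggests that `text` is derived from `text_combine` by lower-casing, stripping HTML tags, URLs, digits and punctuation, and collapsing whitespace. The dataset does not document its preprocessing, so the sketch below is only an approximation inferred from the preview rows:

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the text_combine -> text cleaning seen in the preview rows.
    The actual pipeline is undocumented, so this is a best guess from
    comparing the two columns."""
    s = re.sub(r"https?://\S+", " ", text_combine)  # drop bare URLs
    s = re.sub(r"<[^>]+>", " ", s)                  # drop HTML tags
    s = s.lower()
    s = re.sub(r"[^a-z]+", " ", s)                  # keep letters only; digits, punctuation and underscores become spaces
    return s.strip()

print(normalize("Classify your data at rest and use encryption"))
# -> "classify your data at rest and use encryption"
```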
Unnamed: 0: 179,688
id: 13,895,240,228
type: IssuesEvent
created_at: 2020-10-19 15:37:03
repo: Hurence/historian
repo_url: https://api.github.com/repos/Hurence/historian
action: closed
title: fix QueryEndPointFocusOnSamplingWithPreAggCurrentVersionIT test that have a methode that returns null
labels: tests
body:
the class historian/historian-server/src/integration-test/java/com/hurence/webapiservice/http/api/grafana/hurence/QueryEndPointFocusOnSamplingWithPreAggCurrentVersionIT.java has a methode that returns null : `private static List<Chunk> buildChunks(String metric_10_chunk, List<List<Measure>> pointsByChunk10Chunks) { return null; }`
index: 1.0
text_combine:
fix QueryEndPointFocusOnSamplingWithPreAggCurrentVersionIT test that have a methode that returns null - the class historian/historian-server/src/integration-test/java/com/hurence/webapiservice/http/api/grafana/hurence/QueryEndPointFocusOnSamplingWithPreAggCurrentVersionIT.java has a methode that returns null : `private static List<Chunk> buildChunks(String metric_10_chunk, List<List<Measure>> pointsByChunk10Chunks) { return null; }`
label: test
text:
fix queryendpointfocusonsamplingwithpreaggcurrentversionit test that have a methode that returns null the class historian historian server src integration test java com hurence webapiservice http api grafana hurence queryendpointfocusonsamplingwithpreaggcurrentversionit java has a methode that returns null private static list buildchunks string metric chunk list return null
binary_label: 1
Unnamed: 0: 649,330
id: 21,264,298,101
type: IssuesEvent
created_at: 2022-04-13 08:25:45
repo: Kukuza/mobile-wallet
repo_url: https://api.github.com/repos/Kukuza/mobile-wallet
action: opened
title: After I made a request and click ok, I should be sent back to the home page.
labels: bug High Priority
body:
Right now, I an stuck on the Top up Request page.
index: 1.0
text_combine:
After I made a request and click ok, I should be sent back to the home page. - Right now, I an stuck on the Top up Request page.
label: non_test
text:
after i made a request and click ok i should be sent back to the home page right now i an stuck on the top up request page
binary_label: 0
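This is the first non_test row, and across the preview `binary_label` simply encodes `label`: test maps to 1 and non_test maps to 0. A tiny sketch of that encoding, assuming the mapping observed here holds for the full dataset:

```python
# Mapping observed in the preview rows; assumed to hold for the whole dataset.
LABEL_TO_BINARY = {"test": 1, "non_test": 0}

def encode(label: str) -> int:
    return LABEL_TO_BINARY[label]

assert encode("test") == 1 and encode("non_test") == 0
```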
Unnamed: 0: 219,429
id: 17,091,399,028
type: IssuesEvent
created_at: 2021-07-08 17:59:01
repo: Pavangbhat/Sign-up-Flow
repo_url: https://api.github.com/repos/Pavangbhat/Sign-up-Flow
action: opened
title: Write test cases for Email field
labels: test
body:
### Write the following test cases for the Email field: - [ ] Check if the email is valid or not - [ ] Check if the email is invalid or valid then the proper class is applied to the input box or not - [ ] Check when the user stops for 2 seconds email is valid or not and check respective classes are applied or not
index: 1.0
text_combine:
Write test cases for Email field - ### Write the following test cases for the Email field: - [ ] Check if the email is valid or not - [ ] Check if the email is invalid or valid then the proper class is applied to the input box or not - [ ] Check when the user stops for 2 seconds email is valid or not and check respective classes are applied or not
label: test
text:
write test cases for email field write the following test cases for the email field check if the email is valid or not check if the email is invalid or valid then the proper class is applied to the input box or not check when the user stops for seconds email is valid or not and check respective classes are applied or not
binary_label: 1
Unnamed: 0: 60,144
id: 8,406,017,740
type: IssuesEvent
created_at: 2018-10-11 16:43:22
repo: ampproject/amphtml
repo_url: https://api.github.com/repos/ampproject/amphtml
action: closed
title: Disclose employer in Governance.md
labels: P3: When Possible Related to: Documentation
body:
We should prepare for outside core committers and add transparency to GOVERNANCE.md by disclosing the employer after each committer name. This was an idea from a third party, and I think it's a small, but important change. /cc @cramforce
index: 1.0
text_combine:
Disclose employer in Governance.md - We should prepare for outside core committers and add transparency to GOVERNANCE.md by disclosing the employer after each committer name. This was an idea from a third party, and I think it's a small, but important change. /cc @cramforce
label: non_test
text:
disclose employer in governance md we should prepare for outside core committers and add transparency to governance md by disclosing the employer after each committer name this was an idea from a third party and i think it s a small but important change cc cramforce
binary_label: 0
Unnamed: 0: 134,335
id: 10,894,804,106
type: IssuesEvent
created_at: 2019-11-19 09:26:27
repo: elastic/elasticsearch
repo_url: https://api.github.com/repos/elastic/elasticsearch
action: closed
title: MlDistributedFailureIT.testCloseUnassignedJobAndDatafeed fails with NodeNotConnectedException
labels: :ml >test-failure v8.0.0
body:
## Example build failure https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+intake/370/console https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=oraclelinux-6/87/console And quite a few PR checks. https://scans.gradle.com/s/bsbkz6io7ysno/tests/lf2lfu4ufazso-jxctggmo7ue4i ## Reproduction line does not reproduce locally ``` ./gradlew :x-pack:plugin:ml:internalClusterTest --tests "org.elasticsearch.xpack.ml.integration.MlDistributedFailureIT.testCloseUnassignedJobAndDatafeed" -Dtests.seed=8DE3FEE00F9B4146 -Dtests.security.manager=true -Dtests.locale=bem-ZM -Dtests.timezone=Australia/Sydney -Dcompiler.java=12 -Druntime.java=12 ``` ## Example relevant log: ``` org.elasticsearch.action.FailedNodeException: Failed node [itGQaU1qT_mC05foNcu0qA]Close stacktrace at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction.onFailure(TransportTasksAction.java:308) at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction$1.handleException(TransportTasksAction.java:280) at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:535) at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction.start(TransportTasksAction.java:264) at org.elasticsearch.action.support.tasks.TransportTasksAction.doExecute(TransportTasksAction.java:96) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.normalStopDatafeed(TransportStopDatafeedAction.java:175) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.lambda$doExecute$0(TransportStopDatafeedAction.java:130) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider.lambda$expandDatafeedIds$3(DatafeedConfigProvider.java:387) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:68) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:64) at org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:300) at org.elasticsearch.action.search.FetchSearchPhase$3.run(FetchSearchPhase.java:213) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:119) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:206) at org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$2(FetchSearchPhase.java:104) at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:110) at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:86) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) at 
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:757) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:835) Caused by: org.elasticsearch.transport.NodeNotConnectedException: [node_t2][127.0.0.1:45727] Node not connectedClose stacktrace at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151) at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:559) at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:531) at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction.start(TransportTasksAction.java:264) at org.elasticsearch.action.support.tasks.TransportTasksAction.doExecute(TransportTasksAction.java:96) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.normalStopDatafeed(TransportStopDatafeedAction.java:175) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.lambda$doExecute$0(TransportStopDatafeedAction.java:130) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider.lambda$expandDatafeedIds$3(DatafeedConfigProvider.java:387) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:68) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:64) at org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:300) at org.elasticsearch.action.search.FetchSearchPhase$3.run(FetchSearchPhase.java:213) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:119) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:206) at org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$2(FetchSearchPhase.java:104) at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:110) at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:86) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:757) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at 
java.lang.Thread.run(Thread.java:835) ```
index: 1.0
text_combine:
MlDistributedFailureIT.testCloseUnassignedJobAndDatafeed fails with NodeNotConnectedException - ## Example build failure https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+intake/370/console https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=oraclelinux-6/87/console And quite a few PR checks. https://scans.gradle.com/s/bsbkz6io7ysno/tests/lf2lfu4ufazso-jxctggmo7ue4i ## Reproduction line does not reproduce locally ``` ./gradlew :x-pack:plugin:ml:internalClusterTest --tests "org.elasticsearch.xpack.ml.integration.MlDistributedFailureIT.testCloseUnassignedJobAndDatafeed" -Dtests.seed=8DE3FEE00F9B4146 -Dtests.security.manager=true -Dtests.locale=bem-ZM -Dtests.timezone=Australia/Sydney -Dcompiler.java=12 -Druntime.java=12 ``` ## Example relevant log: ``` org.elasticsearch.action.FailedNodeException: Failed node [itGQaU1qT_mC05foNcu0qA]Close stacktrace at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction.onFailure(TransportTasksAction.java:308) at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction$1.handleException(TransportTasksAction.java:280) at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:535) at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction.start(TransportTasksAction.java:264) at org.elasticsearch.action.support.tasks.TransportTasksAction.doExecute(TransportTasksAction.java:96) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.normalStopDatafeed(TransportStopDatafeedAction.java:175) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.lambda$doExecute$0(TransportStopDatafeedAction.java:130) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider.lambda$expandDatafeedIds$3(DatafeedConfigProvider.java:387) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:68) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:64) at org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:300) at org.elasticsearch.action.search.FetchSearchPhase$3.run(FetchSearchPhase.java:213) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:119) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:206) at org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$2(FetchSearchPhase.java:104) at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:110) at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:86) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at 
org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:757) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:835) Caused by: org.elasticsearch.transport.NodeNotConnectedException: [node_t2][127.0.0.1:45727] Node not connectedClose stacktrace at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151) at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:559) at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:531) at org.elasticsearch.action.support.tasks.TransportTasksAction$AsyncAction.start(TransportTasksAction.java:264) at org.elasticsearch.action.support.tasks.TransportTasksAction.doExecute(TransportTasksAction.java:96) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.normalStopDatafeed(TransportStopDatafeedAction.java:175) at org.elasticsearch.xpack.ml.action.TransportStopDatafeedAction.lambda$doExecute$0(TransportStopDatafeedAction.java:130) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider.lambda$expandDatafeedIds$3(DatafeedConfigProvider.java:387) at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:68) at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:64) at org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:300) at org.elasticsearch.action.search.FetchSearchPhase$3.run(FetchSearchPhase.java:213) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:119) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:171) at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:165) at org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:206) at org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$2(FetchSearchPhase.java:104) at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:110) at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:86) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:757) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:835) ```
label: test
text:
mldistributedfailureit testcloseunassignedjobanddatafeed fails with nodenotconnectedexception example build failure and quite a few pr checks reproduction line does not reproduce locally gradlew x pack plugin ml internalclustertest tests org elasticsearch xpack ml integration mldistributedfailureit testcloseunassignedjobanddatafeed dtests seed dtests security manager true dtests locale bem zm dtests timezone australia sydney dcompiler java druntime java example relevant log org elasticsearch action failednodeexception failed node close stacktrace at org elasticsearch action support tasks transporttasksaction asyncaction onfailure transporttasksaction java at org elasticsearch action support tasks transporttasksaction asyncaction handleexception transporttasksaction java at org elasticsearch transport transportservice sendrequest transportservice java at org elasticsearch action support tasks transporttasksaction asyncaction start transporttasksaction java at org elasticsearch action support tasks transporttasksaction doexecute transporttasksaction java at org elasticsearch xpack ml action transportstopdatafeedaction normalstopdatafeed transportstopdatafeedaction java at org elasticsearch xpack ml action transportstopdatafeedaction lambda doexecute transportstopdatafeedaction java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch xpack ml datafeed persistence datafeedconfigprovider lambda expanddatafeedids datafeedconfigprovider java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch action support contextpreservingactionlistener onresponse contextpreservingactionlistener java at org elasticsearch action support transportaction onresponse transportaction java at org elasticsearch action support transportaction onresponse transportaction java at org elasticsearch action search abstractsearchasyncaction sendsearchresponse abstractsearchasyncaction java at org elasticsearch action search fetchsearchphase run fetchsearchphase java at org elasticsearch action search abstractsearchasyncaction executephase abstractsearchasyncaction java at org elasticsearch action search abstractsearchasyncaction executenextphase abstractsearchasyncaction java at org elasticsearch action search expandsearchphase run expandsearchphase java at org elasticsearch action search abstractsearchasyncaction executephase abstractsearchasyncaction java at org elasticsearch action search abstractsearchasyncaction executenextphase abstractsearchasyncaction java at org elasticsearch action search fetchsearchphase movetonextphase fetchsearchphase java at org elasticsearch action search fetchsearchphase lambda innerrun fetchsearchphase java at org elasticsearch action search fetchsearchphase innerrun fetchsearchphase java at org elasticsearch action search fetchsearchphase dorun fetchsearchphase java at org elasticsearch common util concurrent abstractrunnable run abstractrunnable java at org elasticsearch common util concurrent timedrunnable dorun timedrunnable java at org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java at org elasticsearch common util concurrent abstractrunnable run abstractrunnable java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org elasticsearch transport nodenotconnectedexception node not connectedclose 
stacktrace at org elasticsearch transport connectionmanager getconnection connectionmanager java at org elasticsearch transport transportservice getconnection transportservice java at org elasticsearch transport transportservice sendrequest transportservice java at org elasticsearch action support tasks transporttasksaction asyncaction start transporttasksaction java at org elasticsearch action support tasks transporttasksaction doexecute transporttasksaction java at org elasticsearch xpack ml action transportstopdatafeedaction normalstopdatafeed transportstopdatafeedaction java at org elasticsearch xpack ml action transportstopdatafeedaction lambda doexecute transportstopdatafeedaction java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch xpack ml datafeed persistence datafeedconfigprovider lambda expanddatafeedids datafeedconfigprovider java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch action support contextpreservingactionlistener onresponse contextpreservingactionlistener java at org elasticsearch action support transportaction onresponse transportaction java at org elasticsearch action support transportaction onresponse transportaction java at org elasticsearch action search abstractsearchasyncaction sendsearchresponse abstractsearchasyncaction java at org elasticsearch action search fetchsearchphase run fetchsearchphase java at org elasticsearch action search abstractsearchasyncaction executephase abstractsearchasyncaction java at org elasticsearch action search abstractsearchasyncaction executenextphase abstractsearchasyncaction java at org elasticsearch action search expandsearchphase run expandsearchphase java at org elasticsearch action search abstractsearchasyncaction executephase abstractsearchasyncaction java at org elasticsearch action search abstractsearchasyncaction executenextphase abstractsearchasyncaction java at org elasticsearch action search fetchsearchphase movetonextphase fetchsearchphase java at org elasticsearch action search fetchsearchphase lambda innerrun fetchsearchphase java at org elasticsearch action search fetchsearchphase innerrun fetchsearchphase java at org elasticsearch action search fetchsearchphase dorun fetchsearchphase java at org elasticsearch common util concurrent abstractrunnable run abstractrunnable java at org elasticsearch common util concurrent timedrunnable dorun timedrunnable java at org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java at org elasticsearch common util concurrent abstractrunnable run abstractrunnable java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
binary_label: 1
Unnamed: 0: 87,814
id: 8,123,002,270
type: IssuesEvent
created_at: 2018-08-16 13:27:47
repo: galenframework/galen
repo_url: https://api.github.com/repos/galenframework/galen
action: closed
title: Spec metadata in report
labels: c1 enhancement p3 ready for test
body:
It would be nice to see in the resulting html report what is exactly verified. So in the end we could actually see some kind of arrows and line markers. Therefore there should be a good way of structuring such information in json report ## Examples The following are examples of how it could be rendered in json report ### For spec inside: ``` "name" : "cancel_button", "specs" : [ { "status" : "error", "name" : "inside login_box 0px bottom", "errors" : [ "\"cancel_button\" is 1px bottom instead of 0px" ], "highlight" : [ "cancel_button", "login_box" ], "meta": [{ "type": "edge-distance", "from": { "object": "login_box", "edge": "bottom" }, "to": { "object": "cancel_button", "edge": "bottom" }, "expectedDistance": "0px", "realDistance": "1px" }] }] ``` ### For spec aligned ``` "objects" : [ { "name" : "password_textfield", "specs" : [ { "status" : "info", "name" : "aligned vertically all error_message", "highlight" : [ "password_textfield", "error_message" ], "meta": [{ "type": "edge-distance", "from": { "object": "password_textfield", "edge": "left" }, "to": { "object": "error_message", "edge": "left" }, "expectedDistance": "0px", "realDistance": "0px" },{ "type": "edge-distance", "from": { "object": "password_textfield", "edge": "right" }, "to": { "object": "error_message", "edge": "right" }, "expectedDistance": "0px", "realDistance": "0px" }] } ] } ] ```
index: 1.0
text_combine:
Spec metadata in report - It would be nice to see in the resulting html report what is exactly verified. So in the end we could actually see some kind of arrows and line markers. Therefore there should be a good way of structuring such information in json report ## Examples The following are examples of how it could be rendered in json report ### For spec inside: ``` "name" : "cancel_button", "specs" : [ { "status" : "error", "name" : "inside login_box 0px bottom", "errors" : [ "\"cancel_button\" is 1px bottom instead of 0px" ], "highlight" : [ "cancel_button", "login_box" ], "meta": [{ "type": "edge-distance", "from": { "object": "login_box", "edge": "bottom" }, "to": { "object": "cancel_button", "edge": "bottom" }, "expectedDistance": "0px", "realDistance": "1px" }] }] ``` ### For spec aligned ``` "objects" : [ { "name" : "password_textfield", "specs" : [ { "status" : "info", "name" : "aligned vertically all error_message", "highlight" : [ "password_textfield", "error_message" ], "meta": [{ "type": "edge-distance", "from": { "object": "password_textfield", "edge": "left" }, "to": { "object": "error_message", "edge": "left" }, "expectedDistance": "0px", "realDistance": "0px" },{ "type": "edge-distance", "from": { "object": "password_textfield", "edge": "right" }, "to": { "object": "error_message", "edge": "right" }, "expectedDistance": "0px", "realDistance": "0px" }] } ] } ] ```
label: test
text:
spec metadata in report it would be nice to see in the resulting html report what is exactly verified so in the end we could actually see some kind of arrows and line markers therefore there should be a good way of structuring such information in json report examples the following are examples of how it could be rendered in json report for spec inside name cancel button specs status error name inside login box bottom errors highlight meta type edge distance from object login box edge bottom to object cancel button edge bottom expecteddistance realdistance for spec aligned objects name password textfield specs status info name aligned vertically all error message highlight meta type edge distance from object password textfield edge left to object error message edge left expecteddistance realdistance type edge distance from object password textfield edge right to object error message edge right expecteddistance realdistance
binary_label: 1
Unnamed: 0: 818,322
id: 30,683,732,542
type: IssuesEvent
created_at: 2023-07-26 10:52:43
repo: wso2/product-is
repo_url: https://api.github.com/repos/wso2/product-is
action: closed
title: Unify the behaviour of OTP connectors
labels: improvement Priority/Normal
body:
**Is your suggestion related to an experience ? Please describe.** 1. EmailOTP Authenticator returns generic error "authentication.fail.message" for expired OTPs except in case user account is locked whereas SMSOTP authenticator checks and returns "token.expired" for expired OTPs. In this case it's better we can decide what is the correct error message that we need to display to the customer and proceed with that. 3. In case the user's account is locked, the SMSOTP authenticator prompts to enter the OTP and after the OTP is submitted only, the error message "User's account is locked. Please try again later" will be displayed, while the EmailOTP shows the error message "user account is locked. please try again later" without prompting to enter the OTP. In this case we need to define what is the correct flow that we need to follow and proceed with that. **Describe the improvement** - Improve the error message with the common flows
index: 1.0
text_combine:
Unify the behaviour of OTP connectors - **Is your suggestion related to an experience ? Please describe.** 1. EmailOTP Authenticator returns generic error "authentication.fail.message" for expired OTPs except in case user account is locked whereas SMSOTP authenticator checks and returns "token.expired" for expired OTPs. In this case it's better we can decide what is the correct error message that we need to display to the customer and proceed with that. 3. In case the user's account is locked, the SMSOTP authenticator prompts to enter the OTP and after the OTP is submitted only, the error message "User's account is locked. Please try again later" will be displayed, while the EmailOTP shows the error message "user account is locked. please try again later" without prompting to enter the OTP. In this case we need to define what is the correct flow that we need to follow and proceed with that. **Describe the improvement** - Improve the error message with the common flows
label: non_test
text:
unify the behaviour of otp connectors is your suggestion related to an experience please describe emailotp authenticator returns generic error authentication fail message for expired otps except in case user account is locked whereas smsotp authenticator checks and returns token expired for expired otps in this case it s better we can decide what is the correct error message that we need to display to the customer and proceed with that in case the user s account is locked the smsotp authenticator prompts to enter the otp and after the otp is submitted only the error message user s account is locked please try again later will be displayed while the emailotp shows the error message user account is locked please try again later without prompting to enter the otp in this case we need to define what is the correct flow that we need to follow and proceed with that describe the improvement improve the error message with the common flows
binary_label: 0
Unnamed: 0: 420,124
id: 12,233,574,411
type: IssuesEvent
created_at: 2020-05-04 11:54:40
repo: hotosm/tasking-manager
repo_url: https://api.github.com/repos/hotosm/tasking-manager
action: closed
title: Browser testing and fixing
labels: Component: Frontend Difficulty: Medium Priority: Medium Status: Needs implementation Type: Bug
body:
Lets test the frontend with the browsers we want to support and lets write down what TM4 supports on least browser versions for: * Firefox * Chrome * Safari * Edge * ...
index: 1.0
text_combine:
Browser testing and fixing - Lets test the frontend with the browsers we want to support and lets write down what TM4 supports on least browser versions for: * Firefox * Chrome * Safari * Edge * ...
label: non_test
text:
browser testing and fixing lets test the frontend with the browsers we want to support and lets write down what supports on least browser versions for firefox chrome safari edge
binary_label: 0
Unnamed: 0: 640
id: 9,312,399,004
type: IssuesEvent
created_at: 2019-03-26 01:02:54
repo: Microsoft/calculator
repo_url: https://api.github.com/repos/Microsoft/calculator
action: closed
title: [Watson Failure] caused by CPP_EXCEPTION_e06d7363_Calculator.exe!Windows::UI::Xaml::PropertyMetadata::PropertyMetadata
labels: Area: Reliability Bug Pri: 3 won't fix
body:
| |symbol | offset | filename | line | |---|-------|--------|----------|-----:| |0|kernelbase!RaiseException|0x0000000000000068|xcpt.c|904| |1|vcruntime140_app!_CxxThrowException|0x00000000000000AD|throw.cpp|133| |2|vccorlib140_app!__abi_WinRTraiseCOMException|0x0000000000000035|exceptions.cpp|553| |3|calculator!__abi_WinRTraiseException|0x00000000000000D8|vccorlib.h|1134| |4|calculator!Windows::UI::Xaml::PropertyMetadata::PropertyMetadata|0x0000000000000107||0| |5|calculator!Utils::RegisterDependencyPropertyAttachedWithCallback_CalculatorApp::Common::KeyboardShortcutManager,Platform::String ^,void (__cdecl*)|0x0000000000000076|utils.h|241| |6|calculator!`dynamic initializer for 'CalculatorApp::Common::KeyboardShortcutManager::s_CharacterProperty''|0x0000000000000014|keyboardshortcutmanager.cpp|21| |7|ucrtbase!_initterm|0x000000000000008E|initterm.cpp|21| |8|calculator!__scrt_common_main_seh|0x000000000000007C|exe_common.inl|258| |9|kernel32!BaseThreadInitThunk|0x0000000000000014|thread.c|64| |10|ntdll!RtlUserThreadStart|0x0000000000000021|rtlstrt.c|997|
index: True
text_combine:
[Watson Failure] caused by CPP_EXCEPTION_e06d7363_Calculator.exe!Windows::UI::Xaml::PropertyMetadata::PropertyMetadata - | |symbol | offset | filename | line | |---|-------|--------|----------|-----:| |0|kernelbase!RaiseException|0x0000000000000068|xcpt.c|904| |1|vcruntime140_app!_CxxThrowException|0x00000000000000AD|throw.cpp|133| |2|vccorlib140_app!__abi_WinRTraiseCOMException|0x0000000000000035|exceptions.cpp|553| |3|calculator!__abi_WinRTraiseException|0x00000000000000D8|vccorlib.h|1134| |4|calculator!Windows::UI::Xaml::PropertyMetadata::PropertyMetadata|0x0000000000000107||0| |5|calculator!Utils::RegisterDependencyPropertyAttachedWithCallback_CalculatorApp::Common::KeyboardShortcutManager,Platform::String ^,void (__cdecl*)|0x0000000000000076|utils.h|241| |6|calculator!`dynamic initializer for 'CalculatorApp::Common::KeyboardShortcutManager::s_CharacterProperty''|0x0000000000000014|keyboardshortcutmanager.cpp|21| |7|ucrtbase!_initterm|0x000000000000008E|initterm.cpp|21| |8|calculator!__scrt_common_main_seh|0x000000000000007C|exe_common.inl|258| |9|kernel32!BaseThreadInitThunk|0x0000000000000014|thread.c|64| |10|ntdll!RtlUserThreadStart|0x0000000000000021|rtlstrt.c|997|
label: non_test
text:
caused by cpp exception calculator exe windows ui xaml propertymetadata propertymetadata symbol offset filename line kernelbase raiseexception xcpt c app cxxthrowexception throw cpp app abi winrtraisecomexception exceptions cpp calculator abi winrtraiseexception vccorlib h calculator windows ui xaml propertymetadata propertymetadata calculator utils registerdependencypropertyattachedwithcallback calculatorapp common keyboardshortcutmanager platform string void cdecl utils h calculator dynamic initializer for calculatorapp common keyboardshortcutmanager s characterproperty keyboardshortcutmanager cpp ucrtbase initterm initterm cpp calculator scrt common main seh exe common inl basethreadinitthunk thread c ntdll rtluserthreadstart rtlstrt c
binary_label: 0
Unnamed: 0: 89,764
id: 8,213,537,095
type: IssuesEvent
created_at: 2018-09-04 19:53:05
repo: dart-lang/sdk
repo_url: https://api.github.com/repos/dart-lang/sdk
action: opened
title: index_assign_operator_infer_return_type_test needs triage
labels: area-language area-test
body:
This test needs to be changed to assert that the return type is void rather than test inheriting two operators with different return types. > It is a compile-time error if the return type of a user-declared operator \code{[]=} is explicitly declared and not \VOID{}.
index: 1.0
text_combine:
index_assign_operator_infer_return_type_test needs triage - This test needs to be changed to assert that the return type is void rather than test inheriting two operators with different return types. > It is a compile-time error if the return type of a user-declared operator \code{[]=} is explicitly declared and not \VOID{}.
label: test
text:
index assign operator infer return type test needs triage this test needs to be changed to assert that the return type is void rather than test inheriting two operators with different return types it is a compile time error if the return type of a user declared operator code is explicitly declared and not void
binary_label: 1
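The `label` and `binary_label` columns pair a test/non_test tag with its 0/1 encoding, which suggests the preview comes from a binary "is this issue test-related" classification task. Purely as an illustrative usage sketch (the dataset itself prescribes no model, and the `issues.csv` file name is again hypothetical), a TF-IDF plus logistic-regression baseline over the `text` and `binary_label` columns could look like this:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical export of the dataset; column names come from the schema table above.
df = pd.read_csv("issues.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["binary_label"],
    test_size=0.2, stratify=df["binary_label"], random_state=0,
)

baseline = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(X_train, y_train)

# 0 = non_test, 1 = test, per the mapping observed in the preview rows.
print(classification_report(y_test, baseline.predict(X_test), target_names=["non_test", "test"]))
```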
Unnamed: 0: 27,742
id: 8,031,854,518
type: IssuesEvent
created_at: 2018-07-28 07:40:03
repo: envoyproxy/envoy
repo_url: https://api.github.com/repos/envoyproxy/envoy
action: closed
title: Build failed on newest master
labels: build help wanted
body:
I cloned the newest master on my Ubuntu17.04, and I did ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.dev' it is seems downloaded a docker image, and after that, it is build fail, my env is behind a proxy. the log is below: `root@ubuntu:/home/mark/envoy# ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.dev' ENVOY_SRCDIR=/source HEAD is now at 3e5b733... cleanup: match NOT_IMPLEMENTED_GCOVR_EXCL_LINE change (#53) building using 4 CPUs clang-5.0/clang++-5.0 toolchain configured bazel fastbuild build with tests... Building... $TEST_TMPDIR defined: output root default is '/build/tmp' and max_idle_secs default is '15'. WARNING: ignoring http_proxy in environment. WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown". DEBUG: /build/tmp/_bazel_bazel/436badd4919a15958fa3800a4e21074a/external/io_bazel_rules_go/proto/def.bzl:138:3: You no longer need to call proto_register_toolchains(), it does nothing Unhandled exception thrown during build; message: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') INFO: Elapsed time: 3.204s INFO: 0 processes. FAILED: Build did NOT complete successfully (68 packages loaded) currently loading: @envoy//ci/prebuilt ... (2 packages) java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:460) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:355) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at com.sun.security.ntlm.Client.type3(Client.java:161) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.buildType3Msg(NTLMAuthentication.java:250) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.setHeaders(NTLMAuthentication.java:225) at sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2114) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnector.connect(HttpConnector.java:113) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.establishConnection(HttpConnectorMultiplexer.java:301) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.connect(HttpConnectorMultiplexer.java:126) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:221) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:131) at com.google.devtools.build.lib.bazel.repository.NewHttpArchiveFunction.fetch(NewHttpArchiveFunction.java:81) at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:188) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:382) ... 
4 more java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:460) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:355) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at com.sun.security.ntlm.Client.type3(Client.java:161) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.buildType3Msg(NTLMAuthentication.java:250) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.setHeaders(NTLMAuthentication.java:225) at sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2114) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnector.connect(HttpConnector.java:113) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.establishConnection(HttpConnectorMultiplexer.java:301) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.connect(HttpConnectorMultiplexer.java:126) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:221) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:131) at com.google.devtools.build.lib.bazel.repository.NewHttpArchiveFunction.fetch(NewHttpArchiveFunction.java:81) at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:188) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:382) ... 4 more Remote logging disabled for testing, forcing abrupt shutdown. 
com.google.devtools.build.lib.util.LoggingUtil#logToRemote: bazel crashed with args: build --startup_time=9 --binary_path=/usr/bin/bazel --rc_source=client --default_override=0:common=--isatty=1 --default_override=0:common=--terminal_columns=207 --rc_source=/source/ci/tools/bazel.rc --rc_source=/etc/bazel.bazelrc --default_override=1:build:clang-msan=--define --default_override=1:build:clang-msan=ENVOY_CONFIG_MSAN=1 --default_override=1:build:clang-msan=--copt --default_override=1:build:clang-msan=-fsanitize=memory --default_override=1:build:clang-msan=--linkopt --default_override=1:build:clang-msan=-fsanitize=memory --default_override=1:build:clang-msan=--define --default_override=1:build:clang-msan=tcmalloc=disabled --default_override=1:build:clang-msan=--copt --default_override=1:build:clang-msan=-fsanitize-memory-track-origins=2 --default_override=1:test=--test_env=HEAPCHECK=normal --default_override=1:test=--test_env=PPROF_PATH --default_override=1:build:clang-asan=--define --default_override=1:build:clang-asan=ENVOY_CONFIG_ASAN=1 --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-D__SANITIZE_ADDRESS__ --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-fsanitize=address,undefined --default_override=1:build:clang-asan=--linkopt --default_override=1:build:clang-asan=-fsanitize=address,undefined --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-fno-sanitize=vptr --default_override=1:build:clang-asan=--linkopt --default_override=1:build:clang-asan=-fno-sanitize=vptr --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-fno-sanitize-recover=all --default_override=1:build:clang-asan=--linkopt --default_override=1:build:clang-asan=-ldl --default_override=1:build:clang-asan=--define --default_override=1:build:clang-asan=tcmalloc=disabled --default_override=1:build:clang-asan=--build_tag_filters=-no_asan --default_override=1:build:clang-asan=--test_tag_filters=-no_asan --default_override=1:build:clang-asan=--define --default_override=1:build:clang-asan=signal_trace=disabled --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-DADDRESS_SANITIZER=1 --default_override=1:build:clang-asan=--test_env=ASAN_SYMBOLIZER_PATH --default_override=1:build:clang-tsan=--define --default_override=1:build:clang-tsan=ENVOY_CONFIG_TSAN=1 --default_override=1:build:clang-tsan=--copt --default_override=1:build:clang-tsan=-fsanitize=thread --default_override=1:build:clang-tsan=--linkopt --default_override=1:build:clang-tsan=-fsanitize=thread --default_override=1:build:clang-tsan=--define --default_override=1:build:clang-tsan=tcmalloc=disabled --default_override=1:build:asan=--define --default_override=1:build:asan=ENVOY_CONFIG_ASAN=1 --default_override=1:build:asan=--copt --default_override=1:build:asan=-fsanitize=address,undefined --default_override=1:build:asan=--linkopt --default_override=1:build:asan=-fsanitize=address,undefined --default_override=1:build:asan=--copt --default_override=1:build:asan=-fno-sanitize=vptr --default_override=1:build:asan=--linkopt --default_override=1:build:asan=-fno-sanitize=vptr --default_override=1:build:asan=--linkopt --default_override=1:build:asan=-ldl --default_override=1:build:asan=--define --default_override=1:build:asan=tcmalloc=disabled --default_override=1:build:asan=--build_tag_filters=-no_asan --default_override=1:build:asan=--test_tag_filters=-no_asan --default_override=1:build:asan=--define 
--default_override=1:build:asan=signal_trace=disabled --default_override=1:build:asan=--copt --default_override=1:build:asan=-DADDRESS_SANITIZER=1 --default_override=1:build=--workspace_status_command=bazel/get_workspace_status --client_cwd=/source/ci --strategy=Genrule=standalone --spawn_strategy=standalone --verbose_failures --package_path %workspace%:/source --action_env=HOME --action_env=PYTHONUSERBASE --jobs=4 --show_task_finish -c fastbuild //source/exe:envoy-static java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:460) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:355) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at com.sun.security.ntlm.Client.type3(Client.java:161) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.buildType3Msg(NTLMAuthentication.java:250) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.setHeaders(NTLMAuthentication.java:225) at sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2114) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnector.connect(HttpConnector.java:113) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.establishConnection(HttpConnectorMultiplexer.java:301) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.connect(HttpConnectorMultiplexer.java:126) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:221) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:131) at com.google.devtools.build.lib.bazel.repository.NewHttpArchiveFunction.fetch(NewHttpArchiveFunction.java:81) at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:188) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:382) ... 4 more root@ubuntu:/home/mark/envoy# ` the evn: `root@ubuntu:/home/mark/envoy# uname -a Linux ubuntu 4.10.0-19-generic #21-Ubuntu SMP Thu Apr 6 17:04:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux` `root@ubuntu:/home/mark/envoy# docker --version Docker version 1.6.2, build 7c8fca2`
index: 1.0
text_combine:
Build failed on newest master - I cloned the newest master on my Ubuntu17.04, and I did ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.dev' it is seems downloaded a docker image, and after that, it is build fail, my env is behind a proxy. the log is below: `root@ubuntu:/home/mark/envoy# ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.dev' ENVOY_SRCDIR=/source HEAD is now at 3e5b733... cleanup: match NOT_IMPLEMENTED_GCOVR_EXCL_LINE change (#53) building using 4 CPUs clang-5.0/clang++-5.0 toolchain configured bazel fastbuild build with tests... Building... $TEST_TMPDIR defined: output root default is '/build/tmp' and max_idle_secs default is '15'. WARNING: ignoring http_proxy in environment. WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown". DEBUG: /build/tmp/_bazel_bazel/436badd4919a15958fa3800a4e21074a/external/io_bazel_rules_go/proto/def.bzl:138:3: You no longer need to call proto_register_toolchains(), it does nothing Unhandled exception thrown during build; message: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') INFO: Elapsed time: 3.204s INFO: 0 processes. FAILED: Build did NOT complete successfully (68 packages loaded) currently loading: @envoy//ci/prebuilt ... (2 packages) java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:460) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:355) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at com.sun.security.ntlm.Client.type3(Client.java:161) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.buildType3Msg(NTLMAuthentication.java:250) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.setHeaders(NTLMAuthentication.java:225) at sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2114) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnector.connect(HttpConnector.java:113) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.establishConnection(HttpConnectorMultiplexer.java:301) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.connect(HttpConnectorMultiplexer.java:126) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:221) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:131) at com.google.devtools.build.lib.bazel.repository.NewHttpArchiveFunction.fetch(NewHttpArchiveFunction.java:81) at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:188) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:382) ... 
4 more java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:460) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:355) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at com.sun.security.ntlm.Client.type3(Client.java:161) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.buildType3Msg(NTLMAuthentication.java:250) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.setHeaders(NTLMAuthentication.java:225) at sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2114) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnector.connect(HttpConnector.java:113) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.establishConnection(HttpConnectorMultiplexer.java:301) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.connect(HttpConnectorMultiplexer.java:126) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:221) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:131) at com.google.devtools.build.lib.bazel.repository.NewHttpArchiveFunction.fetch(NewHttpArchiveFunction.java:81) at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:188) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:382) ... 4 more Remote logging disabled for testing, forcing abrupt shutdown. 
com.google.devtools.build.lib.util.LoggingUtil#logToRemote: bazel crashed with args: build --startup_time=9 --binary_path=/usr/bin/bazel --rc_source=client --default_override=0:common=--isatty=1 --default_override=0:common=--terminal_columns=207 --rc_source=/source/ci/tools/bazel.rc --rc_source=/etc/bazel.bazelrc --default_override=1:build:clang-msan=--define --default_override=1:build:clang-msan=ENVOY_CONFIG_MSAN=1 --default_override=1:build:clang-msan=--copt --default_override=1:build:clang-msan=-fsanitize=memory --default_override=1:build:clang-msan=--linkopt --default_override=1:build:clang-msan=-fsanitize=memory --default_override=1:build:clang-msan=--define --default_override=1:build:clang-msan=tcmalloc=disabled --default_override=1:build:clang-msan=--copt --default_override=1:build:clang-msan=-fsanitize-memory-track-origins=2 --default_override=1:test=--test_env=HEAPCHECK=normal --default_override=1:test=--test_env=PPROF_PATH --default_override=1:build:clang-asan=--define --default_override=1:build:clang-asan=ENVOY_CONFIG_ASAN=1 --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-D__SANITIZE_ADDRESS__ --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-fsanitize=address,undefined --default_override=1:build:clang-asan=--linkopt --default_override=1:build:clang-asan=-fsanitize=address,undefined --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-fno-sanitize=vptr --default_override=1:build:clang-asan=--linkopt --default_override=1:build:clang-asan=-fno-sanitize=vptr --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-fno-sanitize-recover=all --default_override=1:build:clang-asan=--linkopt --default_override=1:build:clang-asan=-ldl --default_override=1:build:clang-asan=--define --default_override=1:build:clang-asan=tcmalloc=disabled --default_override=1:build:clang-asan=--build_tag_filters=-no_asan --default_override=1:build:clang-asan=--test_tag_filters=-no_asan --default_override=1:build:clang-asan=--define --default_override=1:build:clang-asan=signal_trace=disabled --default_override=1:build:clang-asan=--copt --default_override=1:build:clang-asan=-DADDRESS_SANITIZER=1 --default_override=1:build:clang-asan=--test_env=ASAN_SYMBOLIZER_PATH --default_override=1:build:clang-tsan=--define --default_override=1:build:clang-tsan=ENVOY_CONFIG_TSAN=1 --default_override=1:build:clang-tsan=--copt --default_override=1:build:clang-tsan=-fsanitize=thread --default_override=1:build:clang-tsan=--linkopt --default_override=1:build:clang-tsan=-fsanitize=thread --default_override=1:build:clang-tsan=--define --default_override=1:build:clang-tsan=tcmalloc=disabled --default_override=1:build:asan=--define --default_override=1:build:asan=ENVOY_CONFIG_ASAN=1 --default_override=1:build:asan=--copt --default_override=1:build:asan=-fsanitize=address,undefined --default_override=1:build:asan=--linkopt --default_override=1:build:asan=-fsanitize=address,undefined --default_override=1:build:asan=--copt --default_override=1:build:asan=-fno-sanitize=vptr --default_override=1:build:asan=--linkopt --default_override=1:build:asan=-fno-sanitize=vptr --default_override=1:build:asan=--linkopt --default_override=1:build:asan=-ldl --default_override=1:build:asan=--define --default_override=1:build:asan=tcmalloc=disabled --default_override=1:build:asan=--build_tag_filters=-no_asan --default_override=1:build:asan=--test_tag_filters=-no_asan --default_override=1:build:asan=--define 
--default_override=1:build:asan=signal_trace=disabled --default_override=1:build:asan=--copt --default_override=1:build:asan=-DADDRESS_SANITIZER=1 --default_override=1:build=--workspace_status_command=bazel/get_workspace_status --client_cwd=/source/ci --strategy=Genrule=standalone --spawn_strategy=standalone --verbose_failures --package_path %workspace%:/source --action_env=HOME --action_env=PYTHONUSERBASE --jobs=4 --show_task_finish -c fastbuild //source/exe:envoy-static java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@com_github_fmtlib_fmt' (requested by nodes 'REPOSITORY:@com_github_fmtlib_fmt') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:460) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:355) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NullPointerException at com.sun.security.ntlm.Client.type3(Client.java:161) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.buildType3Msg(NTLMAuthentication.java:250) at sun.net.www.protocol.http.ntlm.NTLMAuthentication.setHeaders(NTLMAuthentication.java:225) at sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2114) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnector.connect(HttpConnector.java:113) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.establishConnection(HttpConnectorMultiplexer.java:301) at com.google.devtools.build.lib.bazel.repository.downloader.HttpConnectorMultiplexer.connect(HttpConnectorMultiplexer.java:126) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:221) at com.google.devtools.build.lib.bazel.repository.downloader.HttpDownloader.download(HttpDownloader.java:131) at com.google.devtools.build.lib.bazel.repository.NewHttpArchiveFunction.fetch(NewHttpArchiveFunction.java:81) at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:188) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:382) ... 4 more root@ubuntu:/home/mark/envoy# ` the evn: `root@ubuntu:/home/mark/envoy# uname -a Linux ubuntu 4.10.0-19-generic #21-Ubuntu SMP Thu Apr 6 17:04:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux` `root@ubuntu:/home/mark/envoy# docker --version Docker version 1.6.2, build 7c8fca2`
non_test
build failed on newest master i cloned the newest master on my and i did ci run envoy docker sh ci do ci sh bazel dev it is seems downloaded a docker image and after that it is build fail my env is behind a proxy the log is below root ubuntu home mark envoy ci run envoy docker sh ci do ci sh bazel dev envoy srcdir source head is now at cleanup match not implemented gcovr excl line change building using cpus clang clang toolchain configured bazel fastbuild build with tests building test tmpdir defined output root default is build tmp and max idle secs default is warning ignoring http proxy in environment warning batch mode is deprecated please instead explicitly shut down your bazel server using the command bazel shutdown debug build tmp bazel bazel external io bazel rules go proto def bzl you no longer need to call proto register toolchains it does nothing unhandled exception thrown during build message unrecoverable error while evaluating node repository directory com github fmtlib fmt requested by nodes repository com github fmtlib fmt info elapsed time info processes failed build did not complete successfully packages loaded currently loading envoy ci prebuilt packages java lang runtimeexception unrecoverable error while evaluating node repository directory com github fmtlib fmt requested by nodes repository com github fmtlib fmt at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java at com google devtools build lib concurrent abstractqueuevisitor wrappedrunnable run abstractqueuevisitor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang nullpointerexception at com sun security ntlm client client java at sun net at sun net at sun net at sun net at sun net at com google devtools build lib bazel repository downloader httpconnector connect httpconnector java at com google devtools build lib bazel repository downloader httpconnectormultiplexer establishconnection httpconnectormultiplexer java at com google devtools build lib bazel repository downloader httpconnectormultiplexer connect httpconnectormultiplexer java at com google devtools build lib bazel repository downloader httpdownloader download httpdownloader java at com google devtools build lib bazel repository downloader httpdownloader download httpdownloader java at com google devtools build lib bazel repository newhttparchivefunction fetch newhttparchivefunction java at com google devtools build lib rules repository repositorydelegatorfunction compute repositorydelegatorfunction java at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java more java lang runtimeexception unrecoverable error while evaluating node repository directory com github fmtlib fmt requested by nodes repository com github fmtlib fmt at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java at com google devtools build lib concurrent abstractqueuevisitor wrappedrunnable run abstractqueuevisitor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang nullpointerexception at com sun security ntlm client client java at sun net at sun net at sun net at sun net at sun net at com google devtools 
build lib bazel repository downloader httpconnector connect httpconnector java at com google devtools build lib bazel repository downloader httpconnectormultiplexer establishconnection httpconnectormultiplexer java at com google devtools build lib bazel repository downloader httpconnectormultiplexer connect httpconnectormultiplexer java at com google devtools build lib bazel repository downloader httpdownloader download httpdownloader java at com google devtools build lib bazel repository downloader httpdownloader download httpdownloader java at com google devtools build lib bazel repository newhttparchivefunction fetch newhttparchivefunction java at com google devtools build lib rules repository repositorydelegatorfunction compute repositorydelegatorfunction java at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java more remote logging disabled for testing forcing abrupt shutdown com google devtools build lib util loggingutil logtoremote bazel crashed with args build startup time binary path usr bin bazel rc source client default override common isatty default override common terminal columns rc source source ci tools bazel rc rc source etc bazel bazelrc default override build clang msan define default override build clang msan envoy config msan default override build clang msan copt default override build clang msan fsanitize memory default override build clang msan linkopt default override build clang msan fsanitize memory default override build clang msan define default override build clang msan tcmalloc disabled default override build clang msan copt default override build clang msan fsanitize memory track origins default override test test env heapcheck normal default override test test env pprof path default override build clang asan define default override build clang asan envoy config asan default override build clang asan copt default override build clang asan d sanitize address default override build clang asan copt default override build clang asan fsanitize address undefined default override build clang asan linkopt default override build clang asan fsanitize address undefined default override build clang asan copt default override build clang asan fno sanitize vptr default override build clang asan linkopt default override build clang asan fno sanitize vptr default override build clang asan copt default override build clang asan fno sanitize recover all default override build clang asan linkopt default override build clang asan ldl default override build clang asan define default override build clang asan tcmalloc disabled default override build clang asan build tag filters no asan default override build clang asan test tag filters no asan default override build clang asan define default override build clang asan signal trace disabled default override build clang asan copt default override build clang asan daddress sanitizer default override build clang asan test env asan symbolizer path default override build clang tsan define default override build clang tsan envoy config tsan default override build clang tsan copt default override build clang tsan fsanitize thread default override build clang tsan linkopt default override build clang tsan fsanitize thread default override build clang tsan define default override build clang tsan tcmalloc disabled default override build asan define default override build asan envoy config asan default override build asan copt default override build asan fsanitize address undefined 
default override build asan linkopt default override build asan fsanitize address undefined default override build asan copt default override build asan fno sanitize vptr default override build asan linkopt default override build asan fno sanitize vptr default override build asan linkopt default override build asan ldl default override build asan define default override build asan tcmalloc disabled default override build asan build tag filters no asan default override build asan test tag filters no asan default override build asan define default override build asan signal trace disabled default override build asan copt default override build asan daddress sanitizer default override build workspace status command bazel get workspace status client cwd source ci strategy genrule standalone spawn strategy standalone verbose failures package path workspace source action env home action env pythonuserbase jobs show task finish c fastbuild source exe envoy static java lang runtimeexception unrecoverable error while evaluating node repository directory com github fmtlib fmt requested by nodes repository com github fmtlib fmt at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java at com google devtools build lib concurrent abstractqueuevisitor wrappedrunnable run abstractqueuevisitor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang nullpointerexception at com sun security ntlm client client java at sun net at sun net at sun net at sun net at sun net at com google devtools build lib bazel repository downloader httpconnector connect httpconnector java at com google devtools build lib bazel repository downloader httpconnectormultiplexer establishconnection httpconnectormultiplexer java at com google devtools build lib bazel repository downloader httpconnectormultiplexer connect httpconnectormultiplexer java at com google devtools build lib bazel repository downloader httpdownloader download httpdownloader java at com google devtools build lib bazel repository downloader httpdownloader download httpdownloader java at com google devtools build lib bazel repository newhttparchivefunction fetch newhttparchivefunction java at com google devtools build lib rules repository repositorydelegatorfunction compute repositorydelegatorfunction java at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java more root ubuntu home mark envoy the evn root ubuntu home mark envoy uname a linux ubuntu generic ubuntu smp thu apr utc gnu linux root ubuntu home mark envoy docker version docker version build
0
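The crash in the record above ultimately comes from Bazel's Java HTTP downloader failing to get through an NTLM proxy (the NullPointerException in sun.security.ntlm), so no external archive such as com_github_fmtlib_fmt can be fetched. A quick, hedged way to check whether the proxy is usable at all from the build host is a small Python probe; the fallback proxy URL below is purely a placeholder, and an NTLM proxy that requires authentication will still fail, which is consistent with the stack trace.

```python
# Minimal proxy connectivity probe (illustrative; not part of the Envoy build).
# The fallback proxy URL is a placeholder/assumption.
import os
import urllib.request

proxy = os.environ.get("https_proxy", "http://proxy.example.com:3128")  # assumption
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": proxy, "https": proxy})
)

try:
    # Bazel's downloader needs HTTPS fetches like this one to succeed.
    with opener.open("https://github.com/fmtlib/fmt", timeout=10) as resp:
        print("proxy OK, HTTP status", resp.status)
except Exception as exc:
    print("proxy check failed:", exc)
```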
213,285
16,508,260,209
IssuesEvent
2021-05-25 22:31:39
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
opened
Allow comma as decimal point for custom tipping
OS/Desktop QA/Test-Plan-Specified QA/Yes enhancement feature/rewards
Follow up to `Add Custom Tipping Amount feature (Phase 1)` #15006 In some countries decimal point is represented by a comma rather by a period. Example: Poland ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. clean profile on staging 1. enable rewards 1. claim grant 1. Open `https://laurenwags.github.io` 1. Use custom tip amount 1. Enter `3,5` using a comma ## Actual result: <!--Please add screenshots if needed--> Unable to use comma as decimal point when entering tip amount ## Expected result: Able to use comma as decimal point when entering tip amount ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> Easily reproduced ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.25.66 Chromium: 91.0.4472.70&nbsp;(Official Build)&nbsp;(64-bit) -- | -- Revision | fe095368270a32c92959403754bf6fd357dd9953-refs/branch-heads/4472@{#1172} OS | Ubuntu 18.04 LTS cc @brave/legacy_qa @rebron @zenparsing @Miyayes
1.0
Allow comma as decimal point for custom tipping - Follow up to `Add Custom Tipping Amount feature (Phase 1)` #15006 In some countries decimal point is represented by a comma rather by a period. Example: Poland ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. clean profile on staging 1. enable rewards 1. claim grant 1. Open `https://laurenwags.github.io` 1. Use custom tip amount 1. Enter `3,5` using a comma ## Actual result: <!--Please add screenshots if needed--> Unable to use comma as decimal point when entering tip amount ## Expected result: Able to use comma as decimal point when entering tip amount ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> Easily reproduced ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.25.66 Chromium: 91.0.4472.70&nbsp;(Official Build)&nbsp;(64-bit) -- | -- Revision | fe095368270a32c92959403754bf6fd357dd9953-refs/branch-heads/4472@{#1172} OS | Ubuntu 18.04 LTS cc @brave/legacy_qa @rebron @zenparsing @Miyayes
test
allow comma as decimal point for custom tipping follow up to add custom tipping amount feature phase in some countries decimal point is represented by a comma rather by a period example poland steps to reproduce clean profile on staging enable rewards claim grant open use custom tip amount enter using a comma actual result unable to use comma as decimal point when entering tip amount expected result able to use comma as decimal point when entering tip amount reproduces how often easily reproduced brave version brave version info brave chromium nbsp official build nbsp bit revision refs branch heads os ubuntu lts cc brave legacy qa rebron zenparsing miyayes
1
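The Brave request above is, at heart, about accepting a comma as the decimal separator when the user types a custom tip such as `3,5`. A minimal sketch of separator-tolerant parsing, not Brave's actual implementation:

```python
# Illustrative only: accept both "3.5" and "3,5" as custom tip amounts.
from decimal import Decimal, InvalidOperation
from typing import Optional

def parse_tip_amount(raw: str) -> Optional[Decimal]:
    """Treat ',' and '.' interchangeably as the decimal separator."""
    normalized = raw.strip().replace(",", ".")
    try:
        value = Decimal(normalized)
    except InvalidOperation:
        return None
    return value if value > 0 else None

assert parse_tip_amount("3,5") == Decimal("3.5")
assert parse_tip_amount("3.5") == Decimal("3.5")
assert parse_tip_amount("abc") is None
```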
104,049
8,959,136,876
IssuesEvent
2019-01-27 19:59:03
moment/moment
https://api.github.com/repos/moment/moment
closed
2 tests failed. locale:el:calendar last week (462.7) locale:zh-cn:calendar last week (2019.1)
DST Unit Test Failed
### Client info ``` Date String : Wed Oct 21 2015 19:55:54 GMT-0200 (E. South America Daylight Time) Locale String : 10/21/2015, 7:55:54 PM Offset : 120 User Agent : Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36 ``` # ### locale:el:calendar last week (462.7) Today - 3 days beginning of day ``` javascript // Expected την προηγούμενη Κυριακή στις 1:00 ΠΜ // Actual την προηγούμενη Κυριακή στη 1:00 ΠΜ "την προηγούμενη Κυριακή στη 1:00 ΠΜ" === "την προηγούμενη Κυριακή στις 1:00 ΠΜ" ``` # ### locale:zh-cn:calendar last week (2019.1) Monday - 1 days next week ``` javascript // Expected 上周日凌晨12点整 // Actual 上周日凌晨1点整 "上周日凌晨1点整" === "上周日凌晨12点整" ```
1.0
2 tests failed. locale:el:calendar last week (462.7) locale:zh-cn:calendar last week (2019.1) - ### Client info ``` Date String : Wed Oct 21 2015 19:55:54 GMT-0200 (E. South America Daylight Time) Locale String : 10/21/2015, 7:55:54 PM Offset : 120 User Agent : Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36 ``` # ### locale:el:calendar last week (462.7) Today - 3 days beginning of day ``` javascript // Expected την προηγούμενη Κυριακή στις 1:00 ΠΜ // Actual την προηγούμενη Κυριακή στη 1:00 ΠΜ "την προηγούμενη Κυριακή στη 1:00 ΠΜ" === "την προηγούμενη Κυριακή στις 1:00 ΠΜ" ``` # ### locale:zh-cn:calendar last week (2019.1) Monday - 1 days next week ``` javascript // Expected 上周日凌晨12点整 // Actual 上周日凌晨1点整 "上周日凌晨1点整" === "上周日凌晨12点整" ```
test
tests failed locale el calendar last week locale zh cn calendar last week client info date string wed oct gmt e south america daylight time locale string pm offset user agent mozilla windows nt applewebkit khtml like gecko chrome safari locale el calendar last week today days beginning of day javascript expected την προηγούμενη κυριακή στις πμ actual την προηγούμενη κυριακή στη πμ την προηγούμενη κυριακή στη πμ την προηγούμενη κυριακή στις πμ locale zh cn calendar last week monday days next week javascript expected actual
1
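In the Greek failure above, the expected and actual strings differ only in the preposition before the hour ("στις" vs "στη"), a choice the el locale appears to make based on whether the (possibly DST-shifted) hour is 1. A tiny sketch of that hour-dependent choice, offered as an illustration rather than moment's actual code:

```python
# Illustrative: the Greek calendar strings above differ only in the
# preposition before the hour, which depends on the hour value.
def greek_hour_preposition(hour_12: int) -> str:
    # "στη" appears to be used for 1 o'clock and "στις" for other hours, so a
    # DST shift that moves the hour to or from 1 flips the expected word.
    return "στη" if hour_12 == 1 else "στις"

print(greek_hour_preposition(1), "1:00 ΠΜ")    # στη 1:00 ΠΜ
print(greek_hour_preposition(11), "11:00 ΠΜ")  # στις 11:00 ΠΜ
```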
4,616
2,610,134,193
IssuesEvent
2015-02-26 18:42:16
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Hedgehog gets trapped inside the explosive
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Set number of explosives to max and run Hedgewars. 2. Find out if there are three explosives on one level and if there is a hedgehog between them. 3. Shoot the barrel in the middle. What is the expected output? What do you see instead? Sometimes, the hedgehog doesn't bounce of the explosive, but he gets inside it. You can't move him out of the explosive (I haven't tried to fire a weapon in this state). What version of the product are you using? On what operating system? 0.9.13 on Windows XP SP 2 Please provide any additional information below. ``` ----- Original issue reported on code.google.com by `adibiaz...@gmail.com` on 27 Aug 2010 at 12:57 Attachments: * [Schowek01.bmp](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-18/comment-0/Schowek01.bmp)
1.0
Hedgehog gets trapped inside the explosive - ``` What steps will reproduce the problem? 1. Set number of explosives to max and run Hedgewars. 2. Find out if there are three explosives on one level and if there is a hedgehog between them. 3. Shoot the barrel in the middle. What is the expected output? What do you see instead? Sometimes, the hedgehog doesn't bounce of the explosive, but he gets inside it. You can't move him out of the explosive (I haven't tried to fire a weapon in this state). What version of the product are you using? On what operating system? 0.9.13 on Windows XP SP 2 Please provide any additional information below. ``` ----- Original issue reported on code.google.com by `adibiaz...@gmail.com` on 27 Aug 2010 at 12:57 Attachments: * [Schowek01.bmp](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-18/comment-0/Schowek01.bmp)
non_test
hedgehog gets trapped inside the explosive what steps will reproduce the problem set number of explosives to max and run hedgewars find out if there are three explosives on one level and if there is a hedgehog between them shoot the barrel in the middle what is the expected output what do you see instead sometimes the hedgehog doesn t bounce of the explosive but he gets inside it you can t move him out of the explosive i haven t tried to fire a weapon in this state what version of the product are you using on what operating system on windows xp sp please provide any additional information below original issue reported on code google com by adibiaz gmail com on aug at attachments
0
40,244
5,283,222,139
IssuesEvent
2017-02-07 20:51:04
institutotim/timtec
https://api.github.com/repos/institutotim/timtec
closed
bug: inscrição no curso
bug waiting test
Botões de "inscrever no curso" e "sair do curso" não estão funcionando. Comportamento esperado ao clicar no botão "continuar curso" (usuário deve ser levado as unidades quando clica nesse botão. Atualmente está bugado para usuários com permissão de admin. Como aluno funciona, mas como admin está bugado): https://youtu.be/77Txev66o-8 O botão "sair" deve remover o aluno do curso e voltar a página original, como se ele não estivesse inscrito. Logado como admin, ao clicar em "continuar curso" ou "sair" nada acontece: https://www.youtube.com/watch?v=queMuFuB74o&feature=youtu.be
1.0
bug: inscrição no curso - Botões de "inscrever no curso" e "sair do curso" não estão funcionando. Comportamento esperado ao clicar no botão "continuar curso" (usuário deve ser levado as unidades quando clica nesse botão. Atualmente está bugado para usuários com permissão de admin. Como aluno funciona, mas como admin está bugado): https://youtu.be/77Txev66o-8 O botão "sair" deve remover o aluno do curso e voltar a página original, como se ele não estivesse inscrito. Logado como admin, ao clicar em "continuar curso" ou "sair" nada acontece: https://www.youtube.com/watch?v=queMuFuB74o&feature=youtu.be
test
bug inscrição no curso botões de inscrever no curso e sair do curso não estão funcionando comportamento esperado ao clicar no botão continuar curso usuário deve ser levado as unidades quando clica nesse botão atualmente está bugado para usuários com permissão de admin como aluno funciona mas como admin está bugado o botão sair deve remover o aluno do curso e voltar a página original como se ele não estivesse inscrito logado como admin ao clicar em continuar curso ou sair nada acontece
1
206,379
15,728,535,397
IssuesEvent
2021-03-29 13:55:30
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
closed
[YCQL][Test] Make sure to close a Java driver-side cluster
area/ycql kind/bug kind/enhancement kind/failing-test
A `Cluster` object in our Java driver represents what a driver knows about a cluster, and its instances are used to construct `Session`s to be used for communicating with an actual cluster. However, what's not obvious is that `Cluster` is actually stateful and needs to be managed and closed along with `Session` - otherwise it'll survive between tests, maintaining control connections. Most places were fixed in #5922, but `BaseAuthenticationCQLTest` remains.
1.0
[YCQL][Test] Make sure to close a Java driver-side cluster - A `Cluster` object in our Java driver represents what a driver knows about a cluster, and its instances are used to construct `Session`s to be used for communicating with an actual cluster. However, what's not obvious is that `Cluster` is actually stateful and needs to be managed and closed along with `Session` - otherwise it'll survive between tests, maintaining control connections. Most places were fixed in #5922, but `BaseAuthenticationCQLTest` remains.
test
make sure to close a java driver side cluster a cluster object in our java driver represents what a driver knows about a cluster and its instances are used to construct session s to be used for communicating with an actual cluster however what s not obvious is that cluster is actually stateful and needs to be managed and closed along with session otherwise it ll survive between test maintaining control connections most places were fixed in but baseauthenticationcqltest remains
1
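The yugabyte record above is a resource-management point: the driver-side `Cluster` keeps control connections, so it has to be closed along with every `Session` it produced. A framework-neutral sketch of that pairing, with a fake driver standing in for the real Java one (the real API differs):

```python
# Illustrative pattern: always close the session *and* the cluster object,
# otherwise control connections survive between tests.
from contextlib import contextmanager

@contextmanager
def cluster_session(cluster_factory, contact_points):
    """Yield (cluster, session) and guarantee both are closed afterwards."""
    cluster = cluster_factory(contact_points)
    try:
        session = cluster.connect()
        try:
            yield cluster, session
        finally:
            session.close()
    finally:
        cluster.close()

# Fake driver objects so the sketch is runnable; hypothetical stand-ins only.
class FakeSession:
    def close(self):
        print("session closed")

class FakeCluster:
    def __init__(self, contact_points):
        self.contact_points = contact_points
    def connect(self):
        return FakeSession()
    def close(self):
        print("cluster closed")

with cluster_session(FakeCluster, ["127.0.0.1"]) as (cluster, session):
    pass  # run queries with `session` here
```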
132,752
12,517,035,818
IssuesEvent
2020-06-03 10:24:31
youseedk/dna
https://api.github.com/repos/youseedk/dna
closed
UX Guideline suggestion to DNA
documentation
Ideas to where a spinner can be used: - Spinner in Button (Daniel send you design) - Spinner with system information (Below) - Spinner inside input field - Heavy lookup - ( !Needs design ) Best practices: Unless the spinner is placed in a button then add a describtive text telling the user what is happening (See example Automatisk fejlsøgning) Try to always keep spinners in page and not in overlays If a spinner is triggered by a button, place the spinner in the button, and disable the button while the spinner is visible. If only a portion of a page is displaying new content or being updated, place the spinner in that part of the page. If you are unsure where to place the spinner, place it where you want the user's attention to be when loading is finished. Only show a spinner if the expected wait time is more than a second. Add a minimum of 100s of delay to mitigate lots of unnecessary spinners showing up at the same time. Spinner with system information example: Image
1.0
UX Guideline suggestion to DNA - Ideas to where a spinner can be used: - Spinner in Button (Daniel send you design) - Spinner with system information (Below) - Spinner inside input field - Heavy lookup - ( !Needs design ) Best practices: Unless the spinner is placed in a button then add a describtive text telling the user what is happening (See example Automatisk fejlsøgning) Try to always keep spinners in page and not in overlays If a spinner is triggered by a button, place the spinner in the button, and disable the button while the spinner is visible. If only a portion of a page is displaying new content or being updated, place the spinner in that part of the page. If you are unsure where to place the spinner, place it where you want the user's attention to be when loading is finished. Only show a spinner if the expected wait time is more than a second. Add a minimum of 100s of delay to mitigate lots of unnecessary spinners showing up at the same time. Spinner with system information example: Image
non_test
ux guideline suggestion to dna ideas to where a spinner can be used spinner in button daniel send you design spinner with system information below spinner inside input field heavy lookup needs design best practices unless the spinner is placed in a button then add a describtive text telling the user what is happening see example automatisk fejlsøgning try to always keep spinners in page and not in overlays if a spinner is triggered by a button place the spinner in the button and disable the button while the spinner is visible if only a portion of a page is displaying new content or being updated place the spinner in that part of the page if you are unsure where to place the spinner place it where you want the user s attention to be when loading is finished only show a spinner if the expected wait time is more than a second add a minimum of of delay to mitigate lots of unnecessary spinners showing up at the same time spinner with system information example image
0
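Two of the spinner rules in the record above (don't show a spinner for short waits, and delay it slightly so quick operations never flash one) can be captured in a small helper. An asyncio sketch, with the 100 ms delay taken as an assumption about the intended value:

```python
# Illustrative: delay showing a spinner so short operations never flash one.
import asyncio

async def with_spinner(operation, show, hide, delay: float = 0.1):
    """Run `operation`; call `show()` only if it is still running after `delay`."""
    task = asyncio.ensure_future(operation)
    try:
        await asyncio.wait_for(asyncio.shield(task), timeout=delay)
    except asyncio.TimeoutError:
        show()                      # still running: the spinner is justified
        try:
            return await task
        finally:
            hide()
    return task.result()

async def _demo():
    slow = asyncio.sleep(0.5)       # stands in for a slow lookup
    await with_spinner(slow, lambda: print("spinner on"), lambda: print("spinner off"))

asyncio.run(_demo())
```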
93,566
8,434,394,386
IssuesEvent
2018-10-17 10:03:33
DanRic/Bonus-idrico
https://api.github.com/repos/DanRic/Bonus-idrico
closed
Errore controllo CF
Test Interno
Ciao, sto controllando l'RdA 24232. Mi restituisce KO quando controlla lo stato della fornitura. Non dovrebbe essere così perchè: - data inizio agevolazione = 27-JUL-18 - data inizio fornitura = 28-FEB-96 Credo che l'errore sia in questa parte del codice: ```sql if p_rda.data_ini_agev <= sysdate then -- se allineamento non presente oppure N if nvl(p_rda.ALLINEAMENTO, 'N') = 'N' then -- si esegue il controllo data inizio fornitura (valorizzata) <= dataInizioAgevolazione if v_data_ini <= p_rda.data_ini_agev then cod_Err_SIU := 102; return 'KO'; end if; end if; end if; ```
1.0
Errore controllo CF - Ciao, sto controllando l'RdA 24232. Mi restituisce KO quando controlla lo stato della fornitura. Non dovrebbe essere così perchè: - data inizio agevolazione = 27-JUL-18 - data inizio fornitura = 28-FEB-96 Credo che l'errore sia in questa parte del codice: ```sql if p_rda.data_ini_agev <= sysdate then -- se allineamento non presente oppure N if nvl(p_rda.ALLINEAMENTO, 'N') = 'N' then -- si esegue il controllo data inizio fornitura (valorizzata) <= dataInizioAgevolazione if v_data_ini <= p_rda.data_ini_agev then cod_Err_SIU := 102; return 'KO'; end if; end if; end if; ```
test
errore controllo cf ciao sto controllando l rda mi restituisce ko quando controlla lo stato della fornitura non dovrebbe essere così perchè data inizio agevolazione jul data inizio fornitura feb credo che l errore sia in questa parte del codice sql if p rda data ini agev sysdate then se allineamento non presente oppure n if nvl p rda allineamento n n then si esegue il controllo data inizio fornitura valorizzata datainizioagevolazione if v data ini p rda data ini agev then cod err siu return ko end if end if end if
1
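The report above (in Italian) says RdA 24232 is rejected even though the supply started in 1996, well before the subsidy start in 2018; the quoted PL/SQL returns KO when the supply start is less than or equal to the subsidy start, which is exactly the reporter's case, so the comparison looks inverted relative to the rule the reporter expects. A hedged sketch of the check as the reporter seems to intend it, with hypothetical field names and without the ALLINEAMENTO branch:

```python
# Illustrative reading of the rule the reporter appears to expect:
# reject only when the supply started *after* the subsidy start date.
from datetime import date

def supply_check(supply_start: date, subsidy_start: date, today: date) -> str:
    """Return 'KO' (error 102 in the original snippet) only if the subsidy
    has begun and the supply started after it."""
    if subsidy_start <= today and supply_start > subsidy_start:
        return "KO"
    return "OK"

# The reported case: supply started 1996-02-28, subsidy starts 2018-07-27.
print(supply_check(date(1996, 2, 28), date(2018, 7, 27), date(2018, 10, 1)))  # OK
```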
131,747
10,708,720,983
IssuesEvent
2019-10-24 20:20:28
flutter/flutter
https://api.github.com/repos/flutter/flutter
opened
text_field_test.dart force-press handling leaks state between tests.
a: tests framework
If you run the test "tap on non-force-press-supported devices work (iOS)" in `material/text_field_test.dart`, twice in a row (i.e. just duplicate the `testWidgets` block), it will fail the second time with: ``` The following TestFailure object was thrown running a test: Expected: TextSelection:<TextSelection(baseOffset: 8, extentOffset: 8, affinity: TextAffinity.downstream, isDirectional: false)> Actual: TextSelection:<TextSelection(baseOffset: -1, extentOffset: -1, affinity: TextAffinity.downstream, isDirectional: false)> ``` indicating to me that there is some global state somewhere that isn't being reset between tests. (I looked into it a bit with no luck, but I didn't want to be distracted from what I was working on, so I decided to file this issue instead and add a TODO).
1.0
text_field_test.dart force-press handling leaks state between tests. - If you run the test "tap on non-force-press-supported devices work (iOS)" in `material/text_field_test.dart`, twice in a row (i.e. just duplicate the `testWidgets` block), it will fail the second time with: ``` The following TestFailure object was thrown running a test: Expected: TextSelection:<TextSelection(baseOffset: 8, extentOffset: 8, affinity: TextAffinity.downstream, isDirectional: false)> Actual: TextSelection:<TextSelection(baseOffset: -1, extentOffset: -1, affinity: TextAffinity.downstream, isDirectional: false)> ``` indicating to me that there is some global state somewhere that isn't being reset between tests. (I looked into it a bit with no luck, but I didn't want to be distracted from what I was working on, so I decided to file this issue instead and add a TODO).
test
text field test dart force press handling leaks state between tests if you run the test tap on non force press supported devices work ios in material text field test dart twice in a row i e just duplicate the testwidgets block it will fail the second time with the following testfailure object was thrown running a test expected textselection textselection baseoffset extentoffset affinity textaffinity downstream isdirectional false actual textselection textselection baseoffset extentoffset affinity textaffinity downstream isdirectional false indicating to me that there is some global state somewhere that isn t being reset between tests i looked into it a bit with no luck but i didn t want to be distracted from what i was working on so i decided to file this issue instead and add a todo
1
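The flutter record above is about global state surviving between tests, so a duplicated test sees a stale selection on its second run. The general remedy is to reset shared state around every test; a framework-neutral sketch using pytest-style fixtures (illustrative, not the Flutter test harness):

```python
# Illustrative: reset module-level state before each test so duplicated tests
# can't observe what a previous run left behind.
import pytest  # assumption: pytest is available

_force_press_supported = None   # stand-in for the leaking global

@pytest.fixture(autouse=True)
def _reset_globals():
    global _force_press_supported
    _force_press_supported = None      # clean slate before the test
    yield
    _force_press_supported = None      # and after it, for good measure

def test_tap_without_force_press_first():
    global _force_press_supported
    assert _force_press_supported is None
    _force_press_supported = False     # this test mutates the global

def test_tap_without_force_press_second():
    # passes only because the autouse fixture reset the global
    assert _force_press_supported is None
```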
30,691
4,646,637,309
IssuesEvent
2016-10-01 01:40:18
coreos/etcd
https://api.github.com/repos/coreos/etcd
closed
TestPublishRetry: len(action) = 1, want >= 2
area/testing
via semaphore (https://semaphoreci.com/coreos/etcd/branches/master/builds/1016) ``` --- FAIL: TestPublishRetry (0.01s) server_test.go:1245: len(action) = 1, want >= 2 2016-09-28 05:00:33.432745 I | etcdserver: setting up the initial cluster version to 2.0 2016-09-28 05:00:33.432786 I | etcdserver: skipped leadership transfer for single member cluster FAIL ```
1.0
TestPublishRetry: len(action) = 1, want >= 2 - via semaphore (https://semaphoreci.com/coreos/etcd/branches/master/builds/1016) ``` --- FAIL: TestPublishRetry (0.01s) server_test.go:1245: len(action) = 1, want >= 2 2016-09-28 05:00:33.432745 I | etcdserver: setting up the initial cluster version to 2.0 2016-09-28 05:00:33.432786 I | etcdserver: skipped leadership transfer for single member cluster FAIL ```
test
testpublishretry len action want via semaphore fail testpublishretry server test go len action want i etcdserver setting up the initial cluster version to i etcdserver skipped leadership transfer for single member cluster fail
1
9,561
29,810,887,208
IssuesEvent
2023-06-16 14:58:44
awslabs/aws-lambda-powertools-typescript
https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript
closed
Maintenance: fix post-release workflow
area/automation type/internal status/confirmed
### Summary The post-release workflow is executed after a new GitHub release has been published. Part of the workflow responsibility is to get all the closed issues labeled as `status/pending-release`, remove the label and announce the new release. The workflow however [is failing](https://github.com/awslabs/aws-lambda-powertools-typescript/actions/runs/5224710370/jobs/9433250666) and we need to fix it. ### Why is this needed? So that the maintainers have less manual tasks after each release. ### Which area does this relate to? Automation ### Solution _No response_ ### Acknowledgment - [X] This request meets [Powertools for AWS Lambda (TypeScript) Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Powertools for AWS Lambda languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.
1.0
Maintenance: fix post-release workflow - ### Summary The post-release workflow is executed after a new GitHub release has been published. Part of the workflow responsibility is to get all the closed issues labeled as `status/pending-release`, remove the label and announce the new release. The workflow however [is failing](https://github.com/awslabs/aws-lambda-powertools-typescript/actions/runs/5224710370/jobs/9433250666) and we need to fix it. ### Why is this needed? So that the maintainers have less manual tasks after each release. ### Which area does this relate to? Automation ### Solution _No response_ ### Acknowledgment - [X] This request meets [Powertools for AWS Lambda (TypeScript) Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Powertools for AWS Lambda languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/) ### Future readers Please react with 👍 and your use case to help us understand customer demand.
non_test
maintenance fix post release workflow summary the post release workflow is executed after a new github release has been published part of the workflow responsibility is to get all the closed issues labeled as status pending release remove the label and announce the new release the workflow however and we need to fix it why is this needed so that the maintainers have less manual tasks after each release which area does this relate to automation solution no response acknowledgment this request meets should this be considered in other powertools for aws lambda languages i e and future readers please react with 👍 and your use case to help us understand customer demand
0
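The failing workflow's job, as described above, is to find closed issues carrying the status/pending-release label, drop the label, and announce the release. A minimal sketch of those two GitHub REST calls; the token handling is simplified and pagination is omitted, and this is not the project's actual script:

```python
# Illustrative sketch of the post-release cleanup described above.
import os
from urllib.parse import quote

import requests  # assumption: requests is available

OWNER_REPO = "awslabs/aws-lambda-powertools-typescript"
LABEL = "status/pending-release"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def pending_release_issues():
    """Return closed issues still carrying the pending-release label."""
    url = f"https://api.github.com/repos/{OWNER_REPO}/issues"
    params = {"state": "closed", "labels": LABEL, "per_page": 100}
    resp = requests.get(url, headers=HEADERS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

def remove_pending_label(issue_number: int) -> None:
    """Remove the label; a 404 just means it was already gone."""
    url = (f"https://api.github.com/repos/{OWNER_REPO}/issues/"
           f"{issue_number}/labels/{quote(LABEL, safe='')}")
    resp = requests.delete(url, headers=HEADERS, timeout=30)
    if resp.status_code not in (200, 404):
        resp.raise_for_status()

if __name__ == "__main__":
    for issue in pending_release_issues():
        remove_pending_label(issue["number"])
```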
223,964
24,759,922,176
IssuesEvent
2022-10-21 22:12:04
ilan-WS/m3
https://api.github.com/repos/ilan-WS/m3
opened
CVE-2022-37598 (High) detected in multiple libraries
security vulnerability
## CVE-2022-37598 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-3.4.10.tgz</b>, <b>uglify-js-3.6.0.tgz</b>, <b>uglify-js-3.7.3.tgz</b></p></summary> <p> <details><summary><b>uglify-js-3.4.10.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/uglify-js</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - html-webpack-plugin-2.29.0.tgz - html-minifier-3.5.21.tgz - :x: **uglify-js-3.4.10.tgz** (Vulnerable Library) </details> <details><summary><b>uglify-js-3.6.0.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.6.0.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.6.0.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/uglify-js</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - sw-precache-webpack-plugin-0.11.3.tgz - :x: **uglify-js-3.6.0.tgz** (Vulnerable Library) </details> <details><summary><b>uglify-js-3.7.3.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/uglify-js</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - jest-20.0.4.tgz - jest-cli-20.0.4.tgz - istanbul-api-1.3.7.tgz - istanbul-reports-1.5.1.tgz - handlebars-4.5.3.tgz - :x: **uglify-js-3.7.3.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js. <p>Publish Date: 2022-10-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-20</p> <p>Fix Resolution (uglify-js): 3.13.10</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.1</p><p>Fix Resolution (uglify-js): 3.13.10</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.1</p><p>Fix Resolution (uglify-js): 3.13.10</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.1</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2022-37598 (High) detected in multiple libraries - ## CVE-2022-37598 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-3.4.10.tgz</b>, <b>uglify-js-3.6.0.tgz</b>, <b>uglify-js-3.7.3.tgz</b></p></summary> <p> <details><summary><b>uglify-js-3.4.10.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/uglify-js</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - html-webpack-plugin-2.29.0.tgz - html-minifier-3.5.21.tgz - :x: **uglify-js-3.4.10.tgz** (Vulnerable Library) </details> <details><summary><b>uglify-js-3.6.0.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.6.0.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.6.0.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/uglify-js</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - sw-precache-webpack-plugin-0.11.3.tgz - :x: **uglify-js-3.6.0.tgz** (Vulnerable Library) </details> <details><summary><b>uglify-js-3.7.3.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/uglify-js</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - jest-20.0.4.tgz - jest-cli-20.0.4.tgz - istanbul-api-1.3.7.tgz - istanbul-reports-1.5.1.tgz - handlebars-4.5.3.tgz - :x: **uglify-js-3.7.3.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js. <p>Publish Date: 2022-10-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-20</p> <p>Fix Resolution (uglify-js): 3.13.10</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.1</p><p>Fix Resolution (uglify-js): 3.13.10</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.1</p><p>Fix Resolution (uglify-js): 3.13.10</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.1</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_test
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries uglify js tgz uglify js tgz uglify js tgz uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules uglify js dependency hierarchy react scripts tgz root library html webpack plugin tgz html minifier tgz x uglify js tgz vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules uglify js dependency hierarchy react scripts tgz root library sw precache webpack plugin tgz x uglify js tgz vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules uglify js dependency hierarchy react scripts tgz root library jest tgz jest cli tgz istanbul api tgz istanbul reports tgz handlebars tgz x uglify js tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in function defnode in ast js in mishoo uglifyjs via the name variable in ast js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution uglify js direct dependency fix resolution react scripts fix resolution uglify js direct dependency fix resolution react scripts fix resolution uglify js direct dependency fix resolution react scripts check this box to open an automated fix pr
0
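The advisory in the record above is resolved by getting uglify-js to 3.13.10 or later everywhere it appears as a transitive dependency. A quick way to see which resolved versions a lockfile actually pins is to walk package-lock.json; this sketch assumes the older npm lockfile layout with a nested "dependencies" map, and the lockfile path is an assumption based on the package.json path mentioned above.

```python
# Illustrative: list every uglify-js version pinned in a package-lock.json
# (assumes the older nested "dependencies" lockfile layout).
import json
from pathlib import Path

def find_versions(lock_path: str, package: str = "uglify-js"):
    tree = json.loads(Path(lock_path).read_text())
    found = []

    def walk(deps, trail):
        for name, info in (deps or {}).items():
            here = trail + [name]
            if name == package and "version" in info:
                found.append((info["version"], " > ".join(here)))
            walk(info.get("dependencies"), here)

    walk(tree.get("dependencies"), [])
    return found

for version, path in find_versions("src/ctl/ui/package-lock.json"):  # assumed path
    print(version, "via", path)
```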
746,077
26,014,391,605
IssuesEvent
2022-12-21 06:52:29
Rehachoudhary0/hotel_testing
https://api.github.com/repos/Rehachoudhary0/hotel_testing
closed
🐛 Bug Report:Traveler > booking has dates issue
bug app High priority
### 👟 Reproduction steps When the user selects a location from the previous-search list, the check-in and check-out dates are auto-selected with the same date, as shown in the video; because the dates are identical, nothing is displayed. https://user-images.githubusercontent.com/85510636/208237712-885bba27-b7a8-4fb6-a308-4f11be48b501.mp4 ### 👍 Expected behavior Dates restored from a recent search should span at least one night. ### 👎 Actual Behavior . ### ☎️ Log-in number For all ### 📲 User Type Traveller - Primary ### 🎲 App version Version 22.12.12+01 ### 💻 Operating system Android ### 👀 Have you spent some time to check if this issue has been raised before? - [X] I checked and didn't find similar issue ### 🏢 Have you read the Code of Conduct? - [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md)
1.0
🐛 Bug Report:Traveler > booking has dates issue - ### 👟 Reproduction steps When the user selects a location from the previous-search list, the check-in and check-out dates are auto-selected with the same date, as shown in the video; because the dates are identical, nothing is displayed. https://user-images.githubusercontent.com/85510636/208237712-885bba27-b7a8-4fb6-a308-4f11be48b501.mp4 ### 👍 Expected behavior Dates restored from a recent search should span at least one night. ### 👎 Actual Behavior . ### ☎️ Log-in number For all ### 📲 User Type Traveller - Primary ### 🎲 App version Version 22.12.12+01 ### 💻 Operating system Android ### 👀 Have you spent some time to check if this issue has been raised before? - [X] I checked and didn't find similar issue ### 🏢 Have you read the Code of Conduct? - [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md)
non_test
🐛 bug report traveler booking has dates issue 👟 reproduction steps when user going to select location from privious search list then this is auto selection is checked in date or checkout date are same as shows in video becouse of same date there is nothing showing 👍 expected behavior should be selected date from recent search is atleast one night 👎 actual behavior ☎️ log in number for all 📲 user type traveller primary 🎲 app version version 💻 operating system android 👀 have you spent some time to check if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the code of conduct i have read the
0
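The booking report above boils down to a validation rule: a stay restored from a recent search must span at least one night, so an equal check-in and check-out date should be corrected or rejected. A small sketch of that rule, illustrative rather than the app's actual code:

```python
# Illustrative: ensure a restored search always spans at least one night.
from datetime import date, timedelta
from typing import Tuple

def normalize_stay(check_in: date, check_out: date) -> Tuple[date, date]:
    """Push check-out forward so the stay is never shorter than one night."""
    if check_out <= check_in:
        check_out = check_in + timedelta(days=1)
    return check_in, check_out

# Same-day dates restored from a previous search get one night added.
assert normalize_stay(date(2022, 12, 18), date(2022, 12, 18)) == (
    date(2022, 12, 18),
    date(2022, 12, 19),
)
```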
27,714
12,694,771,400
IssuesEvent
2020-06-22 07:14:19
microsoft/azure-tools-for-java
https://api.github.com/repos/microsoft/azure-tools-for-java
closed
[IntelliJ] stuck during deployment of Java web app
IntelliJ app-service need more info try-to-reproduce
Azure Toolkit for IntelliJ Version: v3.22.0-2019.1 IntelliJ version: IDEA 2019.1.2 (Community Edition) Build #IC-191.7141.44, built on May 7, 2019 JRE: 1.8.0_202-release-1483-b49 amd64 JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o Windows 10 10.0 Repro steps: - Run Azure deployment from IntelliJ ![image](https://user-images.githubusercontent.com/6827784/58329129-1c514c80-7e34-11e9-9fe8-cb40ac2ce36e.png) - Deployment is stuck ![image](https://user-images.githubusercontent.com/6827784/58329186-3e4acf00-7e34-11e9-8c7c-9cc87896551b.png) - By checking via FTP, the file is created with size 0 bytes ![image](https://user-images.githubusercontent.com/6827784/58329220-53bff900-7e34-11e9-869a-7c6885bed8a4.png)
1.0
[IntelliJ] stuck during deployment of Java web app - Azure Toolkit for IntelliJ Version: v3.22.0-2019.1 IntelliJ version: IDEA 2019.1.2 (Community Edition) Build #IC-191.7141.44, built on May 7, 2019 JRE: 1.8.0_202-release-1483-b49 amd64 JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o Windows 10 10.0 Repro steps: - Run Azure deployment from IntelliJ ![image](https://user-images.githubusercontent.com/6827784/58329129-1c514c80-7e34-11e9-9fe8-cb40ac2ce36e.png) - Deployment is stuck ![image](https://user-images.githubusercontent.com/6827784/58329186-3e4acf00-7e34-11e9-8c7c-9cc87896551b.png) - By checking via FTP, the file is created with size 0 bytes ![image](https://user-images.githubusercontent.com/6827784/58329220-53bff900-7e34-11e9-869a-7c6885bed8a4.png)
non_test
stuck during deployment of java web app azure toolkit for intellij version intellij version idea community edition build ic built on may jre release jvm openjdk bit server vm by jetbrains s r o windows repro steps run azure deployment from intellij deployment is stuck by checking via ftp the file is created with size bytes
0
27,557
12,641,948,062
IssuesEvent
2020-06-16 07:17:07
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
[APM] Service maps fails to load, displays errors when there are no ML jobs for apm
Feature:Service Maps Team:apm [zube]: In Progress bug v7.8.1 v7.9.0
Service maps is failing to load because it doesn’t handle the case where there are no ML jobs for APM. The request for 'apm' ml jobs throws with a 404, and is caught at the router level, returning a 500 error for the service maps api call. This should be handled by returning an empty anomalies array instead of throwing. One workaround is to enable ML anomaly detection on at least 1 service in APM.
1.0
[APM] Service maps fails to load, displays errors when there are no ML jobs for apm - Service maps is failing to load because it doesn’t handle the case where there are no ML jobs for APM. The request for 'apm' ml jobs throws with a 404, and is caught at the router level, returning a 500 error for the service maps api call. This should be handled by returning an empty anomalies array instead of throwing. One workaround is to enable ML anomaly detection on at least 1 service in APM.
non_test
service maps fails to load displays errors when there are no ml jobs for apm service maps is failing to load because it doesn’t handle the case where there are no ml jobs for apm the request for apm ml jobs throws with a and is caught at the router level returning a error for the service maps api call this should be handled by returning an empty anomalies array instead of throwing one workaround is to enable ml anomaly detection on at least service in apm
0
320,262
23,805,409,500
IssuesEvent
2022-09-04 00:32:12
Ant-Shell/tainted-peaches
https://api.github.com/repos/Ant-Shell/tainted-peaches
closed
[Documentation ] Add README page
documentation
**Please Describe The Problem To Be Solved** Need to add a robust README page that describes the overall project, technologies used, learning goals, wins, challenges, etc.
1.0
[Documentation ] Add README page - **Please Describe The Problem To Be Solved** Need to add a robust README page that describes the overall project, technologies used, learning goals, wins, challenges, etc.
non_test
add readme page please describe the problem to be solved need to add a robust readme page that describes the overall project technologies used learning goals wins challenges etc
0
124,380
10,310,366,793
IssuesEvent
2019-08-29 15:02:02
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
opened
"Precondition failed: UID in precondition" flakes
kind/failing-test kind/flake
**Which jobs are failing**: pull-kubernetes-integration **Which test(s) are failing**: Not showing up in triage, oddly, but seen in individual failures: https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/80485/pull-kubernetes-integration/1167014135116861443/ https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1167057042045669376 https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1167053904014217218 https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1167014135116861443 https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1166986078263971840 not sure if this specific to custom resources, but that's where I'm seeing the flakes /sig api-machinery /priority important-soon /area custom-resources
1.0
"Precondition failed: UID in precondition" flakes - **Which jobs are failing**: pull-kubernetes-integration **Which test(s) are failing**: Not showing up in triage, oddly, but seen in individual failures: https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/80485/pull-kubernetes-integration/1167014135116861443/ https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1167057042045669376 https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1167053904014217218 https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1167014135116861443 https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1166986078263971840 not sure if this specific to custom resources, but that's where I'm seeing the flakes /sig api-machinery /priority important-soon /area custom-resources
test
precondition failed uid in precondition flakes which jobs are failing pull kubernetes integration which test s are failing not showing up in triage oddly but seen in individual failures not sure if this specific to custom resources but that s where i m seeing the flakes sig api machinery priority important soon area custom resources
1
171,424
14,289,919,155
IssuesEvent
2020-11-23 20:02:14
cornellius-gp/gpytorch
https://api.github.com/repos/cornellius-gp/gpytorch
opened
Learning independent length scales for n-D input in Approximate GP
documentation
Hi Folks! After looking around in the documentation and trying various different combinations of Kernel configurations, I simply have to ask :heart: Setup: - 3D training data with significantly different length scales in the individual dimensions - Approximate GP a la ```python distribution = CholeskyVariationalDistribution(inducing.shape[0]) strategy = VariationalStrategy(self, inducing, distribution, learn_inducing_locations=True) ``` - Kernel configuration ```python self.mean_module = gpytorch.means.ConstantMean() # also tried without the wrapping ScaleKernel and passing as well as various different length scale priors. self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=3)) ``` In my experiments the lengthscale vector always changed simultaneously in all 3 dimensions. Also in some of the experiments it converged towards `ln(2)=0.69314718` which seemed very suspicious to me. I saw many tutorials where the length scale of the nested base kernel converges during training iteration. However I think none of those examples were using the approximate GP setup. - Can anyone point me to an approximate GP setup with independent lengthscales? - Am I completely wrong and this is simply not supported in approximate GP? Do I have to define them in advance? - What (probably obvious) other thing could I do wrong? Any hints are welcome :+1: - If this is in fact supposed to work, I will gladly provide a minimal example with my "non-working" setup... Kind regards and keep up your awesome project! Martin _BTW: I did not find any mailing list or slack link where I could have asked this. If there is anything like this, maybe it would be worth putting it to the readme :-)_
1.0
Learning independent length scales for n-D input in Approximate GP - Hi Folks! After looking around in the documentation and trying various different combinations of Kernel configurations, I simply have to ask :heart: Setup: - 3D training data with significantly different length scales in the individual dimensions - Approximate GP a la ```python distribution = CholeskyVariationalDistribution(inducing.shape[0]) strategy = VariationalStrategy(self, inducing, distribution, learn_inducing_locations=True) ``` - Kernel configuration ```python self.mean_module = gpytorch.means.ConstantMean() # also tried without the wrapping ScaleKernel and passing as well as various different length scale priors. self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=3)) ``` In my experiments the lengthscale vector always changed simultaneously in all 3 dimensions. Also in some of the experiments it converged towards `ln(2)=0.69314718` which seemed very suspicious to me. I saw many tutorials where the length scale of the nested base kernel converges during training iteration. However I think none of those examples were using the approximate GP setup. - Can anyone point me to an approximate GP setup with independent lengthscales? - Am I completely wrong and this is simply not supported in approximate GP? Do I have to define them in advance? - What (probably obvious) other thing could I do wrong? Any hints are welcome :+1: - If this is in fact supposed to work, I will gladly provide a minimal example with my "non-working" setup... Kind regards and keep up your awesome project! Martin _BTW: I did not find any mailing list or slack link where I could have asked this. If there is anything like this, maybe it would be worth putting it to the readme :-)_
non_test
learning independent length scales for n d input in approximate gp hi folks after looking around in the documentation and trying various different combinations of kernel configurations i simply have to ask heart setup training data with significantly different length scales in the individual dimensions approximate gp a la python distribution choleskyvariationaldistribution inducing shape strategy variationalstrategy self inducing distribution learn inducing locations true kernel configuration python self mean module gpytorch means constantmean also tried without the wrapping scalekernel and passing as well as various different length scale priors self covar module gpytorch kernels scalekernel gpytorch kernels rbfkernel ard num dims in my experiments the lengthscale vector always changed simultaneously in all dimensions also in some of the experiments it converged towards ln which seemed very suspicious to me i saw many tutorials where the length scale of the nested base kernel converges during training iteration however i think none of those examples were using the approximate gp setup can anyone point me to an approximate gp setup with independent lengthscales am i completely wrong and this is simply not supported in approximate gp do i have to define them in advance what probably obvious other thing could i do wrong any hints are welcome if this is in fact supposed to work i will gladly provide a minimal example with my non working setup kind regards and keep up your awesome project martin btw i did not find any mailing list or slack link where i could have asked this if there is anything like this maybe it would be worth putting it to the readme
0
832,097
32,072,174,581
IssuesEvent
2023-09-25 08:43:58
ImbueNetwork/imbue
https://api.github.com/repos/ImbueNetwork/imbue
opened
Abstaining votes
enhancement Priority | Medium
Include an abstain vote into pallet-proposals. How should an abstain vote affect the system? ```rust // Changing the boolean we use to vote into something as simple as: enum Vote { Yay, Nay, Abstain, } ```
1.0
Abstaining votes - Include an abstain vote into pallet-proposals. How should an abstain vote affect the system? ```rust // Changing the boolean we use to vote into something as simple as: enum Vote { Yay, Nay, Abstain, } ```
non_test
abstaining votes include an abstain vote into pallet proposals how should an abstain vote affect the system rust changing the boolean we use to vote into something as simple as enum vote yay nay abstain
0
103,154
4,164,952,340
IssuesEvent
2016-06-19 05:51:44
fossasia/open-event-orga-server
https://api.github.com/repos/fossasia/open-event-orga-server
closed
Create Event Wizard Step 3: Fields Should not be automatically created/Cannot delete
bug Priority: High UI Wizard
After creating the event, I want to edit it. When I go to Step 3 of the wizard, it shows a new row, which I cannot delete. Expected behaviour: * [x] Only show new item, when I click `+` button * [x] Show a `+` and `-` button on every item, so user can delete it if desired ![screenshot from 2016-06-19 02 52 16](https://cloud.githubusercontent.com/assets/1583873/16174605/5232a116-35c9-11e6-80f5-1f37ad7e9812.png)
1.0
Create Event Wizard Step 3: Fields Should not be automatically created/Cannot delete - After creating the event, I want to edit it. When I go to Step 3 of the wizard, it shows a new row, which I cannot delete. Expected behaviour: * [x] Only show new item, when I click `+` button * [x] Show a `+` and `-` button on every item, so user can delete it if desired ![screenshot from 2016-06-19 02 52 16](https://cloud.githubusercontent.com/assets/1583873/16174605/5232a116-35c9-11e6-80f5-1f37ad7e9812.png)
non_test
create event wizard step fields should not be automatically created cannot delete after creating the event i want to edit it when i go to step of the wizard it shows a new row which i cannot delete expected behaviour only show new item when i click button show a and button on every item so user can delete it if desired
0
70,647
7,195,053,377
IssuesEvent
2018-02-04 13:13:13
JoramD0/JSDF_Mission_Files
https://api.github.com/repos/JoramD0/JSDF_Mission_Files
closed
Replace vehicleService
Enhancement Mission template Needs testing
Remove vehicleService in favour of the already existing ammo/repair/fuel containers. - [x] Remove vehicleService - [x] See if all ace logi stuff reaches entire pad - [x] Make ace containers indestructible - [x] Remove cargo spaces from ace containers - [x] Make the repair pad area a repair facility - [x] Extend fuel hoses - [x] Change cba_userconfig with new values - [x] Make objects on repair pad unable to be slingloaded - [x] Figure out a way of extending rearm radius for ammocontainer // THIS IS NOT POSSIBLE - [x] Test all of the above in MP - [x] Write in-game documentation for it - [x] Apply to all missionfiles
1.0
Replace vehicleService - Remove vehicleService in favour of the already existing ammo/repair/fuel containers. - [x] Remove vehicleService - [x] See if all ace logi stuff reaches entire pad - [x] Make ace containers indestructible - [x] Remove cargo spaces from ace containers - [x] Make the repair pad area a repair facility - [x] Extend fuel hoses - [x] Change cba_userconfig with new values - [x] Make objects on repair pad unable to be slingloaded - [x] Figure out a way of extending rearm radius for ammocontainer // THIS IS NOT POSSIBLE - [x] Test all of the above in MP - [x] Write in-game documentation for it - [x] Apply to all missionfiles
test
replace vehicleservice remove vehicleservice in favour of the already existing ammo repair fuel containers remove vehicleservice see if all ace logi stuff reaches entire pad make ace containers indestructible remove cargo spaces from ace containers make the repair pad area a repair facility extend fuel hoses change cba userconfig with new values make objects on repair pad unable to be slingloaded figure out a way of extending rearm radius for ammocontainer this is not possible test all of the above in mp write in game documentation for it apply to all missionfiles
1
112,359
9,561,369,522
IssuesEvent
2019-05-03 22:57:02
kubeflow/testing
https://api.github.com/repos/kubeflow/testing
opened
Do shallow clones of Kubeflow repos in checkout.sh
area/testing help wanted priority/p1
We clone Kubeflow repos in these scripts https://github.com/kubeflow/testing/blob/665fda22af4aa72639efdfd62a4fe2e440cfad7b/images/checkout.sh#L28 https://github.com/kubeflow/testing/blob/665fda22af4aa72639efdfd62a4fe2e440cfad7b/images/checkout_repos.sh#L82 Right now we are cloning the entire history which uses a lot of space. Doing ``` git clone --depth=1 git@github.com:kubeflow/kubeflow.git ``` Appears to cut the size from 286 M to 150M
1.0
Do shallow clones of Kubeflow repos in checkout.sh - We clone Kubeflow repos in these scripts https://github.com/kubeflow/testing/blob/665fda22af4aa72639efdfd62a4fe2e440cfad7b/images/checkout.sh#L28 https://github.com/kubeflow/testing/blob/665fda22af4aa72639efdfd62a4fe2e440cfad7b/images/checkout_repos.sh#L82 Right now we are cloning the entire history which uses a lot of space. Doing ``` git clone --depth=1 git@github.com:kubeflow/kubeflow.git ``` Appears to cut the size from 286 M to 150M
test
do shallow clones of kubeflow repos in checkout sh we clone kubeflow repos in these scripts right now we are cloning the entire history which uses a lot of space doing git clone depth git github com kubeflow kubeflow git appears to cut the size from m to
1
749,578
26,169,945,612
IssuesEvent
2023-01-01 19:49:44
Greenstand/treetracker-admin-client
https://api.github.com/repos/Greenstand/treetracker-admin-client
closed
Grower Tool - Incorrect different types of captures count
type: bug tool: Growers priority
In grower tool, the Grower Detail dialog of each grower shows similar number of captures count for each status (Approved, Awaiting, Rejected), which I believe this is incorrect. See the attached image. ![image](https://user-images.githubusercontent.com/29462498/193455175-66cde9ad-527f-42c3-9a4a-3099a5f58ecb.png)
1.0
Grower Tool - Incorrect different types of captures count - In grower tool, the Grower Detail dialog of each grower shows similar number of captures count for each status (Approved, Awaiting, Rejected), which I believe this is incorrect. See the attached image. ![image](https://user-images.githubusercontent.com/29462498/193455175-66cde9ad-527f-42c3-9a4a-3099a5f58ecb.png)
non_test
grower tool incorrect different types of captures count in grower tool the grower detail dialog of each grower shows similar number of captures count for each status approved awaiting rejected which i believe this is incorrect see the attached image
0
138,354
30,854,683,337
IssuesEvent
2023-08-02 19:33:57
vectordotdev/vector
https://api.github.com/repos/vectordotdev/vector
closed
Incorrectly configuring the protobuf codec causes a panic rather than printing friendly error messages
type: bug domain: codecs
### A note for the community <!-- Please keep this note for the community --> * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!-- Thank you for keeping this note for the community --> ### Problem To fill in ### Configuration _No response_ ### Version master (a06c711028) ### Debug Output _No response_ ### Example Data _No response_ ### Additional Context _No response_ ### References _No response_
1.0
Incorrectly configuring the protobuf codec causes a panic rather than printing friendly error messages - ### A note for the community <!-- Please keep this note for the community --> * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!-- Thank you for keeping this note for the community --> ### Problem To fill in ### Configuration _No response_ ### Version master (a06c711028) ### Debug Output _No response_ ### Example Data _No response_ ### Additional Context _No response_ ### References _No response_
non_test
incorrectly configuring the protobuf codec causes a panic rather than printing friendly error messages a note for the community please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request if you are interested in working on this issue or have submitted a pull request please leave a comment problem to fill in configuration no response version master debug output no response example data no response additional context no response references no response
0
98,982
8,687,161,019
IssuesEvent
2018-12-03 12:59:30
telstra/open-kilda
https://api.github.com/repos/telstra/open-kilda
opened
Tests: introduce basic performance monitoring
area/testing priority/3-normal
As a QA engineer, I want to have some kind of performance monitoring tool, so that I can assess the performance degradation/improvement for certain operations during general functional-tests runs. AC: 1. Measure performance per "operation". Probably per each method call for all classes in `org.openkilda.testing.service` package. 2. Ability to get average statistics per operation among all calls during test run (min/max/average/percentile etc.) 3. Give preference to box solutions, probably from Spring (spring-aop or similar)
1.0
Tests: introduce basic performance monitoring - As a QA engineer, I want to have some kind of performance monitoring tool, so that I can assess the performance degradation/improvement for certain operations during general functional-tests runs. AC: 1. Measure performance per "operation". Probably per each method call for all classes in `org.openkilda.testing.service` package. 2. Ability to get average statistics per operation among all calls during test run (min/max/average/percentile etc.) 3. Give preference to box solutions, probably from Spring (spring-aop or similar)
test
tests introduce basic performance monitoring as a qa engineer i want to have some kind of performance monitoring tool so that i can assess the performance degradation improvement for certain operations during general functional tests runs ac measure performance per operation probably per each method call for all classes in org openkilda testing service package ability to get average statistics per operation among all calls during test run min max average percentile etc give preference to box solutions probably from spring spring aop or similar
1
157,761
12,390,324,857
IssuesEvent
2020-05-20 10:28:20
kubernetes-sigs/cluster-api-provider-digitalocean
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-digitalocean
closed
Add e2e test for MachineDeployment api
area/testing priority/important-longterm
**Detailed Description** We have added an initial e2e test as in this PR https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/pull/148. The PR is creating a cluster with a single controlplane and one worker that both of them only use `Machine` api. So the `MachineDeployment` api is not covered yet. We need to add e2e test for `MachineDeployment` api by creating a cluster with a single controlplane use `Machine` api and multiple worker use `MachineDeployment`. It would be better if we also add a scaling scenario.
1.0
Add e2e test for MachineDeployment api - **Detailed Description** We have added an initial e2e test as in this PR https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/pull/148. The PR is creating a cluster with a single controlplane and one worker that both of them only use `Machine` api. So the `MachineDeployment` api is not covered yet. We need to add e2e test for `MachineDeployment` api by creating a cluster with a single controlplane use `Machine` api and multiple worker use `MachineDeployment`. It would be better if we also add a scaling scenario.
test
add test for machinedeployment api detailed description we have added an initial test as in this pr the pr is creating a cluster with a single controlplane and one worker that both of them only use machine api so the machinedeployment api is not covered yet we need to add test for machinedeployment api by creating a cluster with a single controlplane use machine api and multiple worker use machinedeployment it would be better if we also add a scaling scenario
1
20,614
11,484,441,027
IssuesEvent
2020-02-11 03:42:42
microsoft/vscode-cpptools
https://api.github.com/repos/microsoft/vscode-cpptools
closed
I need help finding out what the issue with my c_cpp_properties.json file is, lots of context in post
Feature: Configuration Language Service investigate
**Type: LanguageService** <!----- Input information below -----> Please forgive any formatting issues: first Github issue submission. <!-- **Prior to filing an issue, please review:** - Existing issues at https://github.com/Microsoft/vscode-cpptools/issues - Our documentation at https://code.visualstudio.com/docs/languages/cpp - FAQs at https://code.visualstudio.com/docs/cpp/faq-cpp --> **Describe the bug** VSCode Details Version: 1.41.1 Commit: 26076a4de974ead31f97692a0d32f90d735645c0 Date: 2019-12-18T15:04:31.999Z Electron: 6.1.5 Chrome: 76.0.3809.146 Node.js: 12.4.0 V8: 7.6.303.31-electron.0 OS: Linux x64 4.13.0-43-generic (Mint 18.3) C/C++ Extension Version: 0.26.3 Other extensions installed: * Cortex-Debug * Default Dark+ Contrast * Material Icon Theme All errors persist with all above extensions disabled Description of Bug: I just tried swapping over to VsCode as an IDE and I'm trying to get intellisense to work with the C/C++ extension. Originally I tried it with some nRF52 projects and after a day or two gave up and decided to try on something I could understand a bit better. So now I'm trying to get a simple example esp32 Arduino project to not have any "problems". Unfortunately I'm consistently getting "#include errors detected. Please update your includePath. Squiggles are disabled for this translation unit" problems occur (and often also "cannot open source file "XXX.h" (dependency of "YYY.h")" problems). This example project I'm using is being compiled using a clone of this repo (https://github.com/espressif/arduino-esp32) and I am compiling it using a modified version of this makefile (https://github.com/plerup/makeEspArduino/blob/master/makeEspArduino.mk - I can provide my modified version if needed). The main information to take from this is that the project is being compiled with a bin from that directory. Every project I'm testing with compiles without any errors. So my first step was to locate all the files that needed to be included for the project. I actually started with auto-generating the c_cpp_properties.json file from the makefile, directly using the directories used in the '-Idir' flags of the compilation. But I've also now started testing with recursive searches to higher directories for simplicity. So my includePath was as below (please note that these are definitely containing all libraries used by my project [but not necessarily the compiler command] because they are the only directories searched for includes in my makefile too). "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**" NOTE: in this example I have cloned the repo from https://github.com/espressif/arduino-esp32 to '/home/mike/softwarePackageStorage/arduino_ESP/esp32' and my example test project is in '/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting'. The next part I learned about was making sure to include all of the files included by your compiler itself and the latest advice seemed to be to achieve this by setting your compilerPath. 
So given my makefile is compiling with a binary from within the esp32 repo, I put that command there as shown below: "compilerPath": "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++" This still resulted in the same errors, so then I went to include any definitions that my makefile was putting into the compilation command in case that was making a difference. So I set up my makefile to automatically set these in the c_cpp_properties.json file along with the includePaths and compilerPath. This resulted in my defines section looking as below: "defines" : [ "-DESP_PLATFORM", "-DMBEDTLS_CONFIG_FILE=\"mbedtls/esp_config.h\"", "-DHAVE_CONFIG_H", "-DGCC_NOT_5_2_0=0", "-DWITH_POSIX", "-DF_CPU=240000000L", "-DARDUINO=10605", "-DARDUINO_M5Stack_Core_ESP32", "-DARDUINO_ARCH_ESP32", "-DARDUINO_BOARD=\"M5Stack_Core_ESP32\"", "-DARDUINO_VARIANT=\"m5stack_core_esp32\"", "-DESP32", "-DCORE_DEBUG_LEVEL=0" ], This still resulted in the same errors, so after a couple days of banging my head against the wall and searching github issues, I saw bobbrow on several github issues repeatedly recommending to run: g++ -Wp,-v -E -xc -x c++ /dev/null Then "The "includePath" in your c_cpp_properties.json file should match that". If I'm understanding this correctly, this was accounted for with the compilerPath setting but I tried this anyway. So from my understanding: According to https://linux.die.net/man/1/g++ (because the man page is annoying to traverse): -Wp,option passes all options after the ',' directly onto the pre-processor -v Print (on standard error output) the commands executed to run the stages of compilation. Also print the version number of the compiler driver program and of the preprocessor and the compiler proper. -E Stop after the preprocessing stage; do not run the compiler proper. The output is in the form of preprocessed source code, which is sent to the standard output. -xc Specify explicitly the language for the following input files as c -x c++ Specify explicitly the language for the following input files as c++ /dev/null I'm guessing means just supply no actual files so no actual compilation occurs past the bare minimum. So I'm presuming this should be printing out the steps in the process of compiling a null project which would mean printing include files as it finds them to bring in (because of -v) to compile c and c++ files. Then it stops after pre-processing and doesn't go further. 
For me, because I'm using the compiler in the esp32 build repo mentioned above, I've been running this command to try and achieve the same as above: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++ -Wp,-v -E -xc -x c++ /dev/null and I get the following output: ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/xtensa-esp32-elf" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/backward" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/include" ignoring nonexistent directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../xtensa-esp32-elf/sysroot/builds/idf/crosstool-NG/builds/xtensa-esp32-elf/xtensa-esp32-elf/sysroot/include" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/include-fixed" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include" #include "..." search starts here: #include <...> search starts here: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0 /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/xtensa-esp32-elf /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/backward /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/include-fixed /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../xtensa-esp32-elf/sysroot/usr/include End of search list. # 1 "/dev/null" # 1 "<built-in>" # 1 "<command-line>" # 1 "/dev/null" Note that I also did the same with "xtensa-esp32-elf-gcc" in the same directory and got the same results. Because all of the file names I see are definitely in this directory, I just include it with a recursive search using ** /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/** But also of note: I have tested including each individual directory both as listed with the relative jumps and as a realpath. Next, to make sure I was covering all bases, I also ran the command using standard g++ in case that gets called by the esp32 compiler or something that I don't understant. 
This yields the following output: ignoring duplicate directory "/usr/include/x86_64-linux-gnu/c++/5" ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu" ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/5/../../../../x86_64-linux-gnu/include" #include "..." search starts here: #include <...> search starts here: /usr/include/c++/5 /usr/include/x86_64-linux-gnu/c++/5 /usr/include/c++/5/backward /usr/lib/gcc/x86_64-linux-gnu/5/include /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/5/include-fixed /usr/include/x86_64-linux-gnu /usr/include End of search list. # 1 "/dev/null" # 1 "<built-in>" # 1 "<command-line>" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 1 "<command-line>" 2 # 1 "/dev/null" So from this, I also tried including these directories: "/usr/include/c++/5", "/usr/include/x86_64-linux-gnu/c++/5", "/usr/include/c++/5/backward", "/usr/lib/gcc/x86_64-linux-gnu/5/include", "/usr/local/include", "/usr/lib/gcc/x86_64-linux-gnu/5/include-fixed", "/usr/include/x86_64-linux-gnu", "/usr/include" From all of these attempts, I consistently always had at least "#include errors detected. Please update your includePath. Squiggles are disabled for this translation unit" problems and usually also "cannot open source file "XXX.h" (dependency of "YYY.h")" problems occur. So after more headbashing and reading github issues, I wanted to verify that the correct directories were actually being included by vscode/the extension with the diagnostics logging. So first, my current c_cpp_properties.json file is as below: { "version": 4, "env": {}, "configurations": [ { "defines": [ "ESP_PLATFORM", "MBEDTLS_CONFIG_FILE=\"mbedtls/esp_config.h\"", "HAVE_CONFIG_H", "GCC_NOT_5_2_0=0", "WITH_POSIX", "F_CPU=240000000L", "ARDUINO=10605", "ARDUINO_M5Stack_Core_ESP32", "ARDUINO_ARCH_ESP32", "ARDUINO_BOARD=\"M5Stack_Core_ESP32\"", "ARDUINO_VARIANT=\"m5stack_core_esp32\"", "ESP32", "CORE_DEBUG_LEVEL=0" ], "cStandard": "c11", "browse": { "limitSymbolsToIncludedHeaders": false }, "includePath": [ "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/**" ], "intelliSenseMode": "gcc-x64", "cppStandard": "c++17", "compilerPath": "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++", "name": "test" } ] } (Note that I only set limitSymbolsToIncludedHeaders to false after I noticed in the Log Diagnostics that it was defaulting to true and I wanted to make sure that wasn't causing issues either.) I then opened the C/C++ log Diagnostics panel through the command input and saw the output below. Note that the the directories that are included in the 'Includes' are correctly identifying the locations of the source files that are directly and indirectly linked into my project. 
-------- Diagnostics - 09/02/2020, 3:46:27 pm Version: 0.26.3 Current Configuration: { "defines": [ "ESP_PLATFORM", "MBEDTLS_CONFIG_FILE=\"mbedtls/esp_config.h\"", "HAVE_CONFIG_H", "GCC_NOT_5_2_0=0", "WITH_POSIX", "F_CPU=240000000L", "ARDUINO=10605", "ARDUINO_M5Stack_Core_ESP32", "ARDUINO_ARCH_ESP32", "ARDUINO_BOARD=\"M5Stack_Core_ESP32\"", "ARDUINO_VARIANT=\"m5stack_core_esp32\"", "ESP32", "CORE_DEBUG_LEVEL=0" ], "cStandard": "c11", "browse": { "limitSymbolsToIncludedHeaders": false, "path": [ "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/**", "${workspaceFolder}" ] }, "includePath": [ "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/**" ], "intelliSenseMode": "gcc-x64", "cppStandard": "c++17", "compilerPath": "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++", "name": "test", "compilerArgs": [] } Translation Unit Mappings: [ /home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/test.ino ]: /home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/test.ino Translation Unit Configurations: [ /home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/test.ino ]: Process ID: 22224 Memory Usage: 177 MB Compiler Path: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++ Includes: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0 /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/xtensa-esp32-elf /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/backward /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/lib/gcc/xtensa-esp32-elf/5.2.0/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/lib/gcc/xtensa-esp32-elf/5.2.0/include-fixed /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/sysroot/usr/include /home/mike/Arduino/libraries/M5ez/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/WiFi/src /home/mike/Arduino/libraries/M5Stack/src /home/mike/Arduino/libraries/ezTime/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32 /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/Wire/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/SPI/src 
/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/FS/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/SD/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32 /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/SPIFFS/src Defines: ESP_PLATFORM MBEDTLS_CONFIG_FILE="mbedtls/esp_config.h" HAVE_CONFIG_H GCC_NOT_5_2_0=0 WITH_POSIX F_CPU=240000000L ARDUINO=10605 ARDUINO_M5Stack_Core_ESP32 ARDUINO_ARCH_ESP32 ARDUINO_BOARD="M5Stack_Core_ESP32" ARDUINO_VARIANT="m5stack_core_esp32" ESP32 CORE_DEBUG_LEVEL=0 Standard Version: c++17 IntelliSense Mode: gcc-x64 Other Flags: --g++ --gnu_version=50200 --header_only_fallback Total Memory Usage: 177 MB So I'm now at a loss as to what I do next. I still have the same "problems", click-through references still take me to the correct files though. The only thing I can think to do is to also set the compilerArgs in case that has any significant effect but I've already set the -I and -D flags with the includePath and defines so I don't really see what difference it could make. Any advice that takes into account the information I've provided here would be great. **To Reproduce** <!-- Steps to reproduce the behavior: --> <!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* --> It might be a bit frustrating to setup the full system to reproduce exactly, if someone can walk me through an example that I can setup to see if the issue persists there I can do so to try and generate a minimum system that repros the issue. Projects I'm using in my example: * https://github.com/espressif/arduino-esp32 (compiler and core tools) * https://github.com/plerup/makeEspArduino (makefile - I'm using a modified version, can be provided if necessary) * https://github.com/ropg/M5ez (library I'm using - example I'm using for this is the 'Hello World' example in the example subdirectory of this repo) * https://github.com/m5stack/M5Stack (dependency of the above library) **Expected behavior** <!-- A clear and concise description of what you expected to happen. -- I expect there to not be any "problems" being raised at all. **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> ![Screenshot_20200209_162310](https://user-images.githubusercontent.com/1224148/74099117-2ac6cc00-4b5b-11ea-80fc-b7248b956bf4.png) ![Screenshot_20200209_162256](https://user-images.githubusercontent.com/1224148/74099120-3914e800-4b5b-11ea-8b7f-0397661200c5.png) **Additional context** <!-- * Call Stacks: For bugs like crashes, deadlocks, infinite loops, etc. that we are not able to repro and for which the call stack may be useful, please attach a debugger and/or create a dmp and provide the call stacks. Windows binaries have symbols available in VS Code by setting your "symbolSearchPath" to "https://msdl.microsoft.com/download/symbols". * Add any other context about the problem here including log messages in your Output window ("C_Cpp.loggingLevel": "Debug" in settings.json). -->
1.0
I need help finding out what the issue with my c_cpp_properties.json file is, lots of context in post - **Type: LanguageService** <!----- Input information below -----> Please forgive any formatting issues: first Github issue submission. <!-- **Prior to filing an issue, please review:** - Existing issues at https://github.com/Microsoft/vscode-cpptools/issues - Our documentation at https://code.visualstudio.com/docs/languages/cpp - FAQs at https://code.visualstudio.com/docs/cpp/faq-cpp --> **Describe the bug** VSCode Details Version: 1.41.1 Commit: 26076a4de974ead31f97692a0d32f90d735645c0 Date: 2019-12-18T15:04:31.999Z Electron: 6.1.5 Chrome: 76.0.3809.146 Node.js: 12.4.0 V8: 7.6.303.31-electron.0 OS: Linux x64 4.13.0-43-generic (Mint 18.3) C/C++ Extension Version: 0.26.3 Other extensions installed: * Cortex-Debug * Default Dark+ Contrast * Material Icon Theme All errors persist with all above extensions disabled Description of Bug: I just tried swapping over to VsCode as an IDE and I'm trying to get intellisense to work with the C/C++ extension. Originally I tried it with some nRF52 projects and after a day or two gave up and decided to try on something I could understand a bit better. So now I'm trying to get a simple example esp32 Arduino project to not have any "problems". Unfortunately I'm consistently getting "#include errors detected. Please update your includePath. Squiggles are disabled for this translation unit" problems occur (and often also "cannot open source file "XXX.h" (dependency of "YYY.h")" problems). This example project I'm using is being compiled using a clone of this repo (https://github.com/espressif/arduino-esp32) and I am compiling it using a modified version of this makefile (https://github.com/plerup/makeEspArduino/blob/master/makeEspArduino.mk - I can provide my modified version if needed). The main information to take from this is that the project is being compiled with a bin from that directory. Every project I'm testing with compiles without any errors. So my first step was to locate all the files that needed to be included for the project. I actually started with auto-generating the c_cpp_properties.json file from the makefile, directly using the directories used in the '-Idir' flags of the compilation. But I've also now started testing with recursive searches to higher directories for simplicity. So my includePath was as below (please note that these are definitely containing all libraries used by my project [but not necessarily the compiler command] because they are the only directories searched for includes in my makefile too). "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**" NOTE: in this example I have cloned the repo from https://github.com/espressif/arduino-esp32 to '/home/mike/softwarePackageStorage/arduino_ESP/esp32' and my example test project is in '/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting'. The next part I learned about was making sure to include all of the files included by your compiler itself and the latest advice seemed to be to achieve this by setting your compilerPath. 
So given my makefile is compiling with a binary from within the esp32 repo, I put that command there as shown below: "compilerPath": "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++" This still resulted in the same errors, so then I went to include any definitions that my makefile was putting into the compilation command in case that was making a difference. So I set up my makefile to automatically set these in the c_cpp_properties.json file along with the includePaths and compilerPath. This resulted in my defines section looking as below: "defines" : [ "-DESP_PLATFORM", "-DMBEDTLS_CONFIG_FILE=\"mbedtls/esp_config.h\"", "-DHAVE_CONFIG_H", "-DGCC_NOT_5_2_0=0", "-DWITH_POSIX", "-DF_CPU=240000000L", "-DARDUINO=10605", "-DARDUINO_M5Stack_Core_ESP32", "-DARDUINO_ARCH_ESP32", "-DARDUINO_BOARD=\"M5Stack_Core_ESP32\"", "-DARDUINO_VARIANT=\"m5stack_core_esp32\"", "-DESP32", "-DCORE_DEBUG_LEVEL=0" ], This still resulted in the same errors, so after a couple days of banging my head against the wall and searching github issues, I saw bobbrow on several github issues repeatedly recommending to run: g++ -Wp,-v -E -xc -x c++ /dev/null Then "The "includePath" in your c_cpp_properties.json file should match that". If I'm understanding this correctly, this was accounted for with the compilerPath setting but I tried this anyway. So from my understanding: According to https://linux.die.net/man/1/g++ (because the man page is annoying to traverse): -Wp,option passes all options after the ',' directly onto the pre-processor -v Print (on standard error output) the commands executed to run the stages of compilation. Also print the version number of the compiler driver program and of the preprocessor and the compiler proper. -E Stop after the preprocessing stage; do not run the compiler proper. The output is in the form of preprocessed source code, which is sent to the standard output. -xc Specify explicitly the language for the following input files as c -x c++ Specify explicitly the language for the following input files as c++ /dev/null I'm guessing means just supply no actual files so no actual compilation occurs past the bare minimum. So I'm presuming this should be printing out the steps in the process of compiling a null project which would mean printing include files as it finds them to bring in (because of -v) to compile c and c++ files. Then it stops after pre-processing and doesn't go further. 
For me, because I'm using the compiler in the esp32 build repo mentioned above, I've been running this command to try and achieve the same as above: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++ -Wp,-v -E -xc -x c++ /dev/null and I get the following output: ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/xtensa-esp32-elf" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/backward" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/include" ignoring nonexistent directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../xtensa-esp32-elf/sysroot/builds/idf/crosstool-NG/builds/xtensa-esp32-elf/xtensa-esp32-elf/sysroot/include" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/include-fixed" ignoring duplicate directory "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/../../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include" #include "..." search starts here: #include <...> search starts here: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0 /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/xtensa-esp32-elf /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include/c++/5.2.0/backward /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/include-fixed /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/../xtensa-esp32-elf/sysroot/usr/include End of search list. # 1 "/dev/null" # 1 "<built-in>" # 1 "<command-line>" # 1 "/dev/null" Note that I also did the same with "xtensa-esp32-elf-gcc" in the same directory and got the same results. Because all of the file names I see are definitely in this directory, I just include it with a recursive search using ** /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/** But also of note: I have tested including each individual directory both as listed with the relative jumps and as a realpath. Next, to make sure I was covering all bases, I also ran the command using standard g++ in case that gets called by the esp32 compiler or something that I don't understant. 
This yields the following output: ignoring duplicate directory "/usr/include/x86_64-linux-gnu/c++/5" ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu" ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/5/../../../../x86_64-linux-gnu/include" #include "..." search starts here: #include <...> search starts here: /usr/include/c++/5 /usr/include/x86_64-linux-gnu/c++/5 /usr/include/c++/5/backward /usr/lib/gcc/x86_64-linux-gnu/5/include /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/5/include-fixed /usr/include/x86_64-linux-gnu /usr/include End of search list. # 1 "/dev/null" # 1 "<built-in>" # 1 "<command-line>" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 1 "<command-line>" 2 # 1 "/dev/null" So from this, I also tried including these directories: "/usr/include/c++/5", "/usr/include/x86_64-linux-gnu/c++/5", "/usr/include/c++/5/backward", "/usr/lib/gcc/x86_64-linux-gnu/5/include", "/usr/local/include", "/usr/lib/gcc/x86_64-linux-gnu/5/include-fixed", "/usr/include/x86_64-linux-gnu", "/usr/include" From all of these attempts, I consistently always had at least "#include errors detected. Please update your includePath. Squiggles are disabled for this translation unit" problems and usually also "cannot open source file "XXX.h" (dependency of "YYY.h")" problems occur. So after more headbashing and reading github issues, I wanted to verify that the correct directories were actually being included by vscode/the extension with the diagnostics logging. So first, my current c_cpp_properties.json file is as below: { "version": 4, "env": {}, "configurations": [ { "defines": [ "ESP_PLATFORM", "MBEDTLS_CONFIG_FILE=\"mbedtls/esp_config.h\"", "HAVE_CONFIG_H", "GCC_NOT_5_2_0=0", "WITH_POSIX", "F_CPU=240000000L", "ARDUINO=10605", "ARDUINO_M5Stack_Core_ESP32", "ARDUINO_ARCH_ESP32", "ARDUINO_BOARD=\"M5Stack_Core_ESP32\"", "ARDUINO_VARIANT=\"m5stack_core_esp32\"", "ESP32", "CORE_DEBUG_LEVEL=0" ], "cStandard": "c11", "browse": { "limitSymbolsToIncludedHeaders": false }, "includePath": [ "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/**" ], "intelliSenseMode": "gcc-x64", "cppStandard": "c++17", "compilerPath": "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++", "name": "test" } ] } (Note that I only set limitSymbolsToIncludedHeaders to false after I noticed in the Log Diagnostics that it was defaulting to true and I wanted to make sure that wasn't causing issues either.) I then opened the C/C++ log Diagnostics panel through the command input and saw the output below. Note that the the directories that are included in the 'Includes' are correctly identifying the locations of the source files that are directly and indirectly linked into my project. 
-------- Diagnostics - 09/02/2020, 3:46:27 pm Version: 0.26.3 Current Configuration: { "defines": [ "ESP_PLATFORM", "MBEDTLS_CONFIG_FILE=\"mbedtls/esp_config.h\"", "HAVE_CONFIG_H", "GCC_NOT_5_2_0=0", "WITH_POSIX", "F_CPU=240000000L", "ARDUINO=10605", "ARDUINO_M5Stack_Core_ESP32", "ARDUINO_ARCH_ESP32", "ARDUINO_BOARD=\"M5Stack_Core_ESP32\"", "ARDUINO_VARIANT=\"m5stack_core_esp32\"", "ESP32", "CORE_DEBUG_LEVEL=0" ], "cStandard": "c11", "browse": { "limitSymbolsToIncludedHeaders": false, "path": [ "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/**", "${workspaceFolder}" ] }, "includePath": [ "/home/mike/Arduino/libraries/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32/**", "/home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/build_files/test_m5stack-core-esp32/**", "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/**" ], "intelliSenseMode": "gcc-x64", "cppStandard": "c++17", "compilerPath": "/home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++", "name": "test", "compilerArgs": [] } Translation Unit Mappings: [ /home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/test.ino ]: /home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/test.ino Translation Unit Configurations: [ /home/mike/Documents/HAX/lockerGui/checkoutDeviceFirmware/intellisenseTesting/test.ino ]: Process ID: 22224 Memory Usage: 177 MB Compiler Path: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++ Includes: /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0 /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/xtensa-esp32-elf /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/backward /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/lib/gcc/xtensa-esp32-elf/5.2.0/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/lib/gcc/xtensa-esp32-elf/5.2.0/include-fixed /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/include /home/mike/softwarePackageStorage/arduino_ESP/esp32/tools/xtensa-esp32-elf/xtensa-esp32-elf/sysroot/usr/include /home/mike/Arduino/libraries/M5ez/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/WiFi/src /home/mike/Arduino/libraries/M5Stack/src /home/mike/Arduino/libraries/ezTime/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/cores/esp32 /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/Wire/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/SPI/src 
/home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/FS/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/SD/src /home/mike/softwarePackageStorage/arduino_ESP/esp32/variants/m5stack_core_esp32 /home/mike/softwarePackageStorage/arduino_ESP/esp32/libraries/SPIFFS/src Defines: ESP_PLATFORM MBEDTLS_CONFIG_FILE="mbedtls/esp_config.h" HAVE_CONFIG_H GCC_NOT_5_2_0=0 WITH_POSIX F_CPU=240000000L ARDUINO=10605 ARDUINO_M5Stack_Core_ESP32 ARDUINO_ARCH_ESP32 ARDUINO_BOARD="M5Stack_Core_ESP32" ARDUINO_VARIANT="m5stack_core_esp32" ESP32 CORE_DEBUG_LEVEL=0 Standard Version: c++17 IntelliSense Mode: gcc-x64 Other Flags: --g++ --gnu_version=50200 --header_only_fallback Total Memory Usage: 177 MB So I'm now at a loss as to what I do next. I still have the same "problems", click-through references still take me to the correct files though. The only thing I can think to do is to also set the compilerArgs in case that has any significant effect but I've already set the -I and -D flags with the includePath and defines so I don't really see what difference it could make. Any advice that takes into account the information I've provided here would be great. **To Reproduce** <!-- Steps to reproduce the behavior: --> <!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* --> It might be a bit frustrating to setup the full system to reproduce exactly, if someone can walk me through an example that I can setup to see if the issue persists there I can do so to try and generate a minimum system that repros the issue. Projects I'm using in my example: * https://github.com/espressif/arduino-esp32 (compiler and core tools) * https://github.com/plerup/makeEspArduino (makefile - I'm using a modified version, can be provided if necessary) * https://github.com/ropg/M5ez (library I'm using - example I'm using for this is the 'Hello World' example in the example subdirectory of this repo) * https://github.com/m5stack/M5Stack (dependency of the above library) **Expected behavior** <!-- A clear and concise description of what you expected to happen. -- I expect there to not be any "problems" being raised at all. **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> ![Screenshot_20200209_162310](https://user-images.githubusercontent.com/1224148/74099117-2ac6cc00-4b5b-11ea-80fc-b7248b956bf4.png) ![Screenshot_20200209_162256](https://user-images.githubusercontent.com/1224148/74099120-3914e800-4b5b-11ea-8b7f-0397661200c5.png) **Additional context** <!-- * Call Stacks: For bugs like crashes, deadlocks, infinite loops, etc. that we are not able to repro and for which the call stack may be useful, please attach a debugger and/or create a dmp and provide the call stacks. Windows binaries have symbols available in VS Code by setting your "symbolSearchPath" to "https://msdl.microsoft.com/download/symbols". * Add any other context about the problem here including log messages in your Output window ("C_Cpp.loggingLevel": "Debug" in settings.json). -->
non_test
i need help finding out what the issue with my c cpp properties json file is lots of context in post type languageservice please forgive any formatting issues first github issue submission prior to filing an issue please review existing issues at our documentation at faqs at describe the bug vscode details version commit date electron chrome node js electron os linux generic mint c c extension version other extensions installed cortex debug default dark contrast material icon theme all errors persist with all above extensions disabled description of bug i just tried swapping over to vscode as an ide and i m trying to get intellisense to work with the c c extension originally i tried it with some projects and after a day or two gave up and decided to try on something i could understand a bit better so now i m trying to get a simple example arduino project to not have any problems unfortunately i m consistently getting include errors detected please update your includepath squiggles are disabled for this translation unit problems occur and often also cannot open source file xxx h dependency of yyy h problems this example project i m using is being compiled using a clone of this repo and i am compiling it using a modified version of this makefile i can provide my modified version if needed the main information to take from this is that the project is being compiled with a bin from that directory every project i m testing with compiles without any errors so my first step was to locate all the files that needed to be included for the project i actually started with auto generating the c cpp properties json file from the makefile directly using the directories used in the idir flags of the compilation but i ve also now started testing with recursive searches to higher directories for simplicity so my includepath was as below please note that these are definitely containing all libraries used by my project because they are the only directories searched for includes in my makefile too home mike arduino libraries home mike softwarepackagestorage arduino esp libraries home mike documents hax lockergui checkoutdevicefirmware intellisensetesting home mike softwarepackagestorage arduino esp cores home mike softwarepackagestorage arduino esp variants core home mike documents hax lockergui checkoutdevicefirmware intellisensetesting build files test core note in this example i have cloned the repo from to home mike softwarepackagestorage arduino esp and my example test project is in home mike documents hax lockergui checkoutdevicefirmware intellisensetesting the next part i learned about was making sure to include all of the files included by your compiler itself and the latest advice seemed to be to achieve this by setting your compilerpath so given my makefile is compiling with a binary from within the repo i put that command there as shown below compilerpath home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf g this still resulted in the same errors so then i went to include any definitions that my makefile was putting into the compilation command in case that was making a difference so i set up my makefile to automatically set these in the c cpp properties json file along with the includepaths and compilerpath this resulted in my defines section looking as below defines desp platform dmbedtls config file mbedtls esp config h dhave config h dgcc not dwith posix df cpu darduino darduino core darduino arch darduino board core darduino variant core dcore debug level this still 
resulted in the same errors so after a couple days of banging my head against the wall and searching github issues i saw bobbrow on several github issues repeatedly recommending to run g wp v e xc x c dev null then the includepath in your c cpp properties json file should match that if i m understanding this correctly this was accounted for with the compilerpath setting but i tried this anyway so from my understanding according to because the man page is annoying to traverse wp option passes all options after the directly onto the pre processor v print on standard error output the commands executed to run the stages of compilation also print the version number of the compiler driver program and of the preprocessor and the compiler proper e stop after the preprocessing stage do not run the compiler proper the output is in the form of preprocessed source code which is sent to the standard output xc specify explicitly the language for the following input files as c x c specify explicitly the language for the following input files as c dev null i m guessing means just supply no actual files so no actual compilation occurs past the bare minimum so i m presuming this should be printing out the steps in the process of compiling a null project which would mean printing include files as it finds them to bring in because of v to compile c and c files then it stops after pre processing and doesn t go further for me because i m using the compiler in the build repo mentioned above i ve been running this command to try and achieve the same as above home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf g wp v e xc x c dev null and i get the following output ignoring duplicate directory home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc lib gcc xtensa elf xtensa elf include c ignoring duplicate directory home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc lib gcc xtensa elf xtensa elf include c xtensa elf ignoring duplicate directory home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc lib gcc xtensa elf xtensa elf include c backward ignoring duplicate directory home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc lib gcc xtensa elf include ignoring nonexistent directory home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf sysroot builds idf crosstool ng builds xtensa elf xtensa elf sysroot include ignoring duplicate directory home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc lib gcc xtensa elf include fixed ignoring duplicate directory home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc lib gcc xtensa elf xtensa elf include include search starts here include search starts here home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc xtensa elf xtensa elf include c home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc xtensa elf xtensa elf include c xtensa elf home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc xtensa elf xtensa elf include c backward home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc xtensa elf include home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc xtensa elf include fixed home mike softwarepackagestorage arduino esp tools xtensa elf bin lib gcc xtensa elf xtensa elf include home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf sysroot usr include end of search list dev null dev 
null note that i also did the same with xtensa elf gcc in the same directory and got the same results because all of the file names i see are definitely in this directory i just include it with a recursive search using home mike softwarepackagestorage arduino esp tools xtensa elf but also of note i have tested including each individual directory both as listed with the relative jumps and as a realpath next to make sure i was covering all bases i also ran the command using standard g in case that gets called by the compiler or something that i don t understant this yields the following output ignoring duplicate directory usr include linux gnu c ignoring nonexistent directory usr local include linux gnu ignoring nonexistent directory usr lib gcc linux gnu linux gnu include include search starts here include search starts here usr include c usr include linux gnu c usr include c backward usr lib gcc linux gnu include usr local include usr lib gcc linux gnu include fixed usr include linux gnu usr include end of search list dev null usr include stdc predef h dev null so from this i also tried including these directories usr include c usr include linux gnu c usr include c backward usr lib gcc linux gnu include usr local include usr lib gcc linux gnu include fixed usr include linux gnu usr include from all of these attempts i consistently always had at least include errors detected please update your includepath squiggles are disabled for this translation unit problems and usually also cannot open source file xxx h dependency of yyy h problems occur so after more headbashing and reading github issues i wanted to verify that the correct directories were actually being included by vscode the extension with the diagnostics logging so first my current c cpp properties json file is as below version env configurations defines esp platform mbedtls config file mbedtls esp config h have config h gcc not with posix f cpu arduino arduino core arduino arch arduino board core arduino variant core core debug level cstandard browse limitsymbolstoincludedheaders false includepath home mike arduino libraries home mike softwarepackagestorage arduino esp libraries home mike documents hax lockergui checkoutdevicefirmware intellisensetesting home mike softwarepackagestorage arduino esp cores home mike softwarepackagestorage arduino esp variants core home mike documents hax lockergui checkoutdevicefirmware intellisensetesting build files test core home mike softwarepackagestorage arduino esp tools xtensa elf intellisensemode gcc cppstandard c compilerpath home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf g name test note that i only set limitsymbolstoincludedheaders to false after i noticed in the log diagnostics that it was defaulting to true and i wanted to make sure that wasn t causing issues either i then opened the c c log diagnostics panel through the command input and saw the output below note that the the directories that are included in the includes are correctly identifying the locations of the source files that are directly and indirectly linked into my project diagnostics pm version current configuration defines esp platform mbedtls config file mbedtls esp config h have config h gcc not with posix f cpu arduino arduino core arduino arch arduino board core arduino variant core core debug level cstandard browse limitsymbolstoincludedheaders false path home mike arduino libraries home mike softwarepackagestorage arduino esp libraries home mike documents hax lockergui 
checkoutdevicefirmware intellisensetesting home mike softwarepackagestorage arduino esp cores home mike softwarepackagestorage arduino esp variants core home mike documents hax lockergui checkoutdevicefirmware intellisensetesting build files test core home mike softwarepackagestorage arduino esp tools xtensa elf workspacefolder includepath home mike arduino libraries home mike softwarepackagestorage arduino esp libraries home mike documents hax lockergui checkoutdevicefirmware intellisensetesting home mike softwarepackagestorage arduino esp cores home mike softwarepackagestorage arduino esp variants core home mike documents hax lockergui checkoutdevicefirmware intellisensetesting build files test core home mike softwarepackagestorage arduino esp tools xtensa elf intellisensemode gcc cppstandard c compilerpath home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf g name test compilerargs translation unit mappings home mike documents hax lockergui checkoutdevicefirmware intellisensetesting test ino translation unit configurations process id memory usage mb compiler path home mike softwarepackagestorage arduino esp tools xtensa elf bin xtensa elf g includes home mike softwarepackagestorage arduino esp tools xtensa elf xtensa elf include c home mike softwarepackagestorage arduino esp tools xtensa elf xtensa elf include c xtensa elf home mike softwarepackagestorage arduino esp tools xtensa elf xtensa elf include c backward home mike softwarepackagestorage arduino esp tools xtensa elf lib gcc xtensa elf include home mike softwarepackagestorage arduino esp tools xtensa elf lib gcc xtensa elf include fixed home mike softwarepackagestorage arduino esp tools xtensa elf xtensa elf include home mike softwarepackagestorage arduino esp tools xtensa elf xtensa elf sysroot usr include home mike arduino libraries src home mike softwarepackagestorage arduino esp libraries wifi src home mike arduino libraries src home mike arduino libraries eztime src home mike softwarepackagestorage arduino esp cores home mike softwarepackagestorage arduino esp libraries wire src home mike softwarepackagestorage arduino esp libraries spi src home mike softwarepackagestorage arduino esp libraries fs src home mike softwarepackagestorage arduino esp libraries sd src home mike softwarepackagestorage arduino esp variants core home mike softwarepackagestorage arduino esp libraries spiffs src defines esp platform mbedtls config file mbedtls esp config h have config h gcc not with posix f cpu arduino arduino core arduino arch arduino board core arduino variant core core debug level standard version c intellisense mode gcc other flags g gnu version header only fallback total memory usage mb so i m now at a loss as to what i do next i still have the same problems click through references still take me to the correct files though the only thing i can think to do is to also set the compilerargs in case that has any significant effect but i ve already set the i and d flags with the includepath and defines so i don t really see what difference it could make any advice that takes into account the information i ve provided here would be great to reproduce it might be a bit frustrating to setup the full system to reproduce exactly if someone can walk me through an example that i can setup to see if the issue persists there i can do so to try and generate a minimum system that repros the issue projects i m using in my example compiler and core tools makefile i m using a modified version can be provided if necessary 
library i m using example i m using for this is the hello world example in the example subdirectory of this repo dependency of the above library expected behavior a clear and concise description of what you expected to happen i expect there to not be any problems being raised at all screenshots additional context call stacks for bugs like crashes deadlocks infinite loops etc that we are not able to repro and for which the call stack may be useful please attach a debugger and or create a dmp and provide the call stacks windows binaries have symbols available in vs code by setting your symbolsearchpath to add any other context about the problem here including log messages in your output window c cpp logginglevel debug in settings json
0
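The record above boils down to asking the cross-compiler itself which include directories it searches and then mirroring them into c_cpp_properties.json. As a rough illustration of that step (not part of the original issue; the compiler path below is a placeholder for your own toolchain binary), this Python sketch runs the same `-Wp,-v -E -xc++ /dev/null` invocation quoted in the report and extracts the reported search list:

```python
# Minimal sketch: query a GCC-style cross-compiler for its built-in include
# search list, as the issue above does by hand with
# `xtensa-esp32-elf-g++ -Wp,-v -E -xc++ /dev/null`.
from typing import List
import subprocess

COMPILER = "/path/to/xtensa-esp32-elf-g++"  # placeholder; adjust to your toolchain


def builtin_include_dirs(compiler: str) -> List[str]:
    # GCC prints the search list to stderr; stdout carries the preprocessed output.
    proc = subprocess.run(
        [compiler, "-Wp,-v", "-E", "-xc++", "/dev/null"],
        capture_output=True, text=True, check=False,
    )
    dirs, collecting = [], False
    for line in proc.stderr.splitlines():
        if line.startswith("#include <...> search starts here:"):
            collecting = True
            continue
        if line.startswith("End of search list."):
            break
        if collecting:
            dirs.append(line.strip())
    return dirs


if __name__ == "__main__":
    # Paste the printed paths into "includePath" in c_cpp_properties.json,
    # or rely on "compilerPath" as the issue assumes the extension already does.
    for d in builtin_include_dirs(COMPILER):
        print(d)
```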
1,957
2,580,090,986
IssuesEvent
2015-02-13 15:32:49
gabrielfedel/mapaculturacampinas
https://api.github.com/repos/gabrielfedel/mapaculturacampinas
closed
Mapas e roteiros - Text below the icon
design mediaup
Could this be more similar to the others? Specifically: 1. Could English and Portuguese be options like in the bar on the main page? ![screen shot 2014-12-03 at 10 24 24](https://cloud.githubusercontent.com/assets/3904166/5282707/9ece315c-7ad6-11e4-930d-34f2526911d4.png) 2. Remove the "border" 3. Place the title (taken directly from the "Título" field in Neatline) below the image. We will change them later. 4. Remove "Mapas e roteiros"
1.0
Mapas e roteiros - Text below the icon - Could this be more similar to the others? Specifically: 1. Could English and Portuguese be options like in the bar on the main page? ![screen shot 2014-12-03 at 10 24 24](https://cloud.githubusercontent.com/assets/3904166/5282707/9ece315c-7ad6-11e4-930d-34f2526911d4.png) 2. Remove the "border" 3. Place the title (taken directly from the "Título" field in Neatline) below the image. We will change them later. 4. Remove "Mapas e roteiros"
non_test
mapas e roteiros text below the icon could this be more similar to the others specifically could english and portuguese be options like in the bar on the main page remove border place the title taken directly from the título field in neatline below the image we will change them later remove mapas e roteiros
0
61,857
14,643,032,527
IssuesEvent
2020-12-25 14:13:56
fu1771695yongxie/freeCodeCamp
https://api.github.com/repos/fu1771695yongxie/freeCodeCamp
opened
CVE-2019-11358 (Medium) detected in jquery-3.2.1.min.js
security vulnerability
## CVE-2019-11358 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.2.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p> <p>Path to dependency file: freeCodeCamp/api-server/node_modules/superagent/index.html</p> <p>Path to vulnerable library: freeCodeCamp/api-server/node_modules/superagent/index.html,freeCodeCamp/tools/contributor/dashboard-app/server/node_modules/superagent/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-3.2.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/freeCodeCamp/commit/94f16dd247ad5d29a6c8a99c82d0c620274be868">94f16dd247ad5d29a6c8a99c82d0c620274be868</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype. <p>Publish Date: 2019-04-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p> <p>Release Date: 2019-04-20</p> <p>Fix Resolution: 3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-11358 (Medium) detected in jquery-3.2.1.min.js - ## CVE-2019-11358 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.2.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p> <p>Path to dependency file: freeCodeCamp/api-server/node_modules/superagent/index.html</p> <p>Path to vulnerable library: freeCodeCamp/api-server/node_modules/superagent/index.html,freeCodeCamp/tools/contributor/dashboard-app/server/node_modules/superagent/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-3.2.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/freeCodeCamp/commit/94f16dd247ad5d29a6c8a99c82d0c620274be868">94f16dd247ad5d29a6c8a99c82d0c620274be868</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype. <p>Publish Date: 2019-04-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p> <p>Release Date: 2019-04-20</p> <p>Fix Resolution: 3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file freecodecamp api server node modules superagent index html path to vulnerable library freecodecamp api server node modules superagent index html freecodecamp tools contributor dashboard app server node modules superagent index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
93,774
8,444,208,671
IssuesEvent
2018-10-18 17:46:07
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
Windows unit test for proc runner is failing
:Operations test test_infra
proc runner passes procs to a function: Error: Command failed: taskkill /pid 5308 /T /F ERROR: The process "5308" not found. at ChildProcess.exithandler (child_process.js:275:12) at maybeClose (internal/child_process.js:925:16) at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
2.0
Windows unit test for proc runner is failing - proc runner passes procs to a function: Error: Command failed: taskkill /pid 5308 /T /F ERROR: The process "5308" not found. at ChildProcess.exithandler (child_process.js:275:12) at maybeClose (internal/child_process.js:925:16) at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
test
windows unit test for proc runner is failing proc runner passes procs to a function error command failed taskkill pid t f error the process not found at childprocess exithandler child process js at maybeclose internal child process js at process childprocess handle onexit internal child process js
1
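The failure above is essentially a race: by the time `taskkill /pid <pid> /T /F` runs, the child process has already exited, so taskkill reports it as not found and the run fails on the non-zero exit code. A hedged, illustrative Python sketch of tolerating that race (Kibana's proc runner is JavaScript; this is not its code) could look like:

```python
# Illustrative only: kill a Windows process tree with taskkill, but treat the
# "process not found" message from the log above as a benign outcome.
import subprocess


def kill_tree(pid: int) -> None:
    proc = subprocess.run(
        ["taskkill", "/pid", str(pid), "/T", "/F"],
        capture_output=True, text=True,
    )
    if proc.returncode == 0:
        return
    message = (proc.stderr or "") + (proc.stdout or "")
    # The child already exited on its own; nothing left to kill.
    if "not found" in message:
        return
    raise RuntimeError(message.strip())
```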
248,234
21,004,060,031
IssuesEvent
2022-03-29 20:29:27
Chia-Network/chia-blockchain
https://api.github.com/repos/Chia-Network/chia-blockchain
opened
[Bug] Targeting testnet leads to DB corruption
bug Testnet Blockchain DB
### What happened? This happens on Windows, MacOS, possibly all other OSs. Targeting testnet results in a corrupted DB after the first initial successful run: `full_blocks` missing in the `in_main_chain` column, because DB gets created as a v1 DB How to replicate: Have a working Chia set up, but no testnet DBs present `chia stop all -d` (or close the GUI) Nuke your config.yaml `chia configure -t t` Start a chia full_node (either CLI or GUI in Farmer Mode), and everything will run seemingly fine `chia stop all -d` (or close the GUI) Try to start the full_node again (either CLI or GUI in Farmer Mode), and the full node will not start successfully ### Version 1.3.2.dev72 ### What platform are you using? Windows ### What ui mode are you using? CLI ### Relevant log output ```shell Daemon not started yet Starting daemon chia_harvester: started chia_farmer: started chia_full_node: started chia_wallet: started PS C:\Users\William\AppData\Local\chia-blockchain\app-1.3.272\resources\app.asar.unpacked\daemon> Traceback (most recent call last): File "Lib\site-packages\chia\server\start_full_node.py", line 65, in <module> File "Lib\site-packages\chia\server\start_full_node.py", line 60, in main File "chia\server\start_service.py", line 286, in run_service File "asyncio\runners.py", line 44, in run File "asyncio\base_events.py", line 642, in run_until_complete File "chia\server\start_service.py", line 280, in async_run_service File "chia\server\start_service.py", line 190, in run File "chia\server\start_service.py", line 152, in start File "chia\full_node\full_node.py", line 210, in _start File "chia\full_node\block_store.py", line 73, in create File "aiosqlite\core.py", line 184, in execute File "aiosqlite\core.py", line 129, in _execute File "aiosqlite\core.py", line 102, in run sqlite3.OperationalError: no such column: in_main_chain [12528] Failed to execute script 'start_full_node' due to unhandled exception! ----------- sqlite> .schema CREATE TABLE database_version(version int); CREATE TABLE full_blocks(header_hash text PRIMARY KEY, height bigint, is_block tinyint, is_fully_compactified tinyint, block blob); CREATE TABLE block_records(header_hash text PRIMARY KEY, prev_hash text, height bigint,block blob, sub_epoch_summary blob, is_peak tinyint, is_block tinyint); CREATE TABLE sub_epoch_segments_v3(ses_block_hash text PRIMARY KEY,challenge_segments blob); CREATE INDEX full_block_height on full_blocks(height); CREATE INDEX is_fully_compactified on full_blocks(is_fully_compactified); CREATE INDEX height on block_records(height); CREATE INDEX peak on block_records(is_peak); CREATE TABLE hints(id INTEGER PRIMARY KEY AUTOINCREMENT, coin_id blob, hint blob); CREATE TABLE sqlite_sequence(name,seq); CREATE INDEX hint_index on hints(hint); CREATE TABLE coin_record(coin_name text PRIMARY KEY, confirmed_index bigint, spent_index bigint, spent int, coinbase int, puzzle_hash text, coin_parent text, amount blob, timestamp bigint); CREATE INDEX coin_confirmed_index on coin_record(confirmed_index); CREATE INDEX coin_spent_index on coin_record(spent_index); CREATE INDEX coin_puzzle_hash on coin_record(puzzle_hash); CREATE INDEX coin_parent_index on coin_record(coin_parent); ```
1.0
[Bug] Targeting testnet leads to DB corruption - ### What happened? This happens on Windows, MacOS, possibly all other OSs. Targeting testnet results in a corrupted DB after the first initial successful run: `full_blocks` missing in the `in_main_chain` column, because DB gets created as a v1 DB How to replicate: Have a working Chia set up, but no testnet DBs present `chia stop all -d` (or close the GUI) Nuke your config.yaml `chia configure -t t` Start a chia full_node (either CLI or GUI in Farmer Mode), and everything will run seemingly fine `chia stop all -d` (or close the GUI) Try to start the full_node again (either CLI or GUI in Farmer Mode), and the full node will not start successfully ### Version 1.3.2.dev72 ### What platform are you using? Windows ### What ui mode are you using? CLI ### Relevant log output ```shell Daemon not started yet Starting daemon chia_harvester: started chia_farmer: started chia_full_node: started chia_wallet: started PS C:\Users\William\AppData\Local\chia-blockchain\app-1.3.272\resources\app.asar.unpacked\daemon> Traceback (most recent call last): File "Lib\site-packages\chia\server\start_full_node.py", line 65, in <module> File "Lib\site-packages\chia\server\start_full_node.py", line 60, in main File "chia\server\start_service.py", line 286, in run_service File "asyncio\runners.py", line 44, in run File "asyncio\base_events.py", line 642, in run_until_complete File "chia\server\start_service.py", line 280, in async_run_service File "chia\server\start_service.py", line 190, in run File "chia\server\start_service.py", line 152, in start File "chia\full_node\full_node.py", line 210, in _start File "chia\full_node\block_store.py", line 73, in create File "aiosqlite\core.py", line 184, in execute File "aiosqlite\core.py", line 129, in _execute File "aiosqlite\core.py", line 102, in run sqlite3.OperationalError: no such column: in_main_chain [12528] Failed to execute script 'start_full_node' due to unhandled exception! ----------- sqlite> .schema CREATE TABLE database_version(version int); CREATE TABLE full_blocks(header_hash text PRIMARY KEY, height bigint, is_block tinyint, is_fully_compactified tinyint, block blob); CREATE TABLE block_records(header_hash text PRIMARY KEY, prev_hash text, height bigint,block blob, sub_epoch_summary blob, is_peak tinyint, is_block tinyint); CREATE TABLE sub_epoch_segments_v3(ses_block_hash text PRIMARY KEY,challenge_segments blob); CREATE INDEX full_block_height on full_blocks(height); CREATE INDEX is_fully_compactified on full_blocks(is_fully_compactified); CREATE INDEX height on block_records(height); CREATE INDEX peak on block_records(is_peak); CREATE TABLE hints(id INTEGER PRIMARY KEY AUTOINCREMENT, coin_id blob, hint blob); CREATE TABLE sqlite_sequence(name,seq); CREATE INDEX hint_index on hints(hint); CREATE TABLE coin_record(coin_name text PRIMARY KEY, confirmed_index bigint, spent_index bigint, spent int, coinbase int, puzzle_hash text, coin_parent text, amount blob, timestamp bigint); CREATE INDEX coin_confirmed_index on coin_record(confirmed_index); CREATE INDEX coin_spent_index on coin_record(spent_index); CREATE INDEX coin_puzzle_hash on coin_record(puzzle_hash); CREATE INDEX coin_parent_index on coin_record(coin_parent); ```
test
targeting testnet leads to db corruption what happened this happens on windows macos possibly all other oss targeting testnet results in a corrupted db after the first initial successful run full blocks missing in the in main chain column because db gets created as a db how to replicate have a working chia set up but no testnet dbs present chia stop all d or close the gui nuke your config yaml chia configure t t start a chia full node either cli or gui in farmer mode and everything will run seemingly fine chia stop all d or close the gui try to start the full node again either cli or gui in farmer mode and the full node will not start successfully version what platform are you using windows what ui mode are you using cli relevant log output shell daemon not started yet starting daemon chia harvester started chia farmer started chia full node started chia wallet started ps c users william appdata local chia blockchain app resources app asar unpacked daemon traceback most recent call last file lib site packages chia server start full node py line in file lib site packages chia server start full node py line in main file chia server start service py line in run service file asyncio runners py line in run file asyncio base events py line in run until complete file chia server start service py line in async run service file chia server start service py line in run file chia server start service py line in start file chia full node full node py line in start file chia full node block store py line in create file aiosqlite core py line in execute file aiosqlite core py line in execute file aiosqlite core py line in run operationalerror no such column in main chain failed to execute script start full node due to unhandled exception sqlite schema create table database version version int create table full blocks header hash text primary key height bigint is block tinyint is fully compactified tinyint block blob create table block records header hash text primary key prev hash text height bigint block blob sub epoch summary blob is peak tinyint is block tinyint create table sub epoch segments ses block hash text primary key challenge segments blob create index full block height on full blocks height create index is fully compactified on full blocks is fully compactified create index height on block records height create index peak on block records is peak create table hints id integer primary key autoincrement coin id blob hint blob create table sqlite sequence name seq create index hint index on hints hint create table coin record coin name text primary key confirmed index bigint spent index bigint spent int coinbase int puzzle hash text coin parent text amount blob timestamp bigint create index coin confirmed index on coin record confirmed index create index coin spent index on coin record spent index create index coin puzzle hash on coin record puzzle hash create index coin parent index on coin record coin parent
1
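The traceback above amounts to the v2 full node querying a column (`in_main_chain` on `full_blocks`) that the accidentally created v1 schema never had. As a purely illustrative check (not Chia's actual code; the database filename below is a placeholder), one could probe the schema before starting the node:

```python
# Illustrative sketch: detect whether a SQLite blockchain DB already has the
# v2 `in_main_chain` column on `full_blocks`, the column the traceback above
# reports as missing.
import sqlite3


def has_column(db_path: str, table: str, column: str) -> bool:
    with sqlite3.connect(db_path) as conn:
        # PRAGMA table_info returns one row per column; index 1 is the name.
        cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols


if __name__ == "__main__":
    # "blockchain_v1_testnet.sqlite" is a placeholder filename.
    if not has_column("blockchain_v1_testnet.sqlite", "full_blocks", "in_main_chain"):
        print("v1-style schema detected; the v2 full node will fail to start")
```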
60,852
3,134,953,109
IssuesEvent
2015-09-10 13:11:03
rollerworks/RollerworksSearchBundle
https://api.github.com/repos/rollerworks/RollerworksSearchBundle
closed
Resolve hard dependency on the rollerworks/search-jms-metadata package
Critical priority
The `rollerworks/search-jms-metadata` is currently only installed as 'optional dependency' which goes against the Package design principles! But we can't install it always as the `jms/metadata` is released under the Apache2 license which is incompatible with GNU GPL. And we don't want to cause any license issues for others. So we have two options here: 1. Ask the maintainer(s) of the `jms/metadata` package to switch to the MIT license. 2. If the first option fails, look if the https://github.com/fsi-open/metadata provides what we need. Not installing a metadata loader is no option as this will leave the bundle crippled by design.
1.0
Resolve hard dependency on the rollerworks/search-jms-metadata package - The `rollerworks/search-jms-metadata` is currently only installed as 'optional dependency' which goes against the Package design principles! But we can't install it always as the `jms/metadata` is released under the Apache2 license which is incompatible with GNU GPL. And we don't want to cause any license issues for others. So we have two options here: 1. Ask the maintainer(s) of the `jms/metadata` package to switch to the MIT license. 2. If the first option fails, look if the https://github.com/fsi-open/metadata provides what we need. Not installing a metadata loader is no option as this will leave the bundle crippled by design.
non_test
resolve hard dependency on the rollerworks search jms metadata package the rollerworks search jms metadata is currently only installed as optional dependency which goes against the package design principles but we can t install it always as the jms metadata is released under the license which is incompatible with gnu gpl and we don t want to cause any license issues for others so we have two options here ask the maintainer s of the jms metadata package to switch to the mit license if the first option fails look if the provides what we need not installing a metadata loader is no option as this will leave the bundle crippled by design
0
275,169
20,910,282,256
IssuesEvent
2022-03-24 08:39:49
byzer-org/byzer-lang
https://api.github.com/repos/byzer-org/byzer-lang
closed
[Feature] Byzer should support AES encryption when using sensitive string in Byzer Script
Difficulty:Middle Documentation
## Background Currently, if we want to use a sensitive string like a password in Byzer Script, what we can do is just use the password as plain text in byzer script, which is not secure and mostly does not satisfy the IT policy within the company. So **Byzer should support the encode and decode sensitive string when scripting** ## The scenario example: If we want to send an email, the code like below ```sql -- send email run${CONTENT} as SendMultiMails.`` where mailType = "config" and attachmentType = "text/csv" and from = "${EMAIL_FROM}" and to = "${EMAIL_TO}" and cc = "${EMAIL_CC}" and smtpHost = "${HOST}" and smtpPort = "${PORT}" and `properties.mail.smtp.starttls.enable`= "true" and `properties.mail.smtp.ssl.protocols`="TLSv1.2" and userName = "${USERNAME}" and password="${PWD}"; ``` if the value of the sender's password is `123456`, here we need to fill the password as plain text `123456`. ## The proposal 1. Provide aes encode/encode macro function ``` !aes_encode "123456"; ``` this will return an encrypted string of "123456", we refer to it as "xxxxxx" ``` !aes_decode "xxxxxx"; ``` this will return a decrypted string of "xxxxxx", it should return "123456" 2. provide aes encode/encode udf function ```sql select aes_encode("123456") as t1; ``` this will return an encrypted string of "123456", we refer to it as "xxxxxx" ```sql select aes_decode("xxxxxx") as t2; ``` this will return a decrypted string of "xxxxxx", it should return "123456" ## Solution for the scenario above 1. First, the user can run ``` !aes_encode "123456"; ``` or ``` select aes_encode("123456") as t1; ``` to get the encrypted value of password,we refer to it as ”xxxxxx“ 2. set the variable `PWD` in byzer script as below ``` set PWD = `select aes_decode("xxxxxx")` where type="sql"; ``` 3. send email ```sql -- send email run${CONTENT} as SendMultiMails.`` where mailType = "config" and attachmentType = "text/csv" and from = "${EMAIL_FROM}" and to = "${EMAIL_TO}" and cc = "${EMAIL_CC}" and smtpHost = "${HOST}" and smtpPort = "${PORT}" and `properties.mail.smtp.starttls.enable`= "true" and `properties.mail.smtp.ssl.protocols`="TLSv1.2" and userName = "${USERNAME}" and password="${PWD}"; ``` This solution will keep the encrpted password in the byzer script instead of plain text
1.0
[Feature] Byzer should support AES encryption when using sensitive string in Byzer Script - ## Background Currently, if we want to use a sensitive string like a password in Byzer Script, what we can do is just use the password as plain text in byzer script, which is not secure and mostly does not satisfy the IT policy within the company. So **Byzer should support the encode and decode sensitive string when scripting** ## The scenario example: If we want to send an email, the code like below ```sql -- send email run${CONTENT} as SendMultiMails.`` where mailType = "config" and attachmentType = "text/csv" and from = "${EMAIL_FROM}" and to = "${EMAIL_TO}" and cc = "${EMAIL_CC}" and smtpHost = "${HOST}" and smtpPort = "${PORT}" and `properties.mail.smtp.starttls.enable`= "true" and `properties.mail.smtp.ssl.protocols`="TLSv1.2" and userName = "${USERNAME}" and password="${PWD}"; ``` if the value of the sender's password is `123456`, here we need to fill the password as plain text `123456`. ## The proposal 1. Provide aes encode/encode macro function ``` !aes_encode "123456"; ``` this will return an encrypted string of "123456", we refer to it as "xxxxxx" ``` !aes_decode "xxxxxx"; ``` this will return a decrypted string of "xxxxxx", it should return "123456" 2. provide aes encode/encode udf function ```sql select aes_encode("123456") as t1; ``` this will return an encrypted string of "123456", we refer to it as "xxxxxx" ```sql select aes_decode("xxxxxx") as t2; ``` this will return a decrypted string of "xxxxxx", it should return "123456" ## Solution for the scenario above 1. First, the user can run ``` !aes_encode "123456"; ``` or ``` select aes_encode("123456") as t1; ``` to get the encrypted value of password,we refer to it as ”xxxxxx“ 2. set the variable `PWD` in byzer script as below ``` set PWD = `select aes_decode("xxxxxx")` where type="sql"; ``` 3. send email ```sql -- send email run${CONTENT} as SendMultiMails.`` where mailType = "config" and attachmentType = "text/csv" and from = "${EMAIL_FROM}" and to = "${EMAIL_TO}" and cc = "${EMAIL_CC}" and smtpHost = "${HOST}" and smtpPort = "${PORT}" and `properties.mail.smtp.starttls.enable`= "true" and `properties.mail.smtp.ssl.protocols`="TLSv1.2" and userName = "${USERNAME}" and password="${PWD}"; ``` This solution will keep the encrpted password in the byzer script instead of plain text
non_test
byzer should support aes encryption when using sensitive string in byzer script background currently if we want to use a sensitive string like a password in byzer script what we can do is just use the password as plain text in byzer script which is not secure and mostly does not satisfy the it policy within the company so byzer should support the encode and decode sensitive string when scripting the scenario example if we want to send an email the code like below sql send email run content as sendmultimails where mailtype config and attachmenttype text csv and from email from and to email to and cc email cc and smtphost host and smtpport port and properties mail smtp starttls enable true and properties mail smtp ssl protocols and username username and password pwd if the value of the sender s password is here we need to fill the password as plain text the proposal provide aes encode encode macro function aes encode this will return an encrypted string of we refer to it as xxxxxx aes decode xxxxxx this will return a decrypted string of xxxxxx it should return provide aes encode encode udf function sql select aes encode as this will return an encrypted string of we refer to it as xxxxxx sql select aes decode xxxxxx as this will return a decrypted string of xxxxxx it should return solution for the scenario above first the user can run aes encode or select aes encode as to get the encrypted value of password,we refer to it as ”xxxxxx“ set the variable pwd in byzer script as below set pwd select aes decode xxxxxx where type sql send email sql send email run content as sendmultimails where mailtype config and attachmenttype text csv and from email from and to email to and cc email cc and smtphost host and smtpport port and properties mail smtp starttls enable true and properties mail smtp ssl protocols and username username and password pwd this solution will keep the encrpted password in the byzer script instead of plain text
0
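To make the proposed `!aes_encode` / `!aes_decode` flow concrete, here is a minimal Python sketch of the same round-trip idea using the `cryptography` package's Fernet recipe (AES-CBC plus HMAC). This is only a stand-in for whatever cipher and key management Byzer would actually ship, and the key handling shown is deliberately simplistic:

```python
# Conceptual sketch, not the Byzer implementation: round-trip a password
# through AES-based symmetric encryption so only the ciphertext needs to
# appear in a script.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice the key would be managed centrally
f = Fernet(key)

ciphertext = f.encrypt(b"123456")  # analogous to: !aes_encode "123456"
print(ciphertext.decode())         # safe to embed in a script as "xxxxxx"

plaintext = f.decrypt(ciphertext)  # analogous to: !aes_decode "xxxxxx"
assert plaintext == b"123456"
```

The important property is the one the issue asks for: only the ciphertext ever appears in the script, while decryption happens at run time.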
274,834
23,871,375,727
IssuesEvent
2022-09-07 15:05:09
mozilla-mobile/focus-android
https://api.github.com/repos/mozilla-mobile/focus-android
closed
Permanent UI test failure - logs inaccessible, eternal Firebase throbber - affects all mobile repositories
eng:ui-test eng:intermittent-test wontfix
The last 2 ui-test-x86 task failures have inaccessible Firebase log links. The Firebase page opens and shows a throbber but never completes loading the content. ### Firebase Test Run: * https://treeherder.mozilla.org/logviewer?job_id=369859259&repo=focus-android * https://treeherder.mozilla.org/logviewer?job_id=370210843&repo=focus-android * Firebase link e.g. https://console.firebase.google.com/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/4763911342473929757 Older, previously accessible Firebase runs also have become inaccessible, e.g. https://console.firebase.google.com/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/6440687595986032278
2.0
Permanent UI test failure - logs inaccessible, eternal Firebase throbber - affects all mobile repositories - The last 2 ui-test-x86 task failures have inaccessible Firebase log links. The Firebase page opens and shows a throbber but never completes loading the content. ### Firebase Test Run: * https://treeherder.mozilla.org/logviewer?job_id=369859259&repo=focus-android * https://treeherder.mozilla.org/logviewer?job_id=370210843&repo=focus-android * Firebase link e.g. https://console.firebase.google.com/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/4763911342473929757 Older, previously accessible Firebase runs also have become inaccessible, e.g. https://console.firebase.google.com/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/6440687595986032278
test
permanent ui test failure logs inaccessible eternal firebase throbber affects all mobile repositories the last ui test task failures have inaccessible firebase log links the firebase page opens and shows a throbber but never completes loading the content firebase test run firebase link e g older previously accessible firebase runs also have become inaccessible e g
1
20,263
6,835,645,760
IssuesEvent
2017-11-10 02:24:01
matplotlib/matplotlib
https://api.github.com/repos/matplotlib/matplotlib
opened
setupext should not explicitly add /usr/{,local/}include to the include path
build
<!--To help us understand and resolve your issue, please fill out the form to the best of your ability.--> <!--You can feel free to delete the sections that do not apply.--> ### Bug report **Bug summary** Currently, on Linux, setupext.py explicitly adds /usr/{,local/}include to the search path (because it is listed in get_base_dirs). This can be confirmed by checking e.g. the output of `python setup.py build`. This makes it impossible to compile matplotlib using a non-system compiler (e.g., anaconda now provides gcc7.2, see https://www.anaconda.com/blog/developer-blog/utilizing-the-new-compilers-in-anaconda-distribution-5/) that comes with its own headers (in /path/to/env/lib/gcc/x86_64-conda_cos6-linux-gnu/7.2.0/include) which may be incompatible with the system headers. Note that conda's gcc is configured to indeed look into its own headers directory instead of /usr/include. In the specific case of gcc7.2.0 from anaconda, explicitly adding /usr/include causes it to use /usr/include/math.h instead of its own math.h; Ubuntu's /usr/include/math.h is incompatible because it tries to include bits/mathdef.h and this file is under /usr/lib/gcc/x86_64-linux-gnu/5/include, which is part of the default path of the *system* gcc but not the conda gcc. We currently explicitly look up these default paths to check that the include files are actually present, as a fallback in case they exist but no pkg-config information is present. I propose instead to require pkg-config information to be present (as a check that the package is indeed installed) and not try to manually look for the headers. This would cause a build failure if there is some linux distro that includes the headers for our dependencies *without* including the pkg-config info, which I am happy to claim to be a bad idea. This would also mean that if one is using a non-system compiler (e.g., a conda compiler), then one would also need to install freetype/libpng/zlib in a place this compiler knows about (e.g., via a conda package) rather than relying on the system version. Here too I think this is a reasonable restriction (if you know to set up your own non-system compiler you should know how to set up the dependencies properly too...). **Code for reproduction** ``` $ conda create -n test -c conda-forge numpy freetype && source activate test && conda install -yc anaconda gxx_linux-64 $ <from matplotlib source> python setup.py build ``` **Actual outcome** ``` /home/alee/miniconda3/envs/test/bin/x86_64-conda_cos6-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -DFREETYPE_BUILD_TYPE=system -DPY_ARRAY_UNIQUE_SYMBOL=MPL_matplotlib_ft2font_ARRAY_API -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -D__STDC_FORMAT_MACROS=1 -I/home/alee/miniconda3/envs/test/lib/python3.6/site-packages/numpy/core/include -I/home/alee/miniconda3/envs/test/include/freetype2 -I/usr/local/include -I/usr/include -I. -I/home/alee/miniconda3/envs/test/include/python3.6m -c src/ft2font.cpp -o build/temp.linux-x86_64-3.6/src/ft2font.o In file included from /home/alee/miniconda3/envs/test/include/python3.6m/pyport.h:194:0, from /home/alee/miniconda3/envs/test/include/python3.6m/Python.h:50, from src/mplutils.h:31, from src/ft2font.cpp:9: /usr/include/math.h:31:10: fatal error: bits/math-vector.h: No such file or directory #include <bits/math-vector.h> ^~~~~~~~~~~~~~~~~~~~ compilation terminated. 
error: command '/home/alee/miniconda3/envs/test/bin/x86_64-conda_cos6-linux-gnu-cc' failed with exit status 1 ``` **Expected outcome** Successful compilation. In the example above, this can be achieved by patching setupext's get_base_dirs to return an empty list -- this works because pkg-config info is available for all packages (so we don't do any manual header lookup ourselves), giving information to gcc about where to find the headers. **Matplotlib version** <!--Please specify your platform and versions of the relevant libraries you are using:--> * Operating system: Ubuntu * Matplotlib version: master * Matplotlib backend (`print(matplotlib.get_backend())`): N/A * Python version: 3.6 * Jupyter version (if applicable): N/A * Other libraries: N/A <!--Please tell us how you installed matplotlib and python e.g., from source, pip, conda--> <!--If you installed from conda, please specify which channel you used if not the default-->
1.0
setupext should not explicitly add /usr/{,local/}include to the include path - <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.--> <!--You can feel free to delete the sections that do not apply.--> ### Bug report **Bug summary** Currently, on Linux, setupext.py explicitly adds /usr/{,local/}include to the search path (because it is listed in get_base_dirs). This can be confirmed by checking e.g. the output of `python setup.py build`. This makes it impossible to compile matplotlib using a non-system compiler (e.g., anaconda now provides gcc7.2, see https://www.anaconda.com/blog/developer-blog/utilizing-the-new-compilers-in-anaconda-distribution-5/) that comes with its own headers (in /path/to/env/lib/gcc/x86_64-conda_cos6-linux-gnu/7.2.0/include) which may be incompatible with the system headers. Note that conda's gcc is configured to indeed look into its own headers directory instead of /usr/include. In the specific case of gcc7.2.0 from anaconda, explicitly adding /usr/include causes it to use /usr/include/math.h instead of its own math.h; Ubuntu's /usr/include/math.h is incompatible because it tries to include bits/mathdef.h and this file is under /usr/lib/gcc/x86_64-linux-gnu/5/include, which is part of the default path of the *system* gcc but not the conda gcc. We currently explicitly look up these default paths to check that the include files are actually present, as a fallback in case they exist but no pkg-config information is present. I propose instead to require pkg-config information to be present (as a check that the package is indeed installed) and not try to manually look for the headers. This would cause a build failure if there is some linux distro that includes the headers for our dependencies *without* including the pkg-config info, which I am happy to claim to be a bad idea. This would also mean that if one is using a non-system compiler (e.g., a conda compiler), then one would also need to install freetype/libpng/zlib in a place this compiler knows about (e.g., via a conda package) rather than relying on the system version. Here too I think this is a reasonable restriction (if you know to set up your own non-system compiler you should know how to set up the dependencies properly too...). **Code for reproduction** ``` $ conda create -n test -c conda-forge numpy freetype && source activate test && conda install -yc anaconda gxx_linux-64 $ <from matplotlib source> python setup.py build ``` **Actual outcome** ``` /home/alee/miniconda3/envs/test/bin/x86_64-conda_cos6-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -DFREETYPE_BUILD_TYPE=system -DPY_ARRAY_UNIQUE_SYMBOL=MPL_matplotlib_ft2font_ARRAY_API -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -D__STDC_FORMAT_MACROS=1 -I/home/alee/miniconda3/envs/test/lib/python3.6/site-packages/numpy/core/include -I/home/alee/miniconda3/envs/test/include/freetype2 -I/usr/local/include -I/usr/include -I. -I/home/alee/miniconda3/envs/test/include/python3.6m -c src/ft2font.cpp -o build/temp.linux-x86_64-3.6/src/ft2font.o In file included from /home/alee/miniconda3/envs/test/include/python3.6m/pyport.h:194:0, from /home/alee/miniconda3/envs/test/include/python3.6m/Python.h:50, from src/mplutils.h:31, from src/ft2font.cpp:9: /usr/include/math.h:31:10: fatal error: bits/math-vector.h: No such file or directory #include <bits/math-vector.h> ^~~~~~~~~~~~~~~~~~~~ compilation terminated. 
error: command '/home/alee/miniconda3/envs/test/bin/x86_64-conda_cos6-linux-gnu-cc' failed with exit status 1 ``` **Expected outcome** Successful compilation. In the example above, this can be achieved by patching setupext's get_base_dirs to return an empty list -- this works because pkg-config info is available for all packages (so we don't do any manual header lookup ourselves), giving information to gcc about where to find the headers. **Matplotlib version** <!--Please specify your platform and versions of the relevant libraries you are using:--> * Operating system: Ubuntu * Matplotlib version: master * Matplotlib backend (`print(matplotlib.get_backend())`): N/A * Python version: 3.6 * Jupyter version (if applicable): N/A * Other libraries: N/A <!--Please tell us how you installed matplotlib and python e.g., from source, pip, conda--> <!--If you installed from conda, please specify which channel you used if not the default-->
non_test
setupext should not explicitly add usr local include to the include path bug report bug summary currently on linux setupext py explicitly adds usr local include to the search path because it is listed in get base dirs this can be confirmed by checking e g the output of python setup py build this makes it impossible to compile matplotlib using a non system compiler e g anaconda now provides see that comes with its own headers in path to env lib gcc conda linux gnu include which may be incompatible with the system headers note that conda s gcc is configured to indeed look into its own headers directory instead of usr include in the specific case of from anaconda explicitly adding usr include causes it to use usr include math h instead of its own math h ubuntu s usr include math h is incompatible because it tries to include bits mathdef h and this file is under usr lib gcc linux gnu include which is part of the default path of the system gcc but not the conda gcc we currently explicitly look up these default paths to check that the include files are actually present as a fallback in case they exist but no pkg config information is present i propose instead to require pkg config information to be present as a check that the package is indeed installed and not try to manually look for the headers this would cause a build failure if there is some linux distro that includes the headers for our dependencies without including the pkg config info which i am happy to claim to be a bad idea this would also mean that if one is using a non system compiler e g a conda compiler then one would also need to install freetype libpng zlib in a place this compiler knows about e g via a conda package rather than relying on the system version here too i think this is a reasonable restriction if you know to set up your own non system compiler you should know how to set up the dependencies properly too code for reproduction conda create n test c conda forge numpy freetype source activate test conda install yc anaconda gxx linux python setup py build actual outcome home alee envs test bin conda linux gnu cc wno unused result wsign compare dndebug g fwrapv wall fpic dfreetype build type system dpy array unique symbol mpl matplotlib array api dnpy no deprecated api npy api version d stdc format macros i home alee envs test lib site packages numpy core include i home alee envs test include i usr local include i usr include i i home alee envs test include c src cpp o build temp linux src o in file included from home alee envs test include pyport h from home alee envs test include python h from src mplutils h from src cpp usr include math h fatal error bits math vector h no such file or directory include compilation terminated error command home alee envs test bin conda linux gnu cc failed with exit status expected outcome successful compilation in the example above this can be achieved by patching setupext s get base dirs to return an empty list this works because pkg config info is available for all packages so we don t do any manual header lookup ourselves giving information to gcc about where to find the headers matplotlib version operating system ubuntu matplotlib version master matplotlib backend print matplotlib get backend n a python version jupyter version if applicable n a other libraries n a
0
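The matplotlib report in the record above proposes relying on pkg-config metadata instead of hard-coding /usr/include and /usr/local/include. The sketch below shows roughly what such a lookup can look like; it is an illustration only, not matplotlib's actual setupext code, and it assumes that `pkg-config` and the `freetype2` package metadata are installed on the build machine.

```python
import shlex
import subprocess

def pkg_config_include_dirs(package: str) -> list[str]:
    """Return the include directories that pkg-config reports for `package`.

    Raises CalledProcessError when the package has no pkg-config metadata,
    which is the "fail instead of guessing /usr/include" behaviour the
    report above argues for.
    """
    out = subprocess.run(
        ["pkg-config", "--cflags-only-I", package],
        check=True, capture_output=True, text=True,
    ).stdout
    # Flags look like "-I/usr/include/freetype2"; strip the -I prefix.
    return [flag[2:] for flag in shlex.split(out) if flag.startswith("-I")]

if __name__ == "__main__":
    # Hypothetical usage: feed these directories to the compiler invocation
    # instead of appending /usr/include and /usr/local/include unconditionally.
    print(pkg_config_include_dirs("freetype2"))
```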
58,490
6,599,986,833
IssuesEvent
2017-09-17 04:49:36
wereturtle/ghostwriter
https://api.github.com/repos/wereturtle/ghostwriter
closed
v1.5.0 still has 2016 copyright
enhancement in test
This may sound like a small detail, but I can be a little OCD about this stuff. I just installed v.1.5.0 and the about page says "Copyright 2014 - 2016 wereturtle"
1.0
v1.5.0 still has 2016 copyright - This may sound like a small detail, but I can be a little OCD about this stuff. I just installed v.1.5.0 and the about page says "Copyright 2014 - 2016 wereturtle"
test
still has copyright this may sound like a small detail but i can be a little ocd about this stuff i just installed v and the about page says copyright wereturtle
1
106,661
9,178,855,152
IssuesEvent
2019-03-05 00:41:31
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: kv95/enc=false/nodes=3 failed
C-test-failure O-roachtest O-robot
SHA: https://github.com/cockroachdb/cockroach/commits/f53d12936efd36ea51eab6f191725d1dca2ceff3 Parameters: To repro, try: ``` # Don't forget to check out a clean suitable branch and experiment with the # stress invocation until the desired results present themselves. For example, # using stress instead of stressrace and passing the '-p' stressflag which # controls concurrency. ./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh cd ~/go/src/github.com/cockroachdb/cockroach && \ stdbuf -oL -eL \ make stressrace TESTS=kv95/enc=false/nodes=3 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log ``` Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1161677&tab=buildLog ``` The test failed on release-2.1: cluster.go:1244,kv.go:46,kv.go:121,test.go:1214: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1161677-kv95-enc-false-nodes-3:1-3 -- killlall -ABRT cockroach returned: stderr: Error: exit status 127 stdout: teamcity-1161677-kv95-enc-false-nodes-3: killlall -ABRT cockroach 1: bash: killlall: command not found exit status 127 2: bash: killlall: command not found exit status 127 3: bash: killlall: command not found exit status 127 : exit status 1 ```
2.0
roachtest: kv95/enc=false/nodes=3 failed - SHA: https://github.com/cockroachdb/cockroach/commits/f53d12936efd36ea51eab6f191725d1dca2ceff3 Parameters: To repro, try: ``` # Don't forget to check out a clean suitable branch and experiment with the # stress invocation until the desired results present themselves. For example, # using stress instead of stressrace and passing the '-p' stressflag which # controls concurrency. ./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh cd ~/go/src/github.com/cockroachdb/cockroach && \ stdbuf -oL -eL \ make stressrace TESTS=kv95/enc=false/nodes=3 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log ``` Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1161677&tab=buildLog ``` The test failed on release-2.1: cluster.go:1244,kv.go:46,kv.go:121,test.go:1214: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1161677-kv95-enc-false-nodes-3:1-3 -- killlall -ABRT cockroach returned: stderr: Error: exit status 127 stdout: teamcity-1161677-kv95-enc-false-nodes-3: killlall -ABRT cockroach 1: bash: killlall: command not found exit status 127 2: bash: killlall: command not found exit status 127 3: bash: killlall: command not found exit status 127 : exit status 1 ```
test
roachtest enc false nodes failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests enc false nodes pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on release cluster go kv go kv go test go home agent work go src github com cockroachdb cockroach bin roachprod run teamcity enc false nodes killlall abrt cockroach returned stderr error exit status stdout teamcity enc false nodes killlall abrt cockroach bash killlall command not found exit status bash killlall command not found exit status bash killlall command not found exit status exit status
1
229,157
18,286,641,046
IssuesEvent
2021-10-05 11:02:57
DILCISBoard/eark-ip-test-corpus
https://api.github.com/repos/DILCISBoard/eark-ip-test-corpus
closed
CSIP19 Test Case Description
question test case ready
**Specification:** - **Name:** EARK CSIP - **Version:** 2.0-DRAFT - **URL:** http://earkcsip.dilcis.eu/ **Requirement:** - **Id:** CSIP19 - **Link:** http://earkcsip.dilcis.eu/#CSIP19 **Error Level:** ERROR **Description:** Test of the creation date (attribute created for element dmdSec) is present. Optional according to XML Schema, mandatory according to CSIP. No requirement for the date given, it therefore can be much older than the package itself, og be created in the future! Dependent on CSIP18, which is mandatory for a valid XML file according to XML Schema and to CSIP.
1.0
CSIP19 Test Case Description - **Specification:** - **Name:** EARK CSIP - **Version:** 2.0-DRAFT - **URL:** http://earkcsip.dilcis.eu/ **Requirement:** - **Id:** CSIP19 - **Link:** http://earkcsip.dilcis.eu/#CSIP19 **Error Level:** ERROR **Description:** Test of the creation date (attribute created for element dmdSec) is present. Optional according to XML Schema, mandatory according to CSIP. No requirement for the date given, it therefore can be much older than the package itself, og be created in the future! Dependent on CSIP18, which is mandatory for a valid XML file according to XML Schema and to CSIP.
test
test case description specification name eark csip version draft url requirement id link error level error description test of the creation date attribute created for element dmdsec is present optional according to xml schema mandatory according to csip no requirement for the date given it therefore can be much older than the package itself og be created in the future dependent on which is mandatory for a valid xml file according to xml schema and to csip
1
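The CSIP19 test case in the record above checks that every dmdSec element carries a creation date. Purely to illustrate that check, here is a small sketch using Python's standard library; the METS namespace URI and the uppercase CREATED attribute name are assumptions drawn from the METS schema, and the file name is hypothetical.

```python
import xml.etree.ElementTree as ET

METS_NS = {"mets": "http://www.loc.gov/METS/"}  # assumed METS namespace URI

def dmdsec_missing_created(mets_path: str) -> list[str]:
    """Return the IDs of dmdSec elements that lack a CREATED attribute."""
    root = ET.parse(mets_path).getroot()
    missing = []
    for dmd in root.findall(".//mets:dmdSec", METS_NS):
        if "CREATED" not in dmd.attrib:
            missing.append(dmd.get("ID", "<no ID>"))
    return missing

if __name__ == "__main__":
    # Hypothetical package path; CSIP19 reports an ERROR for every ID printed.
    print(dmdsec_missing_created("METS.xml"))
```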
106,920
13,397,893,345
IssuesEvent
2020-09-03 12:21:37
Rob1200489/AzzyBattlesTheDarkness
https://api.github.com/repos/Rob1200489/AzzyBattlesTheDarkness
closed
Waterfall grapple can be accessed from ground
Design issue Moderate
The distance between the ground and grapple hook by the waterfall is not large enough to stop the player from bypassing platforming challenge. There is also not enough blocking the grapple hook from the ground.
1.0
Waterfall grapple can be accessed from ground - The distance between the ground and grapple hook by the waterfall is not large enough to stop the player from bypassing platforming challenge. There is also not enough blocking the grapple hook from the ground.
non_test
waterfall grapple can be accessed from ground the distance between the ground and grapple hook by the waterfall is not large enough to stop the player from bypassing platforming challenge there is also not enough blocking the grapple hook from the ground
0
422,394
12,277,785,882
IssuesEvent
2020-05-08 08:38:08
OneZoom/OZtree
https://api.github.com/repos/OneZoom/OZtree
closed
Minor CSS bug on mobile
outsource priority
When the user clicks on the search dialogue from the front page, it opens up and expands off the side of the main page (on mobile use with small phones only). See attached screenshot. ![IMG_6026](https://user-images.githubusercontent.com/3009279/80308510-628ef880-87c7-11ea-95ab-8b83ebce66cd.PNG)
1.0
Minor CSS bug on mobile - When the user clicks on the search dialogue from the front page, it opens up and expands off the side of the main page (on mobile use with small phones only). See attached screenshot. ![IMG_6026](https://user-images.githubusercontent.com/3009279/80308510-628ef880-87c7-11ea-95ab-8b83ebce66cd.PNG)
non_test
minor css bug on mobile when the user clicks on the search dialogue from the front page it opens up and expands off the side of the main page on mobile use with small phones only see attached screenshot
0
352,264
32,055,846,792
IssuesEvent
2023-09-24 04:10:36
CollinHeist/TitleCardMaker-Blueprints
https://api.github.com/repos/CollinHeist/TitleCardMaker-Blueprints
closed
[Blueprint] Demon Slayer: Kimetsu no Yaiba
blueprint created passed-tests
### Series Name Demon Slayer: Kimetsu no Yaiba ### Series Year 2019 ### Creator Username _No response_ ### Blueprint Description Anime card with arc season titles, Japanese Kanji, and the custom Series Font. ### Blueprint ```json { "series": { "font_id": 0, "card_type": "anime", "season_title_ranges": [ "1", "2", "3", "4" ], "season_title_values": [ "Finding My Life's Purpose", "Mugen Train", "Entertainment District", "Swordsmith Village" ], "extra_keys": [ "kanji_vertical_shift" ], "extra_values": [ "-40" ], "template_ids": [], "translations": [ { "language_code": "ja", "data_key": "kanji" } ] }, "episodes": {}, "templates": [], "fonts": [ { "name": "Demon Slayer", "delete_missing": true, "file": "Blood Crow Condensed.ttf", "interline_spacing": -20, "size": 1.2, "title_case": "source" } ], "preview": "preview.jpg" } ``` ### Preview Title Card ![preview](https://github.com/CollinHeist/TitleCardMaker-Blueprints/assets/17693271/49a16ef4-5e6a-47cc-b8da-b5627cf6b538) ### Zip of Font Files [fonts.zip](https://github.com/CollinHeist/TitleCardMaker-Blueprints/files/12707741/fonts.zip)
1.0
[Blueprint] Demon Slayer: Kimetsu no Yaiba - ### Series Name Demon Slayer: Kimetsu no Yaiba ### Series Year 2019 ### Creator Username _No response_ ### Blueprint Description Anime card with arc season titles, Japanese Kanji, and the custom Series Font. ### Blueprint ```json { "series": { "font_id": 0, "card_type": "anime", "season_title_ranges": [ "1", "2", "3", "4" ], "season_title_values": [ "Finding My Life's Purpose", "Mugen Train", "Entertainment District", "Swordsmith Village" ], "extra_keys": [ "kanji_vertical_shift" ], "extra_values": [ "-40" ], "template_ids": [], "translations": [ { "language_code": "ja", "data_key": "kanji" } ] }, "episodes": {}, "templates": [], "fonts": [ { "name": "Demon Slayer", "delete_missing": true, "file": "Blood Crow Condensed.ttf", "interline_spacing": -20, "size": 1.2, "title_case": "source" } ], "preview": "preview.jpg" } ``` ### Preview Title Card ![preview](https://github.com/CollinHeist/TitleCardMaker-Blueprints/assets/17693271/49a16ef4-5e6a-47cc-b8da-b5627cf6b538) ### Zip of Font Files [fonts.zip](https://github.com/CollinHeist/TitleCardMaker-Blueprints/files/12707741/fonts.zip)
test
demon slayer kimetsu no yaiba series name demon slayer kimetsu no yaiba series year creator username no response blueprint description anime card with arc season titles japanese kanji and the custom series font blueprint json series font id card type anime season title ranges season title values finding my life s purpose mugen train entertainment district swordsmith village extra keys kanji vertical shift extra values template ids translations language code ja data key kanji episodes templates fonts name demon slayer delete missing true file blood crow condensed ttf interline spacing size title case source preview preview jpg preview title card zip of font files
1
65,064
12,521,653,017
IssuesEvent
2020-06-03 17:45:54
microsoft/vscode-python
https://api.github.com/repos/microsoft/vscode-python
closed
Fix Linux Debugger tests
data science type-code health
Two debugger tests seems to be failing regularly on Linux only. https://dev.azure.com/ms/vscode-python/_build/results?buildId=81365&view=ms.vss-test-web.build-test-results-tab ![image.png](https://images.zenhubusercontent.com/5bdcd40a0faa800c749bac98/fc03b37e-dcd3-4cfc-8bff-e4f7da26c22e)
1.0
Fix Linux Debugger tests - Two debugger tests seems to be failing regularly on Linux only. https://dev.azure.com/ms/vscode-python/_build/results?buildId=81365&view=ms.vss-test-web.build-test-results-tab ![image.png](https://images.zenhubusercontent.com/5bdcd40a0faa800c749bac98/fc03b37e-dcd3-4cfc-8bff-e4f7da26c22e)
non_test
fix linux debugger tests two debugger tests seems to be failing regularly on linux only
0
346,761
31,021,178,648
IssuesEvent
2023-08-10 05:31:43
tremorlabs/tremor
https://api.github.com/repos/tremorlabs/tremor
opened
[Test]: Unit tests for select elements
Status: Help Wanted Type: Test
### What problem does this feature solve? Currently, components are only tested via storybook. However, select elements are prone to bugs and should be unit tested based on user interaction. The components that should be unit tested are: * Datepicker * DateRangePicker * Select * MultiSelect * SearchSelect * TextInput * NumberInput * Button If anyone from the community is willing to take this issue, feel free to comment below. :) ### What does the proposed API look like? _No response_
1.0
[Test]: Unit tests for select elements - ### What problem does this feature solve? Currently, components are only tested via storybook. However, select elements are prone to bugs and should be unit tested based on user interaction. The components that should be unit tested are: * Datepicker * DateRangePicker * Select * MultiSelect * SearchSelect * TextInput * NumberInput * Button If anyone from the community is willing to take this issue, feel free to comment below. :) ### What does the proposed API look like? _No response_
test
unit tests for select elements what problem does this feature solve currently components are only tested via storybook however select elements are prone to bugs and should be unit tested based on user interaction the components that should be unit tested are datepicker daterangepicker select multiselect searchselect textinput numberinput button if anyone from the community is willing to take this issue feel free to comment below what does the proposed api look like no response
1
348,541
10,449,806,425
IssuesEvent
2019-09-19 09:12:08
OpenSRP/opensrp-client-chw
https://api.github.com/repos/OpenSRP/opensrp-client-chw
opened
Can the Swahili translation for "Due" on register pages
Boresha Afya low priority
- [ ] Change "Due" filter in Swahili to "Tayari" ![Screenshot_20190918-130815](https://user-images.githubusercontent.com/20777928/65230503-a5248000-dad6-11e9-8b98-1a61283572b7.png)
1.0
Can the Swahili translation for "Due" on register pages - - [ ] Change "Due" filter in Swahili to "Tayari" ![Screenshot_20190918-130815](https://user-images.githubusercontent.com/20777928/65230503-a5248000-dad6-11e9-8b98-1a61283572b7.png)
non_test
can the swahili translation for due on register pages change due filter in swahili to tayari
0
108,458
23,611,437,141
IssuesEvent
2022-08-24 12:45:03
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
closed
lsif-python: umbrella issue
team/code-intelligence team/language-tools
Progress tracker for Python indexer Decisions: (TODO) Status Tracker: - [x] Open new fork: https://github.com/sourcegraph/pyright - [x] Start using lsif-typescript snapshot testing - [x] Start emitting lsif-typed - [x] #31035 (Will add more as I write them up) Reported dev envs in https://github.com/sourcegraph/sourcegraph/issues/30179
1.0
lsif-python: umbrella issue - Progress tracker for Python indexer Decisions: (TODO) Status Tracker: - [x] Open new fork: https://github.com/sourcegraph/pyright - [x] Start using lsif-typescript snapshot testing - [x] Start emitting lsif-typed - [x] #31035 (Will add more as I write them up) Reported dev envs in https://github.com/sourcegraph/sourcegraph/issues/30179
non_test
lsif python umbrella issue progress tracker for python indexer decisions todo status tracker open new fork start using lsif typescript snapshot testing start emitting lsif typed will add more as i write them up reported dev envs in
0
242,551
20,254,031,633
IssuesEvent
2022-02-14 20:59:01
rspott/WAF-test02
https://api.github.com/repos/rspott/WAF-test02
opened
A vulnerability assessment solution should be enabled on your virtual machines for 1 Virtual machine(s)
WARP-Import test1 Security Azure Advisor
<a href="https://aka.ms/azure-advisor-portal">A vulnerability assessment solution should be enabled on your virtual machines for 1 Virtual machine(s)</a>
1.0
A vulnerability assessment solution should be enabled on your virtual machines for 1 Virtual machine(s) - <a href="https://aka.ms/azure-advisor-portal">A vulnerability assessment solution should be enabled on your virtual machines for 1 Virtual machine(s)</a>
test
a vulnerability assessment solution should be enabled on your virtual machines for virtual machine s
1
214,545
16,598,082,495
IssuesEvent
2021-06-01 15:37:54
miluskapajuelo/LIM014-burger-queen-api-client
https://api.github.com/repos/miluskapajuelo/LIM014-burger-queen-api-client
closed
Test HU1
git test
- [x] Code review de al menos una compañera. - [x] Test unitario y testeo manual - [x] Tests de usabilidad - [x] Feedback del usuario - [x] Desplegaste tu aplicación - [x] Etiquetado de la versión (git tag).
1.0
Test HU1 - - [x] Code review de al menos una compañera. - [x] Test unitario y testeo manual - [x] Tests de usabilidad - [x] Feedback del usuario - [x] Desplegaste tu aplicación - [x] Etiquetado de la versión (git tag).
test
test code review de al menos una compañera test unitario y testeo manual tests de usabilidad feedback del usuario desplegaste tu aplicación etiquetado de la versión git tag
1
184,824
14,289,966,065
IssuesEvent
2020-11-23 20:06:49
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
Oats87/docker-machine-driver-vsphere: vendor/github.com/vmware/govmomi/simulator/property_collector_test.go; 46 LoC
fresh small test
Found a possible issue in [Oats87/docker-machine-driver-vsphere](https://www.github.com/Oats87/docker-machine-driver-vsphere) at [vendor/github.com/vmware/govmomi/simulator/property_collector_test.go](https://github.com/Oats87/docker-machine-driver-vsphere/blob/2a52dac3e77b33b02eb0bf3901495f2e888956fb/vendor/github.com/vmware/govmomi/simulator/property_collector_test.go#L370-L415) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/Oats87/docker-machine-driver-vsphere/blob/2a52dac3e77b33b02eb0bf3901495f2e888956fb/vendor/github.com/vmware/govmomi/simulator/property_collector_test.go#L370-L415) <details> <summary>Click here to show the 46 line(s) of Go which triggered the analyzer.</summary> ```go for i, test := range tests { var props []string matches := false wait := make(chan bool) host := obj.Summary.Runtime.Host // add host to filter just to have a different type in the filter filter := new(property.WaitFilter).Add(*host, host.Type, nil).Add(ref, ref.Type, test.props) go func() { perr := property.WaitForUpdates(ctx, pc, filter, func(updates []types.ObjectUpdate) bool { if updates[0].Kind == types.ObjectUpdateKindEnter { wait <- true return false } for _, update := range updates { for _, change := range update.ChangeSet { props = append(props, change.Name) } } if test.props == nil { // special case to test All flag matches = isTrue(filter.Spec.PropSet[0].All) && len(props) > 1 return matches } if len(props) > len(test.props) { return true } matches = reflect.DeepEqual(props, test.props) return matches }) if perr != nil { t.Error(perr) } wait <- matches }() <-wait // wait for enter _, _ = state[obj.Runtime.PowerState](ctx) if !<-wait { // wait for modify t.Errorf("%d: updates=%s, expected=%s", i, props, test.props) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 2a52dac3e77b33b02eb0bf3901495f2e888956fb
1.0
Oats87/docker-machine-driver-vsphere: vendor/github.com/vmware/govmomi/simulator/property_collector_test.go; 46 LoC - Found a possible issue in [Oats87/docker-machine-driver-vsphere](https://www.github.com/Oats87/docker-machine-driver-vsphere) at [vendor/github.com/vmware/govmomi/simulator/property_collector_test.go](https://github.com/Oats87/docker-machine-driver-vsphere/blob/2a52dac3e77b33b02eb0bf3901495f2e888956fb/vendor/github.com/vmware/govmomi/simulator/property_collector_test.go#L370-L415) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/Oats87/docker-machine-driver-vsphere/blob/2a52dac3e77b33b02eb0bf3901495f2e888956fb/vendor/github.com/vmware/govmomi/simulator/property_collector_test.go#L370-L415) <details> <summary>Click here to show the 46 line(s) of Go which triggered the analyzer.</summary> ```go for i, test := range tests { var props []string matches := false wait := make(chan bool) host := obj.Summary.Runtime.Host // add host to filter just to have a different type in the filter filter := new(property.WaitFilter).Add(*host, host.Type, nil).Add(ref, ref.Type, test.props) go func() { perr := property.WaitForUpdates(ctx, pc, filter, func(updates []types.ObjectUpdate) bool { if updates[0].Kind == types.ObjectUpdateKindEnter { wait <- true return false } for _, update := range updates { for _, change := range update.ChangeSet { props = append(props, change.Name) } } if test.props == nil { // special case to test All flag matches = isTrue(filter.Spec.PropSet[0].All) && len(props) > 1 return matches } if len(props) > len(test.props) { return true } matches = reflect.DeepEqual(props, test.props) return matches }) if perr != nil { t.Error(perr) } wait <- matches }() <-wait // wait for enter _, _ = state[obj.Runtime.PowerState](ctx) if !<-wait { // wait for modify t.Errorf("%d: updates=%s, expected=%s", i, props, test.props) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 2a52dac3e77b33b02eb0bf3901495f2e888956fb
test
docker machine driver vsphere vendor github com vmware govmomi simulator property collector test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for i test range tests var props string matches false wait make chan bool host obj summary runtime host add host to filter just to have a different type in the filter filter new property waitfilter add host host type nil add ref ref type test props go func perr property waitforupdates ctx pc filter func updates types objectupdate bool if updates kind types objectupdatekindenter wait true return false for update range updates for change range update changeset props append props change name if test props nil special case to test all flag matches istrue filter spec propset all len props return matches if len props len test props return true matches reflect deepequal props test props return matches if perr nil t error perr wait matches wait wait for enter state ctx if wait wait for modify t errorf d updates s expected s i props test props leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
1
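The github-vet finding in the record above concerns Go goroutines capturing `range` loop variables. Python closures created inside a loop have the same late-binding pitfall, so the sketch below shows that analogue in Python rather than rewriting the Go; it is illustrative only and unrelated to the govmomi test code itself.

```python
import threading

results = []

def buggy():
    threads = [threading.Thread(target=lambda: results.append(i)) for i in range(3)]
    # Analogue of the Go finding: each closure reads `i` when it runs, not when
    # it was created, so every thread sees the final value of the loop variable.
    for t in threads:
        t.start()
        t.join()

def fixed():
    threads = [threading.Thread(target=lambda i=i: results.append(i)) for i in range(3)]
    # Fix: bind the current value per iteration (the Python counterpart of
    # re-declaring the loop variable inside the Go loop body).
    for t in threads:
        t.start()
        t.join()

if __name__ == "__main__":
    buggy()
    print(results)   # [2, 2, 2] -- every thread saw the final value
    results.clear()
    fixed()
    print(results)   # [0, 1, 2]
```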
447,551
12,889,719,660
IssuesEvent
2020-07-13 14:56:07
wso2/product-microgateway
https://api.github.com/repos/wso2/product-microgateway
opened
Version * not working in API key authentication when validateAllowedAPIs is enabled.
Affected/3.1.0 Priority/Normal Type/Bug
### Description: <!-- Describe the issue --> apikey(with allowedapi's version:*) allow all versions is not working. ### Steps to reproduce: 1. Get an api key with config similar to below. Make it allow for all versions. ``` [apikey.issuer] [apikey.issuer.tokenConfig] enabled = true issuer = "https://localhost:9095/apikey" certificateAlias = "ballerina" validityTime = -1 [[apikey.issuer.api]] name="Swagger Petstore" versions="*" ``` 2. Make validateAllowedAPIs to true ``` [apikey.tokenConfigs] issuer="https://localhost:9095/apikey" certificateAlias="ballerina" # Validate Allowed/subscribed APIs validateAllowedAPIs=true ``` 2. Invoke the Swagger Petstore with the above key. `{"fault":{"code":900901, "message":"Invalid Credentials", "description":"Invalid Credentials. Make sure you have given the correct access token"}}` ### Affected Product Version: <!-- Members can use Affected/*** labels --> 3.1.0 3.2.0 ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
1.0
Version * not working in API key authentication when validateAllowedAPIs is enabled. - ### Description: <!-- Describe the issue --> apikey(with allowedapi's version:*) allow all versions is not working. ### Steps to reproduce: 1. Get an api key with config similar to below. Make it allow for all versions. ``` [apikey.issuer] [apikey.issuer.tokenConfig] enabled = true issuer = "https://localhost:9095/apikey" certificateAlias = "ballerina" validityTime = -1 [[apikey.issuer.api]] name="Swagger Petstore" versions="*" ``` 2. Make validateAllowedAPIs to true ``` [apikey.tokenConfigs] issuer="https://localhost:9095/apikey" certificateAlias="ballerina" # Validate Allowed/subscribed APIs validateAllowedAPIs=true ``` 2. Invoke the Swagger Petstore with the above key. `{"fault":{"code":900901, "message":"Invalid Credentials", "description":"Invalid Credentials. Make sure you have given the correct access token"}}` ### Affected Product Version: <!-- Members can use Affected/*** labels --> 3.1.0 3.2.0 ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
non_test
version not working in api key authentication when validateallowedapis is enabled description apikey with allowedapi s version allow all versions is not working steps to reproduce get an api key with config similar to below make it allow for all versions enabled true issuer certificatealias ballerina validitytime name swagger petstore versions make validateallowedapis to true issuer certificatealias ballerina validate allowed subscribed apis validateallowedapis true invoke the swagger petstore with the above key fault code message invalid credentials description invalid credentials make sure you have given the correct access token affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
0
33,718
16,088,236,428
IssuesEvent
2021-04-26 13:51:20
qbittorrent/qBittorrent
https://api.github.com/repos/qbittorrent/qBittorrent
closed
4.3.4 Using huge amounts of CPU
Performance
**Please provide the following information** ### qBittorrent version and Operating System linuxserver/qbittorrent:14.3.4.99202104031018-7348-2b6baa609ubuntu20.04.1-ls125 docker container Host is debian stable, fully updated ### If on linux, libtorrent-rasterbar and Qt version Qt 5.12.8 Libtorrent 1.2.13.0 Boost: 1.71.0 OpenSSL 1.1.1f zlib: 1.2.11 ### What is the problem It uses 2 cores @ about 40% cpu. I have a 8c16t processor and when the daemon is fairly empty I have high cpu usage. 1200 torrents, 1 active, sub 1MBps upload of traffic. Host has 1G uplink ### What is the expected behavior 1-5% cpu ### Steps to reproduce install the container load up 1200 dormant torrents (with maybe 1-2 active) low traffic sub 1MBps See your cpu go up ### Extra info(if any) 4.3.2 had the same issue, I updated to 4.3.4 to fix the issue AIO threads 1,4,10(default) doesnt change usage
True
4.3.4 Using huge amounts of CPU - **Please provide the following information** ### qBittorrent version and Operating System linuxserver/qbittorrent:14.3.4.99202104031018-7348-2b6baa609ubuntu20.04.1-ls125 docker container Host is debian stable, fully updated ### If on linux, libtorrent-rasterbar and Qt version Qt 5.12.8 Libtorrent 1.2.13.0 Boost: 1.71.0 OpenSSL 1.1.1f zlib: 1.2.11 ### What is the problem It uses 2 cores @ about 40% cpu. I have a 8c16t processor and when the daemon is fairly empty I have high cpu usage. 1200 torrents, 1 active, sub 1MBps upload of traffic. Host has 1G uplink ### What is the expected behavior 1-5% cpu ### Steps to reproduce install the container load up 1200 dormant torrents (with maybe 1-2 active) low traffic sub 1MBps See your cpu go up ### Extra info(if any) 4.3.2 had the same issue, I updated to 4.3.4 to fix the issue AIO threads 1,4,10(default) doesnt change usage
non_test
using huge amounts of cpu please provide the following information qbittorrent version and operating system linuxserver qbittorrent docker container host is debian stable fully updated if on linux libtorrent rasterbar and qt version qt libtorrent boost openssl zlib what is the problem it uses cores about cpu i have a processor and when the daemon is fairly empty i have high cpu usage torrents active sub upload of traffic host has uplink what is the expected behavior cpu steps to reproduce install the container load up dormant torrents with maybe active low traffic sub see your cpu go up extra info if any had the same issue i updated to to fix the issue aio threads default doesnt change usage
0
808,762
30,109,830,063
IssuesEvent
2023-06-30 06:42:23
napari/napari
https://api.github.com/repos/napari/napari
closed
No option to select a given contribution from a plugin in 0.4.18rc2
bug priority-high
## 🐛 Bug In napari 0.4.18rc2 there is no option to select a specific contribution from a plugin. In our case, this is needed to load a directory one of two ways (in two coordinate systems). This was posted on zulip, and I'm raising an issue upon @Czaki's request. ## To Reproduce In napari <= 0.4.17, I can open data using a specific reader from [brainglobe-napari-io](https://github.com/brainglobe/brainglobe-napari-io): ```python viewer.open(file, plugin="brainglobe-napari-io.brainreg_read_dir_standard_space") ``` In napari 0.4.18rc2 this produces the error: ```bash ValueError: There is no registered plugin named 'brainglobe-napari-io.brainreg_read_dir_standard_space'. Names of plugins offering readers are: set() ``` ## Expected behavior For the data to be loaded with a specific reader as with previous versions. ## Environment <details> <summary>napari info</summary> napari: 0.4.18rc2 Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.31 System: Ubuntu 20.04.6 LTS Python: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0] Qt: 5.15.2 PyQt5: 5.15.9 NumPy: 1.24.3 SciPy: 1.10.1 Dask: 2023.5.1 VisPy: 0.12.2 magicgui: 0.7.2 superqt: unknown in-n-out: 0.1.7 app-model: 0.1.4 npe2: 0.7.0 OpenGL: - GL version: 4.6.0 NVIDIA 515.105.01 - MAX_TEXTURE_SIZE: 32768 Screens: - screen 1: resolution 3440x1440, scale 1.0 Settings path: - /home/adam/.config/napari/brainglobe_eaa3f9e4140a9304dcb5c8fb309cc49886dc0b64/settings.yaml </details> ## Additional context A workaround would be much appreciated, otherwise I think we'd need to maintain multiple plugins?
1.0
No option to select a given contribution from a plugin in 0.4.18rc2 - ## 🐛 Bug In napari 0.4.18rc2 there is no option to select a specific contribution from a plugin. In our case, this is needed to load a directory one of two ways (in two coordinate systems). This was posted on zulip, and I'm raising an issue upon @Czaki's request. ## To Reproduce In napari <= 0.4.17, I can open data using a specific reader from [brainglobe-napari-io](https://github.com/brainglobe/brainglobe-napari-io): ```python viewer.open(file, plugin="brainglobe-napari-io.brainreg_read_dir_standard_space") ``` In napari 0.4.18rc2 this produces the error: ```bash ValueError: There is no registered plugin named 'brainglobe-napari-io.brainreg_read_dir_standard_space'. Names of plugins offering readers are: set() ``` ## Expected behavior For the data to be loaded with a specific reader as with previous versions. ## Environment <details> <summary>napari info</summary> napari: 0.4.18rc2 Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.31 System: Ubuntu 20.04.6 LTS Python: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0] Qt: 5.15.2 PyQt5: 5.15.9 NumPy: 1.24.3 SciPy: 1.10.1 Dask: 2023.5.1 VisPy: 0.12.2 magicgui: 0.7.2 superqt: unknown in-n-out: 0.1.7 app-model: 0.1.4 npe2: 0.7.0 OpenGL: - GL version: 4.6.0 NVIDIA 515.105.01 - MAX_TEXTURE_SIZE: 32768 Screens: - screen 1: resolution 3440x1440, scale 1.0 Settings path: - /home/adam/.config/napari/brainglobe_eaa3f9e4140a9304dcb5c8fb309cc49886dc0b64/settings.yaml </details> ## Additional context A workaround would be much appreciated, otherwise I think we'd need to maintain multiple plugins?
non_test
no option to select a given contribution from a plugin in 🐛 bug in napari there is no option to select a specific contribution from a plugin in our case this is needed to load a directory one of two ways in two coordinate systems this was posted on zulip and i m raising an issue upon czaki s request to reproduce in napari i can open data using a specific reader from python viewer open file plugin brainglobe napari io brainreg read dir standard space in napari this produces the error bash valueerror there is no registered plugin named brainglobe napari io brainreg read dir standard space names of plugins offering readers are set expected behavior for the data to be loaded with a specific reader as with previous versions environment napari info napari platform linux generic with system ubuntu lts python packaged by conda forge main may qt numpy scipy dask vispy magicgui superqt unknown in n out app model opengl gl version nvidia max texture size screens screen resolution scale settings path home adam config napari brainglobe settings yaml additional context a workaround would be much appreciated otherwise i think we d need to maintain multiple plugins
0
107,239
9,204,914,850
IssuesEvent
2019-03-08 09:04:46
OpenTechFund/opentech.fund
https://api.github.com/repos/OpenTechFund/opentech.fund
closed
Batch action: status
needs tests
Batch actions will allow OTF staff to apply the same action to multiple submissions at the same time via the listing table. **Acceptance criteria** - [x] When at least one submission row is selected using the checkbox, a 'Update status' action appears - [x] Batch Status changes can only be applied to submissions in statuses that allow the same actions. The button will be disabled otherwise. - [x] Clicking on an action will show similar interface as used for the current Update status functionality on the individual submission page but with extra context about the submissions selected (number of submissions and expand link to see submission titles) to clarify that this is a batch action. Design: https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/342781107 - [x] On batch action submit, page refreshes, batch selections are unselected and message displayed to user confirming action (design of message as per https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/340971429 but no undo!) - [x] If error when batch updating display message to user (same design as per action confirmation at top of screen) - [x] Batch actions available on all submission tables on All Submissions page, Submissions by Round page and Submission by Status page (not submission summary blocks on dashboard or submission overview page) - [x] Users will have the same permissions to perform changes as they do currently - [x] Email notifications are unaffected: Applicants will each receive emails as if an action had been taken on their submission only - [x] Activity feed entries are unaffected: Submissions will have an activity added, it will not be tracked as a batch action - [x] Slack notifications updated: only one notification is sent to handle batch message e.g. `“<user> has done <action> to: <submission1><submission2><submission3>”` Design (batch actions): https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/340971996 Design (Change status modal): https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/342781107 **QA criteria** Dev: - [x] checked feature meets acceptance criteria/conforms exactly to the specification. - [x] provided good unit test coverage (if this is non-trivial behaviour). - [x] checked all tests for the project pass with this feature enabled. - [x] checked code conforms to the project coding standards. - [x] had code reviewed by another developer and resolved any issues raised. - [x] tested this feature as an end user of the website/app (Can I get to it? Is it useable? Can I break it? Does it work in an end-to-end context?) - [x] checked that this feature works on the server/s I am deploying it to. QA: * [x] tested this feature as a front end user and it meets the acceptance criteria/conforms to the specification and design. * [x] checked that the feature works on the server/s deployed to
1.0
Batch action: status - Batch actions will allow OTF staff to apply the same action to multiple submissions at the same time via the listing table. **Acceptance criteria** - [x] When at least one submission row is selected using the checkbox, a 'Update status' action appears - [x] Batch Status changes can only be applied to submissions in statuses that allow the same actions. The button will be disabled otherwise. - [x] Clicking on an action will show similar interface as used for the current Update status functionality on the individual submission page but with extra context about the submissions selected (number of submissions and expand link to see submission titles) to clarify that this is a batch action. Design: https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/342781107 - [x] On batch action submit, page refreshes, batch selections are unselected and message displayed to user confirming action (design of message as per https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/340971429 but no undo!) - [x] If error when batch updating display message to user (same design as per action confirmation at top of screen) - [x] Batch actions available on all submission tables on All Submissions page, Submissions by Round page and Submission by Status page (not submission summary blocks on dashboard or submission overview page) - [x] Users will have the same permissions to perform changes as they do currently - [x] Email notifications are unaffected: Applicants will each receive emails as if an action had been taken on their submission only - [x] Activity feed entries are unaffected: Submissions will have an activity added, it will not be tracked as a batch action - [x] Slack notifications updated: only one notification is sent to handle batch message e.g. `“<user> has done <action> to: <submission1><submission2><submission3>”` Design (batch actions): https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/340971996 Design (Change status modal): https://projects.invisionapp.com/share/KZPIO1NENQJ#/screens/342781107 **QA criteria** Dev: - [x] checked feature meets acceptance criteria/conforms exactly to the specification. - [x] provided good unit test coverage (if this is non-trivial behaviour). - [x] checked all tests for the project pass with this feature enabled. - [x] checked code conforms to the project coding standards. - [x] had code reviewed by another developer and resolved any issues raised. - [x] tested this feature as an end user of the website/app (Can I get to it? Is it useable? Can I break it? Does it work in an end-to-end context?) - [x] checked that this feature works on the server/s I am deploying it to. QA: * [x] tested this feature as a front end user and it meets the acceptance criteria/conforms to the specification and design. * [x] checked that the feature works on the server/s deployed to
test
batch action status batch actions will allow otf staff to apply the same action to multiple submissions at the same time via the listing table acceptance criteria when at least one submission row is selected using the checkbox a update status action appears batch status changes can only be applied to submissions in statuses that allow the same actions the button will be disabled otherwise clicking on an action will show similar interface as used for the current update status functionality on the individual submission page but with extra context about the submissions selected number of submissions and expand link to see submission titles to clarify that this is a batch action design on batch action submit page refreshes batch selections are unselected and message displayed to user confirming action design of message as per but no undo if error when batch updating display message to user same design as per action confirmation at top of screen batch actions available on all submission tables on all submissions page submissions by round page and submission by status page not submission summary blocks on dashboard or submission overview page users will have the same permissions to perform changes as they do currently email notifications are unaffected applicants will each receive emails as if an action had been taken on their submission only activity feed entries are unaffected submissions will have an activity added it will not be tracked as a batch action slack notifications updated only one notification is sent to handle batch message e g “ has done to ” design batch actions design change status modal qa criteria dev checked feature meets acceptance criteria conforms exactly to the specification provided good unit test coverage if this is non trivial behaviour checked all tests for the project pass with this feature enabled checked code conforms to the project coding standards had code reviewed by another developer and resolved any issues raised tested this feature as an end user of the website app can i get to it is it useable can i break it does it work in an end to end context checked that this feature works on the server s i am deploying it to qa tested this feature as a front end user and it meets the acceptance criteria conforms to the specification and design checked that the feature works on the server s deployed to
1
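One acceptance criterion in the record above is that a batch status change sends a single Slack notification of the form "<user> has done <action> to: <submission1><submission2><submission3>". The snippet below sketches only that message-building step with made-up data structures; it is not the opentech.fund codebase, and the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    status: str

def apply_batch_status(user: str, new_status: str, submissions: list[Submission]) -> str:
    """Apply one status to every selected submission and build the single
    combined notification described in the acceptance criteria."""
    for submission in submissions:
        submission.status = new_status  # one action applied to many submissions
    titles = "".join(f"<{s.title}>" for s in submissions)
    return f"<{user}> has done <set status: {new_status}> to: {titles}"

if __name__ == "__main__":
    batch = [Submission("Grant A", "in_review"), Submission("Grant B", "in_review")]
    print(apply_batch_status("staff-user", "accepted", batch))
```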
32,884
4,792,665,435
IssuesEvent
2016-10-31 16:05:50
TheScienceMuseum/collectionsonline
https://api.github.com/repos/TheScienceMuseum/collectionsonline
closed
Issue with sticky pill boxes
bug please-test priority-2
1. Select a category on an object page 2. Results page neither keeps category selected or adds the ‘sticky’ pill
1.0
Issue with sticky pill boxes - 1. Select a category on an object page 2. Results page neither keeps category selected or adds the ‘sticky’ pill
test
issue with sticky pill boxes select a category on an object page results page neither keeps category selected or adds the ‘sticky’ pill
1
30,152
7,166,443,662
IssuesEvent
2018-01-29 17:14:10
zurb/foundation-sites
https://api.github.com/repos/zurb/foundation-sites
closed
Visual Studio Template
Revisit for F7 codebase help wanted
We're looking to make Foundation more friendly for our .NET/Visual Studio audience, so there's two things we want to do there: - Publish a NuGet package #8302 - Create a Visual Studio template for Foundation Basically, like those Bootstrap templates that already ship with Visual Studio, but with Foundation ;) We can use the basic [index.html](https://github.com/zurb/foundation-sites-template/blob/master/index.html) we ship with our starter projects.
1.0
Visual Studio Template - We're looking to make Foundation more friendly for our .NET/Visual Studio audience, so there's two things we want to do there: - Publish a NuGet package #8302 - Create a Visual Studio template for Foundation Basically, like those Bootstrap templates that already ship with Visual Studio, but with Foundation ;) We can use the basic [index.html](https://github.com/zurb/foundation-sites-template/blob/master/index.html) we ship with our starter projects.
non_test
visual studio template we re looking to make foundation more friendly for our net visual studio audience so there s two things we want to do there publish a nuget package create a visual studio template for foundation basically like those bootstrap templates that already ship with visual studio but with foundation we can use the basic we ship with our starter projects
0
194,297
15,418,876,980
IssuesEvent
2021-03-05 09:24:21
ait-aecid/logdata-anomaly-miner
https://api.github.com/repos/ait-aecid/logdata-anomaly-miner
opened
Adjust Detector-HowTo for code refactoring from development branch
documentation
The HowTo currently only reflects the main branch, not the development branch. Important changes so far: - __init__.py in analysis does not exist anymore, it is not necessary to add new detectors there. - The YAMLSchema file does not exist anymore. Instead, the parameter seq_len needs to be added to /home/ubuntu/aminer/source/root/usr/lib/logdata-anomaly-miner/aminer/schemas/normalisation/AnalysisNormalisationSchema.py - In addition, all parameters of the detector have to be added in /home/ubuntu/aminer/source/root/usr/lib/logdata-anomaly-miner/aminer/schemas/validation/AnalysisValidationSchema.py. This should look as follows (not verified): { 'id': {'type': 'string', 'nullable': True}, 'type': {'type': 'string', 'allowed': ['EventSequenceDetector'], 'required': True}, 'paths': {'type': 'list', 'schema': {'type': 'string'}, 'nullable': True}, 'seq_len': {'type': 'integer', 'min': 1}, 'persistence_id': {'type': 'string'}, 'learn_mode': {'type': 'boolean'}, 'output_logline': {'type': 'boolean'}, 'output_event_handlers': {'type': 'list', 'nullable': True}, },
1.0
Adjust Detector-HowTo for code refactoring from development branch - The HowTo currently only reflects the main branch, not the development branch. Important changes so far: - __init__.py in analysis does not exist anymore, it is not necessary to add new detectors there. - The YAMLSchema file does not exist anymore. Instead, the parameter seq_len needs to be added to /home/ubuntu/aminer/source/root/usr/lib/logdata-anomaly-miner/aminer/schemas/normalisation/AnalysisNormalisationSchema.py - In addition, all parameters of the detector have to be added in /home/ubuntu/aminer/source/root/usr/lib/logdata-anomaly-miner/aminer/schemas/validation/AnalysisValidationSchema.py. This should look as follows (not verified): { 'id': {'type': 'string', 'nullable': True}, 'type': {'type': 'string', 'allowed': ['EventSequenceDetector'], 'required': True}, 'paths': {'type': 'list', 'schema': {'type': 'string'}, 'nullable': True}, 'seq_len': {'type': 'integer', 'min': 1}, 'persistence_id': {'type': 'string'}, 'learn_mode': {'type': 'boolean'}, 'output_logline': {'type': 'boolean'}, 'output_event_handlers': {'type': 'list', 'nullable': True}, },
non_test
adjust detector howto for code refactoring from development branch the howto currently only reflects the main branch not the development branch important changes so far init py in analysis does not exist anymore it is not necessary to add new detectors there the yamlschema file does not exist anymore instead the parameter seq len needs to be added to home ubuntu aminer source root usr lib logdata anomaly miner aminer schemas normalisation analysisnormalisationschema py in addition all parameters of the detector have to be added in home ubuntu aminer source root usr lib logdata anomaly miner aminer schemas validation analysisvalidationschema py this should look as follows not verified id type string nullable true type type string allowed required true paths type list schema type string nullable true seq len type integer min persistence id type string learn mode type boolean output logline type boolean output event handlers type list nullable true
0
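The record above lists the validation entry an EventSequenceDetector would need, including a `seq_len` rule of type integer with min 1. As a purely illustrative check of how such a schema fragment constrains a configuration value, here is a tiny hand-rolled validator for just that rule; the real aminer schemas are applied by the project's own validation machinery, not by this sketch.

```python
# Rule taken from the record above: seq_len must be an integer >= 1.
SEQ_LEN_RULE = {"type": "integer", "min": 1}

def validate_seq_len(value) -> list[str]:
    """Return human-readable problems with `value`; an empty list means valid."""
    problems = []
    if not isinstance(value, int) or isinstance(value, bool):
        problems.append(f"seq_len must be an integer, got {type(value).__name__}")
    elif value < SEQ_LEN_RULE["min"]:
        problems.append(f"seq_len must be >= {SEQ_LEN_RULE['min']}, got {value}")
    return problems

if __name__ == "__main__":
    print(validate_seq_len(3))    # [] -> valid
    print(validate_seq_len(0))    # min violated
    print(validate_seq_len("3"))  # wrong type
```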
345,066
30,784,135,500
IssuesEvent
2023-07-31 12:09:52
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry·ts - alerting api integration security and spaces enabled - Group 2 Alerting and Actions Telemetry telemetry should retrieve telemetry data in the expected format
blocker failed-test skipped-test Team:ResponseOps v8.10.0
A test failed on a tracked branch ``` Error: expected false to equal true at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11) at Assertion.be.Assertion.equal (node_modules/@kbn/expect/expect.js:227:8) at Assertion.be (node_modules/@kbn/expect/expect.js:69:22) at verifyActionsTelemetry (x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry.ts:239:59) at Context.<anonymous> (x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry.ts:590:7) at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/20991#01835658-b5c3-4f17-b032-320a9c77286d) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry·ts","test.name":"alerting api integration security and spaces enabled - Group 2 Alerting and Actions Telemetry telemetry should retrieve telemetry data in the expected format","test.failCount":34}} -->
2.0
Failing test: X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry·ts - alerting api integration security and spaces enabled - Group 2 Alerting and Actions Telemetry telemetry should retrieve telemetry data in the expected format - A test failed on a tracked branch ``` Error: expected false to equal true at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11) at Assertion.be.Assertion.equal (node_modules/@kbn/expect/expect.js:227:8) at Assertion.be (node_modules/@kbn/expect/expect.js:69:22) at verifyActionsTelemetry (x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry.ts:239:59) at Context.<anonymous> (x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry.ts:590:7) at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/20991#01835658-b5c3-4f17-b032-320a9c77286d) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/security_and_spaces/group2/tests/telemetry/alerting_and_actions_telemetry·ts","test.name":"alerting api integration security and spaces enabled - Group 2 Alerting and Actions Telemetry telemetry should retrieve telemetry data in the expected format","test.failCount":34}} -->
test
failing test x pack alerting api integration tests x pack test alerting api integration security and spaces tests telemetry alerting and actions telemetry·ts alerting api integration security and spaces enabled group alerting and actions telemetry telemetry should retrieve telemetry data in the expected format a test failed on a tracked branch error expected false to equal true at assertion assert node modules kbn expect expect js at assertion be assertion equal node modules kbn expect expect js at assertion be node modules kbn expect expect js at verifyactionstelemetry x pack test alerting api integration security and spaces tests telemetry alerting and actions telemetry ts at context x pack test alerting api integration security and spaces tests telemetry alerting and actions telemetry ts at runmicrotasks at processticksandrejections node internal process task queues at object apply node modules kbn test target node src functional test runner lib mocha wrap function js first failure
1
294,816
25,407,948,777
IssuesEvent
2022-11-22 16:36:04
bkd-mba-fbi/webapp-schulverwaltung
https://api.github.com/repos/bkd-mba-fbi/webapp-schulverwaltung
closed
Layout issues: Tests
module-Tests
**Tests** * Dropdowns haben keinen Pfeil mehr nach unten * Kopfbereich mit Fachname: Pfeil zurück zu sehr links, Fach und Anzahl Anmeldungen nicht aligniert ![image](https://user-images.githubusercontent.com/49237948/199266502-f78ad7fb-dd93-46cb-b0d1-7c49c8f477e7.png) **Neuen Test erfassen** * Abstand vom Feld Bezeichnung zum Label Datum zu klein * Faktor zu sehr links (nicht aligniert) * Abstand vom Label Bezeichnung nach oben dürfte kleiner sein (war aber immer schon zu gross) ![image](https://user-images.githubusercontent.com/49237948/199265365-41ae37dd-715d-4c16-896d-c59e58fe60fe.png)
1.0
Layout issues: Tests - **Tests** * Dropdowns haben keinen Pfeil mehr nach unten * Kopfbereich mit Fachname: Pfeil zurück zu sehr links, Fach und Anzahl Anmeldungen nicht aligniert ![image](https://user-images.githubusercontent.com/49237948/199266502-f78ad7fb-dd93-46cb-b0d1-7c49c8f477e7.png) **Neuen Test erfassen** * Abstand vom Feld Bezeichnung zum Label Datum zu klein * Faktor zu sehr links (nicht aligniert) * Abstand vom Label Bezeichnung nach oben dürfte kleiner sein (war aber immer schon zu gross) ![image](https://user-images.githubusercontent.com/49237948/199265365-41ae37dd-715d-4c16-896d-c59e58fe60fe.png)
test
layout issues tests tests dropdowns haben keinen pfeil mehr nach unten kopfbereich mit fachname pfeil zurück zu sehr links fach und anzahl anmeldungen nicht aligniert neuen test erfassen abstand vom feld bezeichnung zum label datum zu klein faktor zu sehr links nicht aligniert abstand vom label bezeichnung nach oben dürfte kleiner sein war aber immer schon zu gross
1
338,435
30,297,808,943
IssuesEvent
2023-07-10 01:36:12
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix tensor.test_torch_instance_unsqueeze
PyTorch Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix tensor.test_torch_instance_unsqueeze - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5502830611/jobs/10027435093"><img src=https://img.shields.io/badge/-success-success></a>
test
fix tensor test torch instance unsqueeze tensorflow a href src jax a href src numpy a href src torch a href src paddle a href src
1
159,386
12,474,842,721
IssuesEvent
2020-05-29 10:24:03
aliasrobotics/RVD
https://api.github.com/repos/aliasrobotics/RVD
closed
RVD#2008: subprocess call with shell=True identified, security issue., /opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148
bandit bug static analysis testing triage
```yaml { "id": 2008, "title": "RVD#2008: subprocess call with shell=True identified, security issue., /opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148", "type": "bug", "description": "HIGH confidence of HIGH severity bug. subprocess call with shell=True identified, security issue. at /opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148 See links for more info on the bug.", "cwe": "None", "cve": "None", "keywords": [ "bandit", "bug", "static analysis", "testing", "triage", "bug" ], "system": "", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": [ "https://github.com/aliasrobotics/RVD/issues/2008", "https://bandit.readthedocs.io/en/latest/plugins/b602_subprocess_popen_with_shell_equals_true.html" ], "flaw": { "phase": "testing", "specificity": "subject-specific", "architectural-location": "application-specific", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-29 (09:10)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-29 (09:10)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "https://github.com/aliasrobotics/RVD/issues/2008", "reproducibility": "always", "trace": "/opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
1.0
RVD#2008: subprocess call with shell=True identified, security issue., /opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148 - ```yaml { "id": 2008, "title": "RVD#2008: subprocess call with shell=True identified, security issue., /opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148", "type": "bug", "description": "HIGH confidence of HIGH severity bug. subprocess call with shell=True identified, security issue. at /opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148 See links for more info on the bug.", "cwe": "None", "cve": "None", "keywords": [ "bandit", "bug", "static analysis", "testing", "triage", "bug" ], "system": "", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": [ "https://github.com/aliasrobotics/RVD/issues/2008", "https://bandit.readthedocs.io/en/latest/plugins/b602_subprocess_popen_with_shell_equals_true.html" ], "flaw": { "phase": "testing", "specificity": "subject-specific", "architectural-location": "application-specific", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-29 (09:10)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-29 (09:10)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "https://github.com/aliasrobotics/RVD/issues/2008", "reproducibility": "always", "trace": "/opt/ros_noetic_ws/src/catkin/test/unit_tests/test_catkin_make_isolated.py:148", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
test
rvd subprocess call with shell true identified security issue opt ros noetic ws src catkin test unit tests test catkin make isolated py yaml id title rvd subprocess call with shell true identified security issue opt ros noetic ws src catkin test unit tests test catkin make isolated py type bug description high confidence of high severity bug subprocess call with shell true identified security issue at opt ros noetic ws src catkin test unit tests test catkin make isolated py see links for more info on the bug cwe none cve none keywords bandit bug static analysis testing triage bug system vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace opt ros noetic ws src catkin test unit tests test catkin make isolated py reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
1
265,881
23,207,028,814
IssuesEvent
2022-08-02 06:42:40
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
closed
The house is gone after the update
Bug Not Tested
**Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See an error **Expected behavior** There were three-family houses, among which the middle house disappeared after the update. The loot was visible inside the house, but the house itself was gone and the tent could not be rebuilt. **Screenshots** If applicable, add screenshots to help explain your problem. ![스크린샷(4)](https://user-images.githubusercontent.com/110277198/181877370-b6a61bae-b183-4fd2-a5b9-01059a156596.png) ![스크린샷(5)](https://user-images.githubusercontent.com/110277198/181877378-6d29701e-0b35-4348-8889-8a69afbaa27f.png) **Desktop (please complete the following information):** - OS: [e.g. Windows] - Game Version [e.g. 1.0] - Steam Version [e.g. 1.0] **Additional context** Add any other context about the problem here.
1.0
The house is gone after the update - **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See an error **Expected behavior** There were three-family houses, among which the middle house disappeared after the update. The loot was visible inside the house, but the house itself was gone and the tent could not be rebuilt. **Screenshots** If applicable, add screenshots to help explain your problem. ![스크린샷(4)](https://user-images.githubusercontent.com/110277198/181877370-b6a61bae-b183-4fd2-a5b9-01059a156596.png) ![스크린샷(5)](https://user-images.githubusercontent.com/110277198/181877378-6d29701e-0b35-4348-8889-8a69afbaa27f.png) **Desktop (please complete the following information):** - OS: [e.g. Windows] - Game Version [e.g. 1.0] - Steam Version [e.g. 1.0] **Additional context** Add any other context about the problem here.
test
the house is gone after the update describe the bug a clear and concise description of what the bug is to reproduce steps to reproduce the behavior go to click on scroll down to see an error expected behavior there were three family houses among which the middle house disappeared after the update the loot was visible inside the house but the house itself was gone and the tent could not be rebuilt screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os game version steam version additional context add any other context about the problem here
1
56,356
8,068,366,186
IssuesEvent
2018-08-05 19:34:40
riking/joycon
https://api.github.com/repos/riking/joycon
reopened
when i type in the first command it says unable to locate package go
documentation
when i type in sudo apt install libudev-dev go git it says unable to locate package go
1.0
when i type in the first command it says unable to locate package go - when i type in sudo apt install libudev-dev go git it says unable to locate package go
non_test
when i type in the first command it says unable to locate package go when i type in sudo apt install libudev dev go git it says unable to locate package go
0
96,476
8,615,976,997
IssuesEvent
2018-11-19 22:17:46
mozilla-mobile/android-components
https://api.github.com/repos/mozilla-mobile/android-components
opened
Flaky test `debounce 3 and 4 of 4 inputs` on support ktx
✅ testing
``` > Task :support-ktx:testDebugUnitTest mozilla.components.support.ktx.kotlin.CoroutineScopeTest > debounce 1 and 4 of 4 inputs FAILED java.lang.AssertionError 35 tests completed, 1 failed > Task :support-ktx:testDebugUnitTest FAILED FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':support-ktx:testDebugUnitTest'. > There were failing tests. See the report at: file:///build/android-components/components/support/ktx/build/reports/tests/testDebugUnitTest/index.html * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 5m 54s 151 actionable tasks: 149 executed, 2 up-to-date [taskcluster 2018-11-19 22:08:29.029Z] === Task Finished === [taskcluster 2018-11-19 22:08:29.029Z] Unsuccessful task run with exit code: 1 completed in 361.873 seconds ```
1.0
Flaky test `debounce 3 and 4 of 4 inputs` on support ktx - ``` > Task :support-ktx:testDebugUnitTest mozilla.components.support.ktx.kotlin.CoroutineScopeTest > debounce 1 and 4 of 4 inputs FAILED java.lang.AssertionError 35 tests completed, 1 failed > Task :support-ktx:testDebugUnitTest FAILED FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':support-ktx:testDebugUnitTest'. > There were failing tests. See the report at: file:///build/android-components/components/support/ktx/build/reports/tests/testDebugUnitTest/index.html * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 5m 54s 151 actionable tasks: 149 executed, 2 up-to-date [taskcluster 2018-11-19 22:08:29.029Z] === Task Finished === [taskcluster 2018-11-19 22:08:29.029Z] Unsuccessful task run with exit code: 1 completed in 361.873 seconds ```
test
flaky test debounce and of inputs on support ktx task support ktx testdebugunittest mozilla components support ktx kotlin coroutinescopetest debounce and of inputs failed java lang assertionerror tests completed failed task support ktx testdebugunittest failed failure build failed with an exception what went wrong execution failed for task support ktx testdebugunittest there were failing tests see the report at file build android components components support ktx build reports tests testdebugunittest index html try run with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights get more help at build failed in actionable tasks executed up to date task finished unsuccessful task run with exit code completed in seconds
1
420,875
28,302,499,963
IssuesEvent
2023-04-10 07:41:09
Arquisoft/lomap_es3c
https://api.github.com/repos/Arquisoft/lomap_es3c
closed
Trabajo realizado por Alex para la tercera entrega
documentation
A continuación se muestra el conjunto de tareas que he realizado durante la segunda entrega de la asignatura: - Actualización de la parte de la documentación solicitada. - [Reformas en la página de Inicio (login/registro, nueva info en el carrusel, nuevos menús, cambio ligero de disposición).](https://github.com/Arquisoft/lomap_es3c/issues/54) - [Reformas en la página de Home (lista de mapas y amigos, nuevos menús, cambios menú añadir puntos, formularios y diálogos emergentes de menús).](https://github.com/Arquisoft/lomap_es3c/issues/55) - [Formularios para crear nuevos mapas y amigos, con la interfaz gráfica amigable.](https://github.com/Arquisoft/lomap_es3c/issues/58) - [Compartimentación y refactorización de código para definir las APIs que trabajen con información de los PODs.](https://github.com/Arquisoft/lomap_es3c/issues/59) - [Creación de la Base de Datos de MongoDB.](https://github.com/Arquisoft/lomap_es3c/issues/49) - [Investigación sobre el funcionamiento de MongoDB y su integración al proyecto (Conexión entre WebApp y RestAPI).](https://github.com/Arquisoft/lomap_es3c/issues/57) - [Preparación de métodos que crean la conexión con la base de datos e importación de elementos necesarios.](https://github.com/Arquisoft/lomap_es3c/issues/62) - [Pruebas de Conexión con la BBDD + Pruebas de Operaciones CRUD.](https://github.com/Arquisoft/lomap_es3c/issues/62) - [Elaboración del esquema y planificación de información a guardar en la BBDD.](https://github.com/Arquisoft/lomap_es3c/issues/48) - [Investigación GeoJson.](https://github.com/Arquisoft/lomap_es3c/issues/35) - [Diálogos emergentes con interfaz amigable y funcionalidad (con los forms y desplegables) para nuevos amigos y mapas de amigos.](https://github.com/Arquisoft/lomap_es3c/issues/60) - [Función de almacenamiento de usuarios en la BBDD (con las comprobaciones necesarias).](https://github.com/Arquisoft/lomap_es3c/issues/50) - [Función de enviar solicitudes de amistad en la BBDD (con comprobaciones de amistad, existencia de solicitudes y existencia de usuarios).](https://github.com/Arquisoft/lomap_es3c/issues/61) - [Función de aceptar o rechazar solicitudes de amistad (parte de la base de datos, sobre las solicitudes).](https://github.com/Arquisoft/lomap_es3c/issues/66) - [Mejora de Interfaz de los filtros y del listado de amigos recuperado de los PODs.](https://github.com/Arquisoft/lomap_es3c/issues/73) - [Función del apartado de "Mi cuenta" con la interfaz necesaria.](https://github.com/Arquisoft/lomap_es3c/issues/69) - [Función del desactivado de cuentas.](https://github.com/Arquisoft/lomap_es3c/issues/70)
1.0
Trabajo realizado por Alex para la tercera entrega - A continuación se muestra el conjunto de tareas que he realizado durante la segunda entrega de la asignatura: - Actualización de la parte de la documentación solicitada. - [Reformas en la página de Inicio (login/registro, nueva info en el carrusel, nuevos menús, cambio ligero de disposición).](https://github.com/Arquisoft/lomap_es3c/issues/54) - [Reformas en la página de Home (lista de mapas y amigos, nuevos menús, cambios menú añadir puntos, formularios y diálogos emergentes de menús).](https://github.com/Arquisoft/lomap_es3c/issues/55) - [Formularios para crear nuevos mapas y amigos, con la interfaz gráfica amigable.](https://github.com/Arquisoft/lomap_es3c/issues/58) - [Compartimentación y refactorización de código para definir las APIs que trabajen con información de los PODs.](https://github.com/Arquisoft/lomap_es3c/issues/59) - [Creación de la Base de Datos de MongoDB.](https://github.com/Arquisoft/lomap_es3c/issues/49) - [Investigación sobre el funcionamiento de MongoDB y su integración al proyecto (Conexión entre WebApp y RestAPI).](https://github.com/Arquisoft/lomap_es3c/issues/57) - [Preparación de métodos que crean la conexión con la base de datos e importación de elementos necesarios.](https://github.com/Arquisoft/lomap_es3c/issues/62) - [Pruebas de Conexión con la BBDD + Pruebas de Operaciones CRUD.](https://github.com/Arquisoft/lomap_es3c/issues/62) - [Elaboración del esquema y planificación de información a guardar en la BBDD.](https://github.com/Arquisoft/lomap_es3c/issues/48) - [Investigación GeoJson.](https://github.com/Arquisoft/lomap_es3c/issues/35) - [Diálogos emergentes con interfaz amigable y funcionalidad (con los forms y desplegables) para nuevos amigos y mapas de amigos.](https://github.com/Arquisoft/lomap_es3c/issues/60) - [Función de almacenamiento de usuarios en la BBDD (con las comprobaciones necesarias).](https://github.com/Arquisoft/lomap_es3c/issues/50) - [Función de enviar solicitudes de amistad en la BBDD (con comprobaciones de amistad, existencia de solicitudes y existencia de usuarios).](https://github.com/Arquisoft/lomap_es3c/issues/61) - [Función de aceptar o rechazar solicitudes de amistad (parte de la base de datos, sobre las solicitudes).](https://github.com/Arquisoft/lomap_es3c/issues/66) - [Mejora de Interfaz de los filtros y del listado de amigos recuperado de los PODs.](https://github.com/Arquisoft/lomap_es3c/issues/73) - [Función del apartado de "Mi cuenta" con la interfaz necesaria.](https://github.com/Arquisoft/lomap_es3c/issues/69) - [Función del desactivado de cuentas.](https://github.com/Arquisoft/lomap_es3c/issues/70)
non_test
trabajo realizado por alex para la tercera entrega a continuación se muestra el conjunto de tareas que he realizado durante la segunda entrega de la asignatura actualización de la parte de la documentación solicitada
0
207,264
15,801,261,842
IssuesEvent
2021-04-03 03:54:55
Hamlib/Hamlib
https://api.github.com/repos/Hamlib/Hamlib
closed
FTDX5000 VFO inaccessible
JTDX WSJTX bug critical needs test
20210318_145715.594(0) handle_transceiver_update started, current rig state: PTT On, requested state: PTT On, m_tx_when_ready: false, g_iptt=1 20210318_145715.605(0) handle_transceiver_update PTT On Split On s.frequency:50313000 s.tx_frequency:50313000 20210318_145728.933(0) handle_transceiver_update started, current rig state: PTT On, requested state: PTT Off, m_tx_when_ready: false, g_iptt=0 20210318_145728.951(0) handle_transceiver_update PTT Off Split On s.frequency:50313000 s.tx_frequency:50313000 20210318_145728.964(0) Halt Tx triggered at RX: Rig control error: Hamlib error: Target VFO unaccessible while setting frequency
1.0
FTDX5000 VFO inaccessible - 20210318_145715.594(0) handle_transceiver_update started, current rig state: PTT On, requested state: PTT On, m_tx_when_ready: false, g_iptt=1 20210318_145715.605(0) handle_transceiver_update PTT On Split On s.frequency:50313000 s.tx_frequency:50313000 20210318_145728.933(0) handle_transceiver_update started, current rig state: PTT On, requested state: PTT Off, m_tx_when_ready: false, g_iptt=0 20210318_145728.951(0) handle_transceiver_update PTT Off Split On s.frequency:50313000 s.tx_frequency:50313000 20210318_145728.964(0) Halt Tx triggered at RX: Rig control error: Hamlib error: Target VFO unaccessible while setting frequency
test
vfo inaccessible handle transceiver update started current rig state ptt on requested state ptt on m tx when ready false g iptt handle transceiver update ptt on split on s frequency s tx frequency handle transceiver update started current rig state ptt on requested state ptt off m tx when ready false g iptt handle transceiver update ptt off split on s frequency s tx frequency halt tx triggered at rx rig control error hamlib error target vfo unaccessible while setting frequency
1
829,502
31,881,374,027
IssuesEvent
2023-09-16 12:20:30
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
vision.snippets.face_detection.faces_test: test_main failed
priority: p1 type: bug api: vision samples flakybot: issue
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 291e31411bd48a4261a5166bd55f08884e3d4124 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/1e1e2361-33d2-41c1-b560-07b83c37fe71), [Sponge](http://sponge2/1e1e2361-33d2-41c1-b560-07b83c37fe71) status: failed <details><summary>Test output</summary><br><pre>Traceback (most recent call last): File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable return callable_(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/grpc/_channel.py", line 1161, in __call__ return _end_unary_response_blocking(state, call, False, None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/grpc/_channel.py", line 1004, in _end_unary_response_blocking raise _InactiveRpcError(state) # pytype: disable=not-instantiable ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "502:Bad Gateway" debug_error_string = "UNKNOWN:Error received from peer {created_time:"2023-09-15T13:46:18.134297403+00:00", grpc_status:14, grpc_message:"502:Bad Gateway"}" > The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/vision/snippets/face_detection/faces_test.py", line 33, in test_main main(in_file, out_file, 10) File "/workspace/vision/snippets/face_detection/faces.py", line 87, in main faces = detect_face(image, max_results) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/faces.py", line 45, in detect_face return client.face_detection(image=image, max_results=max_results).face_annotations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/cloud/vision_helpers/decorators.py", line 112, in inner response = self.annotate_image( ^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/cloud/vision_helpers/__init__.py", line 76, in annotate_image r = self.batch_annotate_images( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/cloud/vision_v1/services/image_annotator/client.py", line 564, in batch_annotate_images response = rpc( ^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__ return wrapped_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.ServiceUnavailable: 503 502:Bad Gateway</pre></details>
1.0
vision.snippets.face_detection.faces_test: test_main failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 291e31411bd48a4261a5166bd55f08884e3d4124 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/1e1e2361-33d2-41c1-b560-07b83c37fe71), [Sponge](http://sponge2/1e1e2361-33d2-41c1-b560-07b83c37fe71) status: failed <details><summary>Test output</summary><br><pre>Traceback (most recent call last): File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable return callable_(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/grpc/_channel.py", line 1161, in __call__ return _end_unary_response_blocking(state, call, False, None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/grpc/_channel.py", line 1004, in _end_unary_response_blocking raise _InactiveRpcError(state) # pytype: disable=not-instantiable ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "502:Bad Gateway" debug_error_string = "UNKNOWN:Error received from peer {created_time:"2023-09-15T13:46:18.134297403+00:00", grpc_status:14, grpc_message:"502:Bad Gateway"}" > The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/vision/snippets/face_detection/faces_test.py", line 33, in test_main main(in_file, out_file, 10) File "/workspace/vision/snippets/face_detection/faces.py", line 87, in main faces = detect_face(image, max_results) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/faces.py", line 45, in detect_face return client.face_detection(image=image, max_results=max_results).face_annotations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/cloud/vision_helpers/decorators.py", line 112, in inner response = self.annotate_image( ^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/cloud/vision_helpers/__init__.py", line 76, in annotate_image r = self.batch_annotate_images( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/cloud/vision_v1/services/image_annotator/client.py", line 564, in batch_annotate_images response = rpc( ^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__ return wrapped_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/vision/snippets/face_detection/.nox/py-3-11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.ServiceUnavailable: 503 502:Bad Gateway</pre></details>
non_test
vision snippets face detection faces test test main failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace vision snippets face detection nox py lib site packages google api core grpc helpers py line in error remapped callable return callable args kwargs file workspace vision snippets face detection nox py lib site packages grpc channel py line in call return end unary response blocking state call false none file workspace vision snippets face detection nox py lib site packages grpc channel py line in end unary response blocking raise inactiverpcerror state pytype disable not instantiable grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with status statuscode unavailable details bad gateway debug error string unknown error received from peer created time grpc status grpc message bad gateway the above exception was the direct cause of the following exception traceback most recent call last file workspace vision snippets face detection faces test py line in test main main in file out file file workspace vision snippets face detection faces py line in main faces detect face image max results file workspace vision snippets face detection faces py line in detect face return client face detection image image max results max results face annotations file workspace vision snippets face detection nox py lib site packages google cloud vision helpers decorators py line in inner response self annotate image file workspace vision snippets face detection nox py lib site packages google cloud vision helpers init py line in annotate image r self batch annotate images file workspace vision snippets face detection nox py lib site packages google cloud vision services image annotator client py line in batch annotate images response rpc file workspace vision snippets face detection nox py lib site packages google api core gapic method py line in call return wrapped func args kwargs file workspace vision snippets face detection nox py lib site packages google api core grpc helpers py line in error remapped callable raise exceptions from grpc error exc from exc google api core exceptions serviceunavailable bad gateway
0
223,119
17,568,295,011
IssuesEvent
2021-08-14 06:10:59
druxt/druxt-entity
https://api.github.com/repos/druxt/druxt-entity
closed
Add DruxtFieldImageUrl component.
enhancement needs tests
**Is your feature request related to a problem? Please describe.** DruxtFieldImageUrl component is missing. **Describe the solution you'd like** Add DruxtFieldImageUrl component. **Describe alternatives you've considered** N/A **Additional context** N/A
1.0
Add DruxtFieldImageUrl component. - **Is your feature request related to a problem? Please describe.** DruxtFieldImageUrl component is missing. **Describe the solution you'd like** Add DruxtFieldImageUrl component. **Describe alternatives you've considered** N/A **Additional context** N/A
test
add druxtfieldimageurl component is your feature request related to a problem please describe druxtfieldimageurl component is missing describe the solution you d like add druxtfieldimageurl component describe alternatives you ve considered n a additional context n a
1
103,169
11,341,238,770
IssuesEvent
2020-01-23 08:54:32
spring-projects/spring-boot
https://api.github.com/repos/spring-projects/spring-boot
closed
Update documentation on excluding an auto-configuration to recommend exclude on SpringBootApplication
status: forward-port type: documentation
Forward port of issue #19855 to 2.2.5.
1.0
Update documentation on excluding an auto-configuration to recommend exclude on SpringBootApplication - Forward port of issue #19855 to 2.2.5.
non_test
update documentation on excluding an auto configuration to recommend exclude on springbootapplication forward port of issue to
0
215,545
16,608,425,845
IssuesEvent
2021-06-02 08:00:22
alphagov/govuk-design-system
https://api.github.com/repos/alphagov/govuk-design-system
opened
Determine guidelines on focus states
accessibility documentation guidance
## What We want to improve the consistency of focus states and expected focus behaviour across GOV.UK. As a team, we need to set some principles for our ideal focus states, when and how they are used. ## Why In order to publish documentation on focus states we need to determine some appropriate rules/guides. ## Who needs to know about this Designer, Developer, all the team ## Further detail - [Focus states - an audit ](https://docs.google.com/presentation/d/1kXxnh47Jqw6ZcNzyyTewR_JPwCHyreTMdBGNvE1huWo/edit#slide=id.gdb4961032f_0_30) - [Understanding focus states](https://design-system.service.gov.uk/get-started/focus-states/) in get started guidance ## Done when - [ ] Collect examples of focus states - [ ] Workshop with team - [ ] Defined some clear principles - [ ] Have enough information to start documentation
1.0
Determine guidelines on focus states - ## What We want to improve the consistency of focus states and expected focus behaviour across GOV.UK. As a team, we need to set some principles for our ideal focus states, when and how they are used. ## Why In order to publish documentation on focus states we need to determine some appropriate rules/guides. ## Who needs to know about this Designer, Developer, all the team ## Further detail - [Focus states - an audit ](https://docs.google.com/presentation/d/1kXxnh47Jqw6ZcNzyyTewR_JPwCHyreTMdBGNvE1huWo/edit#slide=id.gdb4961032f_0_30) - [Understanding focus states](https://design-system.service.gov.uk/get-started/focus-states/) in get started guidance ## Done when - [ ] Collect examples of focus states - [ ] Workshop with team - [ ] Defined some clear principles - [ ] Have enough information to start documentation
non_test
determine guidelines on focus states what we want to improve the consistency of focus states and expected focus behaviour across gov uk as a team we need to set some principles for our ideal focus states when and how they are used why in order to publish documentation on focus states we need to determine some appropriate rules guides who needs to know about this designer developer all the team further detail in get started guidance done when collect examples of focus states workshop with team defined some clear principles have enough information to start documentation
0
350,806
25,000,465,444
IssuesEvent
2022-11-03 07:19:51
AY2223S1-CS2103T-T15-4/tp
https://api.github.com/repos/AY2223S1-CS2103T-T15-4/tp
closed
[PE-D][Tester A] Specific non-positive integer accepted for Range command
severity.VeryLow bug.documentation
In the user guide under the `:range` command (Format 2), it is specified that `NUMBER_OF_DAYS` can only take positive integers. However, running the following command with the number 0 is accepted. Input: `:range last/0` <br>Output: Exercises that are dated on the current day ![image.png](https://raw.githubusercontent.com/TYKCodes/ped/main/files/1314a12f-f48d-4d8e-9f09-c0924e1c5a39.png) <!--session: 1666943688479-5fd6d040-dd85-4c51-b02e-fa7d83a63dba--> <!--Version: Web v3.4.4--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: TYKCodes/ped#4
1.0
[PE-D][Tester A] Specific non-positive integer accepted for Range command - In the user guide under the `:range` command (Format 2), it is specified that `NUMBER_OF_DAYS` can only take positive integers. However, running the following command with the number 0 is accepted. Input: `:range last/0` <br>Output: Exercises that are dated on the current day ![image.png](https://raw.githubusercontent.com/TYKCodes/ped/main/files/1314a12f-f48d-4d8e-9f09-c0924e1c5a39.png) <!--session: 1666943688479-5fd6d040-dd85-4c51-b02e-fa7d83a63dba--> <!--Version: Web v3.4.4--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: TYKCodes/ped#4
non_test
specific non positive integer accepted for range command in the user guide under the range command format it is specified that number of days can only take positive integers however running the following command with the number is accepted input range last output exercises that are dated on the current day labels severity verylow type documentationbug original tykcodes ped
0
723,729
24,906,688,063
IssuesEvent
2022-10-29 10:41:47
bounswe/bounswe2022group1
https://api.github.com/repos/bounswe/bounswe2022group1
closed
Implementing login, logout and change password functionalities
Priority: High Status: Completed Backend
**Issue Description:** For the coherency of our application, backend app is vital and login, logout and change password functionalities are not yet fully implemented. **Tasks to Do:** - [ ] Implement login - [ ] Implement logout - [ ] Implement change-password *Task Deadline:* 29.10.2022 *Final Situation:* Done *Reviewer:* @kkadirkkalkan *Review Deadline:* 29.10.2022
1.0
Implementing login, logout and change password functionalities - **Issue Description:** For the coherency of our application, backend app is vital and login, logout and change password functionalities are not yet fully implemented. **Tasks to Do:** - [ ] Implement login - [ ] Implement logout - [ ] Implement change-password *Task Deadline:* 29.10.2022 *Final Situation:* Done *Reviewer:* @kkadirkkalkan *Review Deadline:* 29.10.2022
non_test
implementing login logout and change password functionalities issue description for the coherency of our application backend app is vital and login logout and change password functionalities are not yet fully implemented tasks to do implement login implement logout implement change password task deadline final situation done reviewer kkadirkkalkan review deadline
0
76,304
26,353,907,980
IssuesEvent
2023-01-11 08:10:34
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Error when running yarn build:native
T-Defect
### Steps to reproduce I am trying to create a Element desktop client for windows. When running <yarn build:native>, the following error occurs: crypto\aes\aesni-mb-x86_64.obj : fatal error LNK1112: Module machine type "x64" conflicts with target machine type "x86". No matter with which initialisation: call "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86 or call "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x64 I am a little desperate... ### Outcome C:\Users\xxx\Documents\Programmierung\public_repos\element-desktop>call "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86 ********************************************************************** ** Visual Studio 2019 Developer Command Prompt v16.7.28 ** Copyright (c) 2020 Microsoft Corporation ********************************************************************** [vcvarsall.bat] Environment initialized for: 'x86' C:\Users\xxx\Documents\Programmierung\public_repos\element-desktop>yarn build:native yarn run v1.22.19 $ yarn run hak $ ts-node scripts/hak/index.ts • loaded configuration file=package.json ("build" field) • loaded configuration file=package.json ("build" field) hak check: matrix-seshat hak check: keytar hak fetch: matrix-seshat hak fetch: keytar hak fetchDeps: matrix-seshat hak fetchDeps: keytar hak build: matrix-seshat getTargetId: --host=x86_64-pc-windows-msvc getTargetArch: --host=x64 Building openssl in C:\Users\xxx\Documents\Programmierung\public_repos\element-desktop\.hak\matrix-seshat\x86_64-pc-windows-msvc\openssl-1.1.1f Configuring OpenSSL version 1.1.1f (0x1010106fL) for VC-WIN64A Using os-specific seed configuration Creating configdata.pm Creating makefile ********************************************************************** *** *** *** OpenSSL has been successfully configured *** *** *** *** If you encounter a problem while building, please open an *** *** issue on GitHub <https://github.com/openssl/openssl/issues> *** *** and include the output from the following command: *** *** *** *** perl configdata.pm --dump *** *** *** *** (If you are new to OpenSSL, you might want to consult the *** *** 'Troubleshooting' section in the INSTALL file first) *** *** *** ********************************************************************** Microsoft (R) Program Maintenance Utility, Version 14.27.29120.0 Copyright (C) Microsoft Corporation. Alle Rechte vorbehalten. "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\crypto\bn_conf.h.in" > include\crypto\bn_conf.h "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\crypto\dso_conf.h.in" > include\crypto\dso_conf.h "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\openssl\opensslconf.h.in" > include\openssl\opensslconf.h "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\nmake.exe" / depend && "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\nmake.exe" / _build_libs Microsoft (R) Program Maintenance Utility, Version 14.27.29120.0 Copyright (C) Microsoft Corporation. Alle Rechte vorbehalten. Microsoft (R) Program Maintenance Utility, Version 14.27.29120.0 Copyright (C) Microsoft Corporation. Alle Rechte vorbehalten. 
"C:\Strawberry\perl\bin\perl.exe" "util\mkbuildinf.pl" "cl /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSL_CPUID_OBJ" -D"OPENSSL_IA32_SSE2" -D"OPENSSL_BN_ASM_MONT" -D"OPENSSL_BN_ASM_MONT5" -D"OPENSSL_BN_ASM_GF2m" -D"SHA1_ASM" -D"SHA256_ASM" -D"SHA512_ASM" -D"KECCAK1600_ASM" -D"RC4_ASM" -D"MD5_ASM" -D"AESNI_ASM" -D"VPAES_ASM" -D"GHASH_ASM" -D"ECP_NISTZ256_ASM" -D"X25519_ASM" -D"POLY1305_ASM"" "VC-WIN64A" > crypto\buildinf.h cl /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 /I "." /I "include" /I "crypto" -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSL_CPUID_OBJ" -D"OPENSSL_IA32_SSE2" -D"OPENSSL_BN_ASM_MONT" -D"OPENSSL_BN_ASM_MONT5" -D"OPENSSL_BN_ASM_GF2m" -D"SHA1_ASM" -D"SHA256_ASM" -D"SHA512_ASM" -D"KECCAK1600_ASM" -D"RC4_ASM" -D"MD5_ASM" -D"AESNI_ASM" -D"VPAES_ASM" -D"GHASH_ASM" -D"ECP_NISTZ256_ASM" -D"X25519_ASM" -D"POLY1305_ASM" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Users\\xxx\\Documents\\Programmierung\\public_repos\\element-desktop\\.hak\\matrix-seshat\\x86_64-pc-windows-msvc\\opt\\lib\\engines-1_1\"" -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -D"OPENSSL_API_COMPAT=0x10100000L" -c /Focrypto\cversion.obj "crypto\cversion.c" cversion.c cl /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 /I "." /I "include" /I "crypto" -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSL_CPUID_OBJ" -D"OPENSSL_IA32_SSE2" -D"OPENSSL_BN_ASM_MONT" -D"OPENSSL_BN_ASM_MONT5" -D"OPENSSL_BN_ASM_GF2m" -D"SHA1_ASM" -D"SHA256_ASM" -D"SHA512_ASM" -D"KECCAK1600_ASM" -D"RC4_ASM" -D"MD5_ASM" -D"AESNI_ASM" -D"VPAES_ASM" -D"GHASH_ASM" -D"ECP_NISTZ256_ASM" -D"X25519_ASM" -D"POLY1305_ASM" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Users\\xxx\\Documents\\Programmierung\\public_repos\\element-desktop\\.hak\\matrix-seshat\\x86_64-pc-windows-msvc\\opt\\lib\\engines-1_1\"" -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -D"OPENSSL_API_COMPAT=0x10100000L" /Zs /showIncludes "crypto\cversion.c" 2>&1 > crypto\cversion.d lib /nologo /out:libcrypto.lib @C:\Users\xxx~1\AppData\Local\Temp\nm22EF.tmp crypto\aes\aesni-mb-x86_64.obj : fatal error LNK1112: Module machine type "x64" conflicts with target machine type "x86". NMAKE : fatal error U1077: ""C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\lib.EXE"": Rückgabe-Code "0x458" Stop. NMAKE : fatal error U1077: ""C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\nmake.exe"": Rückgabe-Code "0x2" Stop. 2 error Command failed with exit code 1. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command. error Command failed with exit code 1. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command. ### Operating system Windows 10 ### Application version _No response_ ### How did you install the app? _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
Error when running yarn build:native - ### Steps to reproduce I am trying to create a Element desktop client for windows. When running <yarn build:native>, the following error occurs: crypto\aes\aesni-mb-x86_64.obj : fatal error LNK1112: Module machine type "x64" conflicts with target machine type "x86". No matter with which initialisation: call "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86 or call "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x64 I am a little desperate... ### Outcome C:\Users\xxx\Documents\Programmierung\public_repos\element-desktop>call "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86 ********************************************************************** ** Visual Studio 2019 Developer Command Prompt v16.7.28 ** Copyright (c) 2020 Microsoft Corporation ********************************************************************** [vcvarsall.bat] Environment initialized for: 'x86' C:\Users\xxx\Documents\Programmierung\public_repos\element-desktop>yarn build:native yarn run v1.22.19 $ yarn run hak $ ts-node scripts/hak/index.ts • loaded configuration file=package.json ("build" field) • loaded configuration file=package.json ("build" field) hak check: matrix-seshat hak check: keytar hak fetch: matrix-seshat hak fetch: keytar hak fetchDeps: matrix-seshat hak fetchDeps: keytar hak build: matrix-seshat getTargetId: --host=x86_64-pc-windows-msvc getTargetArch: --host=x64 Building openssl in C:\Users\xxx\Documents\Programmierung\public_repos\element-desktop\.hak\matrix-seshat\x86_64-pc-windows-msvc\openssl-1.1.1f Configuring OpenSSL version 1.1.1f (0x1010106fL) for VC-WIN64A Using os-specific seed configuration Creating configdata.pm Creating makefile ********************************************************************** *** *** *** OpenSSL has been successfully configured *** *** *** *** If you encounter a problem while building, please open an *** *** issue on GitHub <https://github.com/openssl/openssl/issues> *** *** and include the output from the following command: *** *** *** *** perl configdata.pm --dump *** *** *** *** (If you are new to OpenSSL, you might want to consult the *** *** 'Troubleshooting' section in the INSTALL file first) *** *** *** ********************************************************************** Microsoft (R) Program Maintenance Utility, Version 14.27.29120.0 Copyright (C) Microsoft Corporation. Alle Rechte vorbehalten. "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\crypto\bn_conf.h.in" > include\crypto\bn_conf.h "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\crypto\dso_conf.h.in" > include\crypto\dso_conf.h "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\openssl\opensslconf.h.in" > include\openssl\opensslconf.h "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\nmake.exe" / depend && "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\nmake.exe" / _build_libs Microsoft (R) Program Maintenance Utility, Version 14.27.29120.0 Copyright (C) Microsoft Corporation. Alle Rechte vorbehalten. Microsoft (R) Program Maintenance Utility, Version 14.27.29120.0 Copyright (C) Microsoft Corporation. Alle Rechte vorbehalten. 
"C:\Strawberry\perl\bin\perl.exe" "util\mkbuildinf.pl" "cl /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSL_CPUID_OBJ" -D"OPENSSL_IA32_SSE2" -D"OPENSSL_BN_ASM_MONT" -D"OPENSSL_BN_ASM_MONT5" -D"OPENSSL_BN_ASM_GF2m" -D"SHA1_ASM" -D"SHA256_ASM" -D"SHA512_ASM" -D"KECCAK1600_ASM" -D"RC4_ASM" -D"MD5_ASM" -D"AESNI_ASM" -D"VPAES_ASM" -D"GHASH_ASM" -D"ECP_NISTZ256_ASM" -D"X25519_ASM" -D"POLY1305_ASM"" "VC-WIN64A" > crypto\buildinf.h cl /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 /I "." /I "include" /I "crypto" -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSL_CPUID_OBJ" -D"OPENSSL_IA32_SSE2" -D"OPENSSL_BN_ASM_MONT" -D"OPENSSL_BN_ASM_MONT5" -D"OPENSSL_BN_ASM_GF2m" -D"SHA1_ASM" -D"SHA256_ASM" -D"SHA512_ASM" -D"KECCAK1600_ASM" -D"RC4_ASM" -D"MD5_ASM" -D"AESNI_ASM" -D"VPAES_ASM" -D"GHASH_ASM" -D"ECP_NISTZ256_ASM" -D"X25519_ASM" -D"POLY1305_ASM" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Users\\xxx\\Documents\\Programmierung\\public_repos\\element-desktop\\.hak\\matrix-seshat\\x86_64-pc-windows-msvc\\opt\\lib\\engines-1_1\"" -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -D"OPENSSL_API_COMPAT=0x10100000L" -c /Focrypto\cversion.obj "crypto\cversion.c" cversion.c cl /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 /I "." /I "include" /I "crypto" -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSL_CPUID_OBJ" -D"OPENSSL_IA32_SSE2" -D"OPENSSL_BN_ASM_MONT" -D"OPENSSL_BN_ASM_MONT5" -D"OPENSSL_BN_ASM_GF2m" -D"SHA1_ASM" -D"SHA256_ASM" -D"SHA512_ASM" -D"KECCAK1600_ASM" -D"RC4_ASM" -D"MD5_ASM" -D"AESNI_ASM" -D"VPAES_ASM" -D"GHASH_ASM" -D"ECP_NISTZ256_ASM" -D"X25519_ASM" -D"POLY1305_ASM" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Users\\xxx\\Documents\\Programmierung\\public_repos\\element-desktop\\.hak\\matrix-seshat\\x86_64-pc-windows-msvc\\opt\\lib\\engines-1_1\"" -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -D"OPENSSL_API_COMPAT=0x10100000L" /Zs /showIncludes "crypto\cversion.c" 2>&1 > crypto\cversion.d lib /nologo /out:libcrypto.lib @C:\Users\xxx~1\AppData\Local\Temp\nm22EF.tmp crypto\aes\aesni-mb-x86_64.obj : fatal error LNK1112: Module machine type "x64" conflicts with target machine type "x86". NMAKE : fatal error U1077: ""C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\lib.EXE"": Rückgabe-Code "0x458" Stop. NMAKE : fatal error U1077: ""C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.27.29110\bin\HostX86\x86\nmake.exe"": Rückgabe-Code "0x2" Stop. 2 error Command failed with exit code 1. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command. error Command failed with exit code 1. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command. ### Operating system Windows 10 ### Application version _No response_ ### How did you install the app? _No response_ ### Homeserver _No response_ ### Will you send logs? No
non_test
error when running yarn build native steps to reproduce i am trying to create a element desktop client for windows when running the following error occurs crypto aes aesni mb obj fatal error module machine type conflicts with target machine type no matter with which initialisation call c program files microsoft visual studio buildtools vc auxiliary build vcvarsall bat or call c program files microsoft visual studio buildtools vc auxiliary build vcvarsall bat i am a little desperate outcome c users xxx documents programmierung public repos element desktop call c program files microsoft visual studio buildtools vc auxiliary build vcvarsall bat visual studio developer command prompt copyright c microsoft corporation environment initialized for c users xxx documents programmierung public repos element desktop yarn build native yarn run yarn run hak ts node scripts hak index ts • loaded configuration file package json build field • loaded configuration file package json build field hak check matrix seshat hak check keytar hak fetch matrix seshat hak fetch keytar hak fetchdeps matrix seshat hak fetchdeps keytar hak build matrix seshat gettargetid host pc windows msvc gettargetarch host building openssl in c users xxx documents programmierung public repos element desktop hak matrix seshat pc windows msvc openssl configuring openssl version for vc using os specific seed configuration creating configdata pm creating makefile openssl has been successfully configured if you encounter a problem while building please open an issue on github and include the output from the following command perl configdata pm dump if you are new to openssl you might want to consult the troubleshooting section in the install file first microsoft r program maintenance utility version copyright c microsoft corporation alle rechte vorbehalten c strawberry perl bin perl exe i mconfigdata util dofile pl omakefile include crypto bn conf h in include crypto bn conf h c strawberry perl bin perl exe i mconfigdata util dofile pl omakefile include crypto dso conf h in include crypto dso conf h c strawberry perl bin perl exe i mconfigdata util dofile pl omakefile include openssl opensslconf h in include openssl opensslconf h c program files microsoft visual studio buildtools vc tools msvc bin nmake exe depend c program files microsoft visual studio buildtools vc tools msvc bin nmake exe build libs microsoft r program maintenance utility version copyright c microsoft corporation alle rechte vorbehalten microsoft r program maintenance utility version copyright c microsoft corporation alle rechte vorbehalten c strawberry perl bin perl exe util mkbuildinf pl cl zi fdossl static pdb mt zl gf gy nologo d l endian d openssl pic d openssl cpuid obj d openssl d openssl bn asm mont d openssl bn asm d openssl bn asm d asm d asm d asm d asm d asm d asm d aesni asm d vpaes asm d ghash asm d ecp asm d asm d asm vc crypto buildinf h cl zi fdossl static pdb mt zl gf gy nologo i i include i crypto d l endian d openssl pic d openssl cpuid obj d openssl d openssl bn asm mont d openssl bn asm d openssl bn asm d asm d asm d asm d asm d asm d asm d aesni asm d vpaes asm d ghash asm d ecp asm d asm d asm d openssldir c program files common files ssl d enginesdir c users xxx documents programmierung public repos element desktop hak matrix seshat pc windows msvc opt lib engines d openssl sys d lean and mean d unicode d unicode d crt secure no deprecate d winsock deprecated no warnings d ndebug d openssl api compat c focrypto cversion obj crypto cversion c 
cversion c cl zi fdossl static pdb mt zl gf gy nologo i i include i crypto d l endian d openssl pic d openssl cpuid obj d openssl d openssl bn asm mont d openssl bn asm d openssl bn asm d asm d asm d asm d asm d asm d asm d aesni asm d vpaes asm d ghash asm d ecp asm d asm d asm d openssldir c program files common files ssl d enginesdir c users xxx documents programmierung public repos element desktop hak matrix seshat pc windows msvc opt lib engines d openssl sys d lean and mean d unicode d unicode d crt secure no deprecate d winsock deprecated no warnings d ndebug d openssl api compat zs showincludes crypto cversion c crypto cversion d lib nologo out libcrypto lib c users xxx appdata local temp tmp crypto aes aesni mb obj fatal error module machine type conflicts with target machine type nmake fatal error c program files microsoft visual studio buildtools vc tools msvc bin lib exe rückgabe code stop nmake fatal error c program files microsoft visual studio buildtools vc tools msvc bin nmake exe rückgabe code stop error command failed with exit code info visit for documentation about this command error command failed with exit code info visit for documentation about this command operating system windows application version no response how did you install the app no response homeserver no response will you send logs no
0
68,459
9,188,393,147
IssuesEvent
2019-03-06 07:15:27
EventStore/EventStore
https://api.github.com/repos/EventStore/EventStore
closed
Large Index Merge to be triggered manually
area/documentation detectability/easy kind/enhancement tracking/In Progress
During an Index Merge operation, writes can be slowed down. To avoid this issue, we can call merges manually; in this way it will be possible to schedule a job at midnight. There will be a command line option (--no-automerge-index or similar) and an HTTP endpoint that forces a merge to happen.
1.0
Large Index Merge to be triggered manually - During an Index Merge operation, writes can be slowed down. To avoid this issue, we can call merges manually; in this way it will be possible to schedule a job at midnight. There will be a command line option (--no-automerge-index or similar) and an HTTP endpoint that forces a merge to happen.
non_test
large index merge to be triggered manually during an index merge operation writes can be slowed down to avoid this issue we can call merges manually in this way it will be possible to schedule a job at midnight there will be a command line option no automerge index or similar then an http endpoint that forces a merge to happen
0
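The record above proposes exposing index merges through a command line option and an HTTP endpoint so that the merge can be scheduled for a quiet period such as midnight. Purely as an illustration of that scheduling idea — the host, port, and endpoint path below are hypothetical placeholders, not EventStore's documented API — a minimal Java sketch of a nightly trigger could look like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyMergeTrigger {
    // Hypothetical endpoint; the real path would be whatever the server exposes.
    private static final URI MERGE_ENDPOINT =
            URI.create("http://localhost:2113/admin/mergeindexes");

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Minutes remaining until the next midnight.
        long initialDelay = Duration.between(
                LocalDateTime.now(),
                LocalDate.now().plusDays(1).atTime(LocalTime.MIDNIGHT)).toMinutes();

        // Fire once at the next midnight, then every 24 hours.
        scheduler.scheduleAtFixedRate(NightlyMergeTrigger::triggerMerge,
                initialDelay, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
    }

    private static void triggerMerge() {
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(MERGE_ENDPOINT)
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Merge trigger returned HTTP " + response.statusCode());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```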
804,680
29,497,625,744
IssuesEvent
2023-06-02 18:26:10
googleapis/nodejs-pubsub
https://api.github.com/repos/googleapis/nodejs-pubsub
closed
Calling Subscription.setOptions won't update the maxStreams count
priority: p2 type: bug api: pubsub
Due to other recent changes (https://github.com/googleapis/nodejs-pubsub/issues/1712), the MessageStream class will only decide how many streams to use when it's constructed, so this setting will only work if passed to `pubsub.subscription()`. `MessageStream.fillStreams()` needs to pay attention to those config changes.
1.0
Calling Subscription.setOptions won't update the maxStreams count - Due to other recent changes (https://github.com/googleapis/nodejs-pubsub/issues/1712), the MessageStream class will only decide how many streams to use when it's constructed, so this setting will only work if passed to `pubsub.subscription()`. `MessageStream.fillStreams()` needs to pay attention to those config changes.
non_test
calling subscription setoptions won t update the maxstreams count due to other recent changes the messagestream class will only decide how many streams to use when it s constructed so this setting will only work if passed to pubsub subscription messagestream fillstreams needs to pay attention to those config changes
0
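The record above notes that MessageStream only decides its stream count at construction, so a later setOptions() call has no effect until fillStreams() consults the updated configuration. As a generic sketch of that pattern — written in Java for illustration, not the actual Node.js library code, and with hypothetical class and method names — the filler re-reads the current setting on every call instead of caching it once:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Generic illustration: instead of capturing the stream count once in the
 * constructor, the filler re-reads the current options every time it tops
 * up or trims the stream pool.
 */
public class StreamPool {
    private final AtomicInteger maxStreams = new AtomicInteger(5);
    private final List<String> streams = new ArrayList<>();

    /** Analogous to setOptions(): later changes must be picked up. */
    public void setMaxStreams(int value) {
        maxStreams.set(value);
    }

    /** Analogous to fillStreams(): consult the current setting, not a cached one. */
    public synchronized void fillStreams() {
        int target = maxStreams.get();
        while (streams.size() < target) {
            streams.add("stream-" + streams.size());
        }
        while (streams.size() > target) {
            streams.remove(streams.size() - 1);
        }
    }

    public synchronized int size() {
        return streams.size();
    }

    public static void main(String[] args) {
        StreamPool pool = new StreamPool();
        pool.fillStreams();
        System.out.println("initial streams: " + pool.size()); // 5
        pool.setMaxStreams(2);
        pool.fillStreams();
        System.out.println("after setOptions: " + pool.size()); // 2
    }
}
```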
176,855
13,655,328,685
IssuesEvent
2020-09-27 21:46:48
AliceGrey/OctoprintKlipperPlugin
https://api.github.com/repos/AliceGrey/OctoprintKlipperPlugin
closed
Editing printer.cfg from within Octoklipper nulls the printer.cfg
bug testing needed
OctoPi, OctoPrint version 1.4.2, OctoPi version 0.17.0 [octoprint.log](https://github.com/AliceGrey/OctoprintKlipperPlugin/files/5230625/octoprint.log) When updating the Klipper printer.cfg from within OctoKlipper, it nulls your printer.cfg config. I'm not sure exactly how that happens; the file chmod is 0777. I've confirmed it twice by accident, by editing and having to recover/remake my file.
1.0
Editing printer.cfg from within Octoklipper nulls the printer.cfg - OctoPi, OctoPrint version 1.4.2, OctoPi version 0.17.0 [octoprint.log](https://github.com/AliceGrey/OctoprintKlipperPlugin/files/5230625/octoprint.log) When updating the Klipper printer.cfg from within OctoKlipper, it nulls your printer.cfg config. I'm not sure exactly how that happens; the file chmod is 0777. I've confirmed it twice by accident, by editing and having to recover/remake my file.
test
editing printer cfg from within octoklipper nulls the printer cfg octopi octoprint version octopi version when updating the klipper printer cfg from within octoklipper it nulls your printer cfg config im not sure exactly how that happens the file chmod is ive confirmed it twice on accident by editing and having to recover remake my file
1
356,684
10,596,380,080
IssuesEvent
2019-10-09 21:06:32
Polymer/tools
https://api.github.com/repos/Polymer/tools
closed
PR & issue templates
Priority: Medium Status: Available Type: Maintenance
After a migration to a monorepo there are multiple PR/issue templates. It's still possible to keep multiple templates using [query parameters](https://help.github.com/articles/about-automation-for-issues-and-pull-requests-with-query-parameters/), but I see no point in keeping these in each package separately. Do you guys agree?
1.0
PR & issue templates - After a migration to a monorepo there are multiple PR/issue templates. It's still possible to keep multiple templates using [query parameters](https://help.github.com/articles/about-automation-for-issues-and-pull-requests-with-query-parameters/), but I see no point in keeping these in each package separately. Do you guys agree?
non_test
pr issue templates after a migration to a monorepo there are multiple pr issue templates it s still possible to keep multiple templates using but i see no point in keeping these in each package separately do you guys agree
0
30,748
4,651,075,995
IssuesEvent
2016-10-03 08:39:12
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
IndexSplitBrainTest.testIndexDoesNotReturnStaleResultsAfterSplit
Team: Core Type: Test-Failure
``` java.lang.AssertionError: entry should be in map2 before split at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertNotNull(Assert.java:712) at com.hazelcast.query.impl.IndexSplitBrainTest.testIndexDoesNotReturnStaleResultsAfterSplit(IndexSplitBrainTest.java:74) ``` https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK1.6/com.hazelcast$hazelcast/1057/testReport/junit/com.hazelcast.query.impl/IndexSplitBrainTest/testIndexDoesNotReturnStaleResultsAfterSplit/
1.0
IndexSplitBrainTest.testIndexDoesNotReturnStaleResultsAfterSplit - ``` java.lang.AssertionError: entry should be in map2 before split at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertNotNull(Assert.java:712) at com.hazelcast.query.impl.IndexSplitBrainTest.testIndexDoesNotReturnStaleResultsAfterSplit(IndexSplitBrainTest.java:74) ``` https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK1.6/com.hazelcast$hazelcast/1057/testReport/junit/com.hazelcast.query.impl/IndexSplitBrainTest/testIndexDoesNotReturnStaleResultsAfterSplit/
test
indexsplitbraintest testindexdoesnotreturnstaleresultsaftersplit java lang assertionerror entry should be in before split at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert assertnotnull assert java at com hazelcast query impl indexsplitbraintest testindexdoesnotreturnstaleresultsaftersplit indexsplitbraintest java
1
68,194
7,089,016,856
IssuesEvent
2018-01-12 00:05:36
StuPro-TOSCAna/TOSCAna
https://api.github.com/repos/StuPro-TOSCAna/TOSCAna
closed
Environment variable "database_user" in Lamp-App
plugin test
In the [LampApp implementation](https://github.com/StuPro-TOSCAna/TOSCAna/blob/master/server/src/test/java/org/opentosca/toscana/plugins/testdata/LampApp.java) there is an input for the "database_user" (Line 139): ` appInputs.add(new OperationVariable("database_user", database.getUser().get()));` In the file [mysql-credentials.php](https://github.com/StuPro-TOSCAna/TOSCAna/blob/master/server/src/test/resources/csars/yaml/valid/lamp-input/my_app/mysql-credentials.php) there is a fixed input for the user: `$db_user = "root";` The Cloud Foundry plugin can't set user-provided database credentials. Instead it provides the database credentials as environment variables. So my suggestion is to also set `$db_user` as an environment variable. Therefore we need an additional line in [configure_myphpapp.sh](https://github.com/StuPro-TOSCAna/TOSCAna/blob/master/server/src/test/resources/csars/yaml/valid/lamp-input/my_app/configure_myphpapp.sh): `sed -i "s:DATABASE_USER:${database_user}:g" $CREDENTIALS` @c-mueller unfortunately I only saw this issue just now. If it is OK for every plugin, I will change the files.
1.0
Environment variable "database_user" in Lamp-App - In the [LampApp implementation](https://github.com/StuPro-TOSCAna/TOSCAna/blob/master/server/src/test/java/org/opentosca/toscana/plugins/testdata/LampApp.java) there is a input for the "database_user" (Line 139): ` appInputs.add(new OperationVariable("database_user", database.getUser().get()));` In the file [mysql-credentials.php](https://github.com/StuPro-TOSCAna/TOSCAna/blob/master/server/src/test/resources/csars/yaml/valid/lamp-input/my_app/mysql-credentials.php) there is a fix input for the user: `$db_user = "root";` The Cloud Foundry plugin can´t set userprovided database credentials. Instead it provide the database credentials as environment variables. So my suggestion is to set the `$db_user` also as environment variable. Therefore we need a additional line in [configure_myphpapp.sh](https://github.com/StuPro-TOSCAna/TOSCAna/blob/master/server/src/test/resources/csars/yaml/valid/lamp-input/my_app/configure_myphpapp.sh): `sed -i "s:DATABASE_USER:${database_user}:g" $CREDENTIALS` @c-mueller unfortunately I saw this issue as of now. If it is ok for every plugin I will change the files
test
environment variable database user in lamp app in the there is a input for the database user line appinputs add new operationvariable database user database getuser get in the file there is a fix input for the user db user root the cloud foundry plugin can´t set userprovided database credentials instead it provide the database credentials as environment variables so my suggestion is to set the db user also as environment variable therefore we need a additional line in sed i s database user database user g credentials c mueller unfortunately i saw this issue as of now if it is ok for every plugin i will change the files
1
148,281
5,672,306,472
IssuesEvent
2017-04-12 00:50:34
RoboJackets/robocup-software
https://api.github.com/repos/RoboJackets/robocup-software
opened
Add C++ Geometry 2d Rotate Point to the Python side
area / soccer exp / beginner priority / low status / new type / refactor
C++ defaults to rotating a point around the origin if it is only given an angle. This needs to be added to robocup-py.cpp so it will be available in python too.
1.0
Add C++ Geometry 2d Rotate Point to the Python side - C++ defaults to rotating a point around the origin if it is only given an angle. This needs to be added to robocup-py.cpp so it will be available in python too.
non_test
add c geometry rotate point to the python side c defaults to rotating a point around the origin if it is only given an angle this needs to be added to robocup py cpp so it will be available in python too
0
806,302
29,810,593,530
IssuesEvent
2023-06-16 14:47:03
GoogleCloudPlatform/opentelemetry-operations-java
https://api.github.com/repos/GoogleCloudPlatform/opentelemetry-operations-java
closed
Exporting does not work on Windows due to invalid metric description type name - backslash instead of slash
bug priority: p1
Hello, I'm trying to export metrics from a Windows system. It seems like the metrics exporter library is generating an invalid type name for the metric description. It concatenates the prefix with the metric name using a backslash instead of a slash, which results in the error below. ``` INVALID_ARGUMENT: Field metricDescriptor.type had an invalid value of "custom.googleapis.com\example_counter": The metric type must be a URL-formatted string with a domain and non-empty path. ``` I found the piece of code that is responsible for generating the metric type name. It uses the Path object, which internally uses the default operating system file separator. That's the reason why it's not working on Windows. https://github.com/GoogleCloudPlatform/opentelemetry-operations-java/blob/74a108f594d081c7f059d32c6062a1add4d4b115/exporters/metrics/src/main/java/com/google/cloud/opentelemetry/metric/MetricTranslator.java#L229 I was looking for an easy workaround but couldn't find one. Do you see any possibility for that?
1.0
Exporting does not work on Windows due to invalid metric description type name - backslash instead of slash - Hello, I'm trying to export metrics from a Windows system. It seems like the metrics exporter library is generating an invalid type name for the metric description. It concatenates the prefix with the metric name using a backslash instead of a slash, which results in the error below. ``` INVALID_ARGUMENT: Field metricDescriptor.type had an invalid value of "custom.googleapis.com\example_counter": The metric type must be a URL-formatted string with a domain and non-empty path. ``` I found the piece of code that is responsible for generating the metric type name. It uses the Path object, which internally uses the default operating system file separator. That's the reason why it's not working on Windows. https://github.com/GoogleCloudPlatform/opentelemetry-operations-java/blob/74a108f594d081c7f059d32c6062a1add4d4b115/exporters/metrics/src/main/java/com/google/cloud/opentelemetry/metric/MetricTranslator.java#L229 I was looking for an easy workaround but couldn't find one. Do you see any possibility for that?
non_test
exporting does not work on windows due to invalid metric description type name backslash instead of slash hello i m trying to export metrics from windows system it seems like the metrics exporter library is generating invalid type name for the metric description it concatenate prefix with the metric name using the backslash instead of slash what comes up with the below error invalid argument field metricdescriptor type had an invalid value of custom googleapis com example counter the metric type must be a url formatted string with a domain and non empty path i found the code piece that is responsible for generating metric type name it uses the path object which uses internally default operating system file separator that s the reason why it s not working on the windows i was looking for the easy work around but i couldn t find do you see any possibility for that
0
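The record above attributes the invalid metric type to a Path object that joins segments with the platform file separator. A minimal, self-contained Java sketch — not the library's actual MetricTranslator code — shows the effect and the platform-independent alternative of joining with a literal slash:

```java
import java.nio.file.Paths;

public class MetricTypeJoinDemo {
    public static void main(String[] args) {
        String prefix = "custom.googleapis.com";
        String metric = "example_counter";

        // Paths.get joins segments with the platform file separator, so on
        // Windows this prints "custom.googleapis.com\example_counter",
        // which is not a valid URL-formatted metric type.
        String platformJoined = Paths.get(prefix, metric).toString();

        // A plain string join keeps the URL-style slash on every OS.
        String urlJoined = prefix + "/" + metric;

        System.out.println("Path-based join: " + platformJoined);
        System.out.println("Slash join:      " + urlJoined);
    }
}
```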
192,610
14,621,420,390
IssuesEvent
2020-12-22 21:37:43
joacorapela/svGPFA
https://api.github.com/repos/joacorapela/svGPFA
opened
Inference using true parameters
test existing functionality
Fix the non-variational parameters to their generative values and estimate the variational parameters
1.0
Inference using true parameters - Fix the non-variational parameters to their generative values and estimate the variational parameters
test
inference using true parameters fix the non variational parameters to their generative values and estimate the variational parameters
1
216,833
16,820,517,501
IssuesEvent
2021-06-17 12:37:43
chameleon-system/chameleon-system
https://api.github.com/repos/chameleon-system/chameleon-system
closed
Cross domain frontend user impersonation
Status: Test
**Describe the bug** When trying to impersonate a frontend user on a portal that runs on a different domain than the backend, the login fails since the session is not persisted across domains. **Affected version(s)** all **To Reproduce** Steps to reproduce the behavior: 1. Create a frontend / extranet user that is assigned a portal id of a portal that runs on a different domain than the backend 2. In the backend, open the user edit view 3. Click the "Login as User" button 4. You are redirected to the home page of the portal associated with the user but that user is not logged in. **Expected behavior** You are redirected to the home page of the portal associated with the user logged in as that user. **Technical details** This is due to the user being logged in while being in one session but the new domain context opens a new session.
1.0
Cross domain frontend user impersonation - **Describe the bug** When trying to impersonate a frontend user on a portal that runs on a different domain than the backend, the login fails since the session is not persisted across domains. **Affected version(s)** all **To Reproduce** Steps to reproduce the behavior: 1. Create a frontend / extranet user that is assigned a portal id of a portal that runs on a different domain than the backend 2. In the backend, open the user edit view 3. Click the "Login as User" button 4. You are redirected to the home page of the portal associated with the user but that user is not logged in. **Expected behavior** You are redirected to the home page of the portal associated with the user logged in as that user. **Technical details** This is due to the user being logged in while being in one session but the new domain context opens a new session.
test
cross domain frontend user impersonation describe the bug when trying to impersonate a frontend user on a portal that runs on a different domain than the backend the login fails since the session is not persisted across domains affected version s all to reproduce steps to reproduce the behavior create a frontend extranet user that is assigned a portal id of a portal that runs on a different domain than the backend in the backend open the user edit view click the login as user button you are redirected to the home page of the portal associated with the user but that user is not logged in expected behavior you are redirected to the home page of the portal associated with the user logged in as that user technical details this is due to the user being logged in while being in one session but the new domain context opens a new session
1
369,624
25,860,394,399
IssuesEvent
2022-12-13 16:32:17
Capital-Regional-District/EDRMS
https://api.github.com/repos/Capital-Regional-District/EDRMS
closed
Review and 1st Draft of Project Governance
documentation EDRMS Change Management
#Privacy of Information and Protection: **Your opinion is your personal information. Please do not include any information which identifies you or others in your response.** #Description: Review governance ppt. and project leveling to determine adequate level of governance. Review with project management team and RM. #Outcome: - [ ] Document to inform Project Charter - [ ] Terms of engagement for Project Advisory Council - [ ] Proposed list of members (do we need the ELT to vet this list - unsure of proper process)
1.0
Review and 1st Draft of Project Governance - #Privacy of Information and Protection: **Your opinion is your personal information. Please do not include any information which identifies you or others in your response.** #Description: Review governance ppt. and project leveling to determine adequate level of governance. Review with project management team and RM. #Outcome: - [ ] Document to inform Project Charter - [ ] Terms of engagement for Project Advisory Council - [ ] Proposed list of members (do we need the ELT to vet this list - unsure of proper process)
non_test
review and draft of project governance privacy of information and protection your opinion is your personal information please do not include any information which identifies you or others in your response description review governance ppt and project leveling to determine adequate level of governance review with project management team and rm outcome document to inform project charter terms of engagement for project advisory council proposed list of members do we need the elt to vet this list unsure of proper process
0
34,120
4,891,457,347
IssuesEvent
2016-11-18 16:44:22
red/red
https://api.github.com/repos/red/red
closed
make any-list!/any-path! is defined for more values than is allowed by the table
status.built status.tested type.bug
Following the order of the table, the following should error out, but they don't: at least all from `time!` to `bitset!`, plus `any-word!` and `any-string!`
1.0
make any-list!/any-path! is defined for more values than is allowed by the table - Following the order of the table, the following should error out, but they don't: at least all from `time!` to `bitset!`, plus `any-word!` and `any-string!`
test
make any list any path is defined for more values than is allowed by the table following the order of the table the following should error out but they don t at least all from time to bitset plus any word and any string
1
107,413
9,212,036,879
IssuesEvent
2019-03-09 20:30:02
Camelcade/Perl5-IDEA
https://api.github.com/repos/Camelcade/Perl5-IDEA
closed
Implement testing support
Feature Help wanted Testing
Should "go to test" work? It would be nice to generate tests or run tests for a module.
1.0
Implement testing support - Should "go to test" work? It would be nice to generate tests or run tests for a module.
test
implement testing support should go to test work would be nice to generate tests or run tests for a module
1
144,645
11,625,579,702
IssuesEvent
2020-02-27 12:58:07
MachoThemes/modula-lite
https://api.github.com/repos/MachoThemes/modula-lite
closed
Setting title/font size to 0 doesn't default to theme as indicated
enhancement need testing
The tooltip on "_Caption Font Size_" and "_Title Font Size_" (in the "Captions" tab) says: > The title [ or caption ] font size in pixels (set to 0 to use the theme defaults). However, setting the font size to 0 actually sets it to 0px, not the theme default.
1.0
Setting title/font size to 0 doesn't default to theme as indicated - The tooltip on "_Caption Font Size_" and "_Title Font Size_" (in the "Captions" tab) says: > The title [ or caption ] font size in pixels (set to 0 to use the theme defaults). However, setting the font size to 0 actually sets it to 0px, not the theme default.
test
setting title font size to doesn t default to theme as indicated the tooltip on caption font size and title font size in the captions tab says the title font size in pixels set to to use the theme defaults however setting the font size to actually sets it to not the theme default
1
217,193
16,848,834,443
IssuesEvent
2021-06-20 04:12:09
hakehuang/infoflow
https://api.github.com/repos/hakehuang/infoflow
opened
tests-ci :kernel.memory_protection.userspace.write_kerntext : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
area: Tests
**Describe the bug** kernel.memory_protection.userspace.write_kerntext test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28 see logs for details **To Reproduce** 1. ``` scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection ``` 2. See error **Expected behavior** test pass **Impact** **Logs and console output** ``` *** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac *** Running test suite userspace =================================================================== START - test_is_usermode PASS - test_is_usermode in 0.1 seconds =================================================================== START - test_write_control PASS - test_write_control in 0.1 seconds =================================================================== START - test_disable_mmu_mpu ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ``` **Environment (please complete the following information):** - OS: (e.g. Linux ) - Toolchain (e.g Zephyr SDK) - Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
1.0
tests-ci :kernel.memory_protection.userspace.write_kerntext : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout - **Describe the bug** kernel.memory_protection.userspace.write_kerntext test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28 see logs for details **To Reproduce** 1. ``` scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection ``` 2. See error **Expected behavior** test pass **Impact** **Logs and console output** ``` *** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac *** Running test suite userspace =================================================================== START - test_is_usermode PASS - test_is_usermode in 0.1 seconds =================================================================== START - test_write_control PASS - test_write_control in 0.1 seconds =================================================================== START - test_disable_mmu_mpu ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993 ESF could not be retrieved successfully. Shall never occur. ``` **Environment (please complete the following information):** - OS: (e.g. Linux ) - Toolchain (e.g Zephyr SDK) - Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
test
tests ci kernel memory protection userspace write kerntext zephyr test timeout describe the bug kernel memory protection userspace write kerntext test is timeout on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p testcase root tests sub test kernel memory protection see error expected behavior test pass impact logs and console output booting zephyr os build zephyr running test suite userspace start test is usermode pass test is usermode in seconds start test write control pass test write control in seconds start test disable mmu mpu assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr
1
324,433
9,895,536,122
IssuesEvent
2019-06-26 08:04:25
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
ifp.gujarat.gov.in - see bug description
browser-firefox engine-gecko priority-normal
<!-- @browser: Firefox 68.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:68.0) Gecko/20100101 Firefox/68.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021 **Browser / Version**: Firefox 68.0 **Operating System**: Windows 7 **Tested Another Browser**: Unknown **Problem type**: Something else **Description**: ATTACHMENT IS NOT DONE **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/6/8e3a39f9-b91e-4862-a9fd-0555b5ef3692-thumb.jpeg)](https://webcompat.com/uploads/2019/6/8e3a39f9-b91e-4862-a9fd-0555b5ef3692.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190624133534</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Error: "TypeError: document.getElementById(...) is null" {file: "https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=" line: 212}]\nupdateButtonStatus@https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=:212:25\nexpandcontent@https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=:24:1\ninitializetabcontent@https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=:158:1\n@https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021:2499:2\n', u'[JavaScript Error: "XML Parsing Error: syntax error\nLocation: https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021\nLine Number 1, Column 1:" {file: "https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021" line: 1 column: 1 source: "[object XMLDocument]"}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
ifp.gujarat.gov.in - see bug description - <!-- @browser: Firefox 68.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:68.0) Gecko/20100101 Firefox/68.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021 **Browser / Version**: Firefox 68.0 **Operating System**: Windows 7 **Tested Another Browser**: Unknown **Problem type**: Something else **Description**: ATTACHMENT IS NOT DONE **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/6/8e3a39f9-b91e-4862-a9fd-0555b5ef3692-thumb.jpeg)](https://webcompat.com/uploads/2019/6/8e3a39f9-b91e-4862-a9fd-0555b5ef3692.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190624133534</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Error: "TypeError: document.getElementById(...) is null" {file: "https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=" line: 212}]\nupdateButtonStatus@https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=:212:25\nexpandcontent@https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=:24:1\ninitializetabcontent@https://ifp.gujarat.gov.in/DIGIGOV/script/common/tabcontent.js?vrsn=:158:1\n@https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021:2499:2\n', u'[JavaScript Error: "XML Parsing Error: syntax error\nLocation: https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021\nLine Number 1, Column 1:" {file: "https://ifp.gujarat.gov.in/DIGIGOV/digigov.htm?actionFlag=loadGrievanceSubForm&elementId=13000021" line: 1 column: 1 source: "[object XMLDocument]"}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_test
ifp gujarat gov in see bug description url browser version firefox operating system windows tested another browser unknown problem type something else description attachment is not done steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages nupdatebuttonstatus u from with ❤️
0
443,717
30,927,390,402
IssuesEvent
2023-08-06 16:53:48
wymcg/matricks
https://api.github.com/repos/wymcg/matricks
opened
Add cross-compilation instructions to the README
documentation
Compiling on Raspberry Pi boards takes a while, and may be impossible depending on the memory available to the board (i.e. Raspberry Pi 3 and earlier are known to run out of memory while compiling, see #14). Although there are workarounds for these memory issues, it would be much easier if we could just tell users how to cross compile Matricks instead.
1.0
Add cross-compilation instructions to the README - Compiling on Raspberry Pi boards takes a while, and may be impossible depending on the memory available to the board (i.e. Raspberry Pi 3 and earlier are known to run out of memory while compiling, see #14). Although there are workarounds for these memory issues, it would be much easier if we could just tell users how to cross compile Matricks instead.
non_test
add cross compilation instructions to the readme compiling on raspberry pi boards takes a while and may be impossible depending on the memory available to the board i e raspberry pi and earlier are known to run out of memory while compiling see although there are workarounds for these memory issues it would be much easier if we could just tell users how to cross compile matricks instead
0
256,954
22,113,897,631
IssuesEvent
2022-06-02 00:34:35
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: awsdms failed
C-test-failure O-robot O-roachtest branch-master T-sql-experience
roachtest.awsdms [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyAwsBazel/5312208?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyAwsBazel/5312208?buildTab=artifacts#/awsdms) on master @ [e6815947a050e32f21e983aa30dc74ab2a247af3](https://github.com/cockroachdb/cockroach/commits/e6815947a050e32f21e983aa30dc74ab2a247af3): ``` The test failed on branch=master, cloud=aws: test artifacts and logs in: /artifacts/awsdms/run_1 awsdms.go:136,test_runner.go:884: failed to set up AWS DMS: InvalidResourceStateFault: Test connection for replication instance roachtest-awsdms-replication-instance and endpoint roachtest-awsdms-rds-endpoint should be successful for starting the replication task (1) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupAWSDMS | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/awsdms.go:269 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runAWSDMS | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/awsdms.go:134 | main.(*testRunner).runTest.func2 | main/pkg/cmd/roachtest/test_runner.go:884 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1581 Wraps: (2) failed to set up AWS DMS Wraps: (3) Wraps: (4) Wraps: (5) InvalidResourceStateFault: Test connection for replication instance roachtest-awsdms-replication-instance and endpoint roachtest-awsdms-rds-endpoint should be successful for starting the replication task Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *smithy.OperationError (4) *http.ResponseError (5) *types.InvalidResourceStateFault ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*awsdms.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-16232
2.0
roachtest: awsdms failed - roachtest.awsdms [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyAwsBazel/5312208?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyAwsBazel/5312208?buildTab=artifacts#/awsdms) on master @ [e6815947a050e32f21e983aa30dc74ab2a247af3](https://github.com/cockroachdb/cockroach/commits/e6815947a050e32f21e983aa30dc74ab2a247af3): ``` The test failed on branch=master, cloud=aws: test artifacts and logs in: /artifacts/awsdms/run_1 awsdms.go:136,test_runner.go:884: failed to set up AWS DMS: InvalidResourceStateFault: Test connection for replication instance roachtest-awsdms-replication-instance and endpoint roachtest-awsdms-rds-endpoint should be successful for starting the replication task (1) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupAWSDMS | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/awsdms.go:269 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runAWSDMS | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/awsdms.go:134 | main.(*testRunner).runTest.func2 | main/pkg/cmd/roachtest/test_runner.go:884 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1581 Wraps: (2) failed to set up AWS DMS Wraps: (3) Wraps: (4) Wraps: (5) InvalidResourceStateFault: Test connection for replication instance roachtest-awsdms-replication-instance and endpoint roachtest-awsdms-rds-endpoint should be successful for starting the replication task Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *smithy.OperationError (4) *http.ResponseError (5) *types.InvalidResourceStateFault ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*awsdms.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-16232
test
roachtest awsdms failed roachtest awsdms with on master the test failed on branch master cloud aws test artifacts and logs in artifacts awsdms run awsdms go test runner go failed to set up aws dms invalidresourcestatefault test connection for replication instance roachtest awsdms replication instance and endpoint roachtest awsdms rds endpoint should be successful for starting the replication task attached stack trace stack trace github com cockroachdb cockroach pkg cmd roachtest tests setupawsdms github com cockroachdb cockroach pkg cmd roachtest tests awsdms go github com cockroachdb cockroach pkg cmd roachtest tests runawsdms github com cockroachdb cockroach pkg cmd roachtest tests awsdms go main testrunner runtest main pkg cmd roachtest test runner go runtime goexit goroot src runtime asm s wraps failed to set up aws dms wraps wraps wraps invalidresourcestatefault test connection for replication instance roachtest awsdms replication instance and endpoint roachtest awsdms rds endpoint should be successful for starting the replication task error types withstack withstack errutil withprefix smithy operationerror http responseerror types invalidresourcestatefault help see see cc cockroachdb sql experience jira issue crdb
1