Dataset schema (column, dtype, value statistics):

Column        Type     Statistics
Unnamed: 0    int64    min 0, max 832k
id            float64  min 2.49B, max 32.1B
type          string   1 class
created_at    string   lengths 19-19
repo          string   lengths 7-112
repo_url      string   lengths 36-141
action        string   3 classes
title         string   lengths 1-744
labels        string   lengths 4-574
body          string   lengths 9-211k
index         string   10 classes
text_combine  string   lengths 96-211k
label         string   2 classes
text          string   lengths 96-188k
binary_label  int64    min 0, max 1
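The flattened feature list above can be read as a per-row record schema. A minimal sketch of it as a Python dataclass (an illustration only; the field names come from the listing, the comments summarize the statistics, and `row_index` stands in for the unnamed first column):

```python
from dataclasses import dataclass, fields

@dataclass
class IssueRecord:
    # Raw GitHub event fields
    row_index: int        # "Unnamed: 0" in the dump; 0 to 832k
    id: float             # event id, 2.49B to 32.1B
    type: str             # single class (all rows are IssuesEvent)
    created_at: str       # fixed-width timestamp, length 19
    repo: str             # owner/name
    repo_url: str         # api.github.com repo URL
    action: str           # one of 3 classes (e.g. opened, closed)
    title: str
    labels: str           # space-joined label names
    body: str             # issue body, up to 211k chars
    index: str            # 10 classes
    # Derived fields
    text_combine: str     # appears to be title + " - " + body
    label: str            # "process" or "non_process"
    text: str             # normalized version of text_combine
    binary_label: int     # 0 or 1
```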
Unnamed: 0: 5,301
id: 8,121,837,688
type: IssuesEvent
created_at: 2018-08-16 09:30:06
repo: openvstorage/framework
repo_url: https://api.github.com/repos/openvstorage/framework
action: closed
title: why adding vpool can choose multiple IPs
labels: process_wontfix
body: When I add or extend a vpool , I find both public ip and storage ip show up in the list widget `storage ip`. This means I can choose either ip. However, I selected only one specific ip for storage during asd manager setup. What's the sense for vpool to use other ips?
index: 1.0
text_combine: why adding vpool can choose multiple IPs - When I add or extend a vpool , I find both public ip and storage ip show up in the list widget `storage ip`. This means I can choose either ip. However, I selected only one specific ip for storage during asd manager setup. What's the sense for vpool to use other ips?
label: process
text: why adding vpool can choose multiple ips when i add or extend a vpool i find both public ip and storage ip show up in the list widget storage ip this means i can choose either ip however i selected only one specific ip for storage during asd manager setup what s the sense for vpool to use other ips
binary_label: 1
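The derived columns in the record above follow a visible pattern: `text_combine` joins title and body with " - ", `binary_label` is 1 when `label` is "process", and `text` looks like a lowercased, de-punctuated version of `text_combine` with URLs and digit-bearing tokens dropped. A plausible reconstruction of that pipeline (the actual preprocessing code is not given in this dump; this is inferred from the samples):

```python
import re

def combine(title: str, body: str) -> str:
    # text_combine appears to be "title - body"
    return f"{title} - {body}"

def normalize(text: str) -> str:
    """Approximate the dataset's `text` column (inferred, not authoritative)."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"[^a-z0-9]+", " ", text)     # punctuation -> space
    # drop tokens that contain digits (e.g. "13th", "5.0", ticket numbers)
    tokens = [t for t in text.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)

def to_binary(label: str) -> int:
    # binary_label appears to be 1 for "process", 0 for "non_process"
    return int(label == "process")
```

Applied to the first record, `normalize(combine(title, body))` reproduces its `text` value exactly.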
Unnamed: 0: 49,876
id: 13,187,284,038
type: IssuesEvent
created_at: 2020-08-13 02:55:35
repo: icecube-trac/tix3
repo_url: https://api.github.com/repos/icecube-trac/tix3
action: opened
title: [BadDomList] uses explicit string numbers (Trac #2209)
labels: Incomplete Migration Migrated from Trac combo core defect
body: <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2209">https://code.icecube.wisc.edu/ticket/2209</a>, reported by kjmeagher and owned by andrii.terliuk</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "description": "r165011/IceCube introduced an option which excludes DOMs with string number >86. This is highly discouraged, please use I3OMGeo.omtype (or similar) to identify new unneeded DOMs", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067323910946", "component": "combo core", "summary": "[BadDomList] uses explicit string numbers", "priority": "normal", "keywords": "", "time": "2018-11-27T19:19:56", "milestone": "", "owner": "andrii.terliuk", "type": "defect" } ``` </p> </details>
index: 1.0
text_combine: [BadDomList] uses explicit string numbers (Trac #2209) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2209">https://code.icecube.wisc.edu/ticket/2209</a>, reported by kjmeagher and owned by andrii.terliuk</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "description": "r165011/IceCube introduced an option which excludes DOMs with string number >86. This is highly discouraged, please use I3OMGeo.omtype (or similar) to identify new unneeded DOMs", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067323910946", "component": "combo core", "summary": "[BadDomList] uses explicit string numbers", "priority": "normal", "keywords": "", "time": "2018-11-27T19:19:56", "milestone": "", "owner": "andrii.terliuk", "type": "defect" } ``` </p> </details>
label: non_process
text: uses explicit string numbers trac migrated from json status closed changetime description icecube introduced an option which excludes doms with string number this is highly discouraged please use omtype or similar to identify new unneeded doms reporter kjmeagher cc resolution fixed ts component combo core summary uses explicit string numbers priority normal keywords time milestone owner andrii terliuk type defect
binary_label: 0
Unnamed: 0: 8,027
id: 11,209,171,600
type: IssuesEvent
created_at: 2020-01-06 09:51:39
repo: inasafe/inasafe-realtime
repo_url: https://api.github.com/repos/inasafe/inasafe-realtime
action: closed
title: We need a way to test that Realtime is running
labels: enhancement feature request orchestration ready realtime processor
body: Problem It is possible for InaSAFE Realtime to stop working and for no one to notice. Today - the last Realtime event is dated 13th July, yet there have been 17 earthquakes in Indonesia since that date, 5 of which are greater than 5.0 We seem to be unaware of system failures unless a person looks at it. InAWARE provides links to files that do not exist. proposed solution We need to build an automatic system to check that EQ realtime is running and generating reports regularly. This system needs to run automatically and send a message to an administrator or trigger a server restart if there is an error. I propose a daily / 12 hourly test where a dummy shakemap is sent to realtime. This dummy data is tagged in a way so that the resulting impact assessment is filtered off and stored in a test location (not delivered to the real output). If a new file is not delivered to the test location at the expected time - a message is sent to an administrator to check the system / the server is poked to make it run ... See original ticket at https://github.com/inasafe/inasafe/issues/2160 for further discussion.
index: 1.0
text_combine: We need a way to test that Realtime is running - Problem It is possible for InaSAFE Realtime to stop working and for no one to notice. Today - the last Realtime event is dated 13th July, yet there have been 17 earthquakes in Indonesia since that date, 5 of which are greater than 5.0 We seem to be unaware of system failures unless a person looks at it. InAWARE provides links to files that do not exist. proposed solution We need to build an automatic system to check that EQ realtime is running and generating reports regularly. This system needs to run automatically and send a message to an administrator or trigger a server restart if there is an error. I propose a daily / 12 hourly test where a dummy shakemap is sent to realtime. This dummy data is tagged in a way so that the resulting impact assessment is filtered off and stored in a test location (not delivered to the real output). If a new file is not delivered to the test location at the expected time - a message is sent to an administrator to check the system / the server is poked to make it run ... See original ticket at https://github.com/inasafe/inasafe/issues/2160 for further discussion.
label: process
text: we need a way to test that realtime is running problem it is possible for inasafe realtime to stop working and for no one to notice today the last realtime event is dated july yet there have been earthquakes in indonesia since that date of which are greater than we seem to be unaware of system failures unless a person looks at it inaware provides links to files that do not exist proposed solution we need to build an automatic system to check that eq realtime is running and generating reports regularly this system needs to run automatically and send a message to an administrator or trigger a server restart if there is an error i propose a daily hourly test where a dummy shakemap is sent to realtime this dummy data is tagged in a way so that the resulting impact assessment is filtered off and stored in a test location not delivered to the real output if a new file is not delivered to the test location at the expected time a message is sent to an administrator to check the system the server is poked to make it run see original ticket at for further discussion
binary_label: 1
Unnamed: 0: 11,799
id: 14,624,203,122
type: IssuesEvent
created_at: 2020-12-23 05:43:26
repo: MicrosoftDocs/azure-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
action: closed
title: Cannot convert the value of type "Microsoft.Azure.Commands.Profile.Models.PSAzureSubscription" to type "Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer".
labels: automation/svc cxp process-automation/subsvc product-issue triaged
body: $AzureContext = Get-AzSubscription -SubscriptionId $ServicePrincipalConnection.SubscriptionID returns error when using in Start-AzAutomationRunbook Start-AzAutomationRunbook : Cannot bind parameter 'DefaultProfile'. Cannot convert the "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" value of type "Microsoft.Azure.Commands.Profile.Models.PSAzureSubscription" to type "Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer". At line:19 char:16 + -AzContext $AzureContext + ~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (:) [Start-AzAutomationRunbook], ParameterBindingException + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.Azure.Commands.Automation.Cmdlet.StartAzureAutomationRunbook --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 23c183d0-5012-e2e1-5562-69135b3f6509 * Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814 * Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks) * Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-child-runbooks.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
index: 1.0
text_combine: Cannot convert the value of type "Microsoft.Azure.Commands.Profile.Models.PSAzureSubscription" to type "Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer". - $AzureContext = Get-AzSubscription -SubscriptionId $ServicePrincipalConnection.SubscriptionID returns error when using in Start-AzAutomationRunbook Start-AzAutomationRunbook : Cannot bind parameter 'DefaultProfile'. Cannot convert the "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" value of type "Microsoft.Azure.Commands.Profile.Models.PSAzureSubscription" to type "Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer". At line:19 char:16 + -AzContext $AzureContext + ~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (:) [Start-AzAutomationRunbook], ParameterBindingException + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.Azure.Commands.Automation.Cmdlet.StartAzureAutomationRunbook --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 23c183d0-5012-e2e1-5562-69135b3f6509 * Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814 * Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks) * Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-child-runbooks.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
label: process
text: cannot convert the value of type microsoft azure commands profile models psazuresubscription to type microsoft azure commands common authentication abstractions core iazurecontextcontainer azurecontext get azsubscription subscriptionid serviceprincipalconnection subscriptionid returns error when using in start azautomationrunbook start azautomationrunbook cannot bind parameter defaultprofile cannot convert the xxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx value of type microsoft azure commands profile models psazuresubscription to type microsoft azure commands common authentication abstractions core iazurecontextcontainer at line char azcontext azurecontext categoryinfo invalidargument parameterbindingexception fullyqualifiederrorid cannotconvertargumentnomessage microsoft azure commands automation cmdlet startazureautomationrunbook document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
binary_label: 1
Unnamed: 0: 1,840
id: 4,646,971,567
type: IssuesEvent
created_at: 2016-10-01 06:54:30
repo: nodejs/node
repo_url: https://api.github.com/repos/nodejs/node
action: opened
title: Investigate flaky parallel/test-tick-processor-unknown
labels: process test tools
body:
* **Version**: master * **Platform**: smartos, windows * **Subsystem**: process I've recently started seeing `test-tick-processor-unknown` failures on `smartos14-32` and various Windows configurations in CI. Most are merely timeouts, but I did see this instance on smartos that resulted in a [different result](https://ci.nodejs.org/job/node-test-commit-smartos/4551/nodes=smartos14-32/console): ``` not ok 1174 parallel/test-tick-processor-unknown # TIMEOUT # FATAL ERROR: invalid array length Allocation failed - JavaScript heap out of memory # 1: node::Abort() [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 2: node::OnFatalError(char const*, char const*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 3: v8::Utils::ReportOOMFailure(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 5: v8::internal::Heap::AllocateUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 6: v8::internal::Factory::NewUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 7: v8::internal::(anonymous namespace)::ElementsAccessorBase<v8::internal::(anonymous namespace)::FastPackedSmiElementsAccessor, v8::internal::(anonymous namespace)::ElementsKindTraits<(v8::internal::ElementsKind)0> >::GrowCapacityAndConvertImpl(v8::internal::Handle<v8::internal::JSObject>, unsigned int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 8: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 9: 8f60a23e # 10: 
a8c55c7b # 11: a8c563ae # 12: a8c3c37c # 13: a8c3c084 # 14: a8c3bf5f # 15: a8c4920b # 16: a8c36f64 # 17: a8c36a4e # 18: a8c3686e # 19: a8c18962 # 20: a8c18a5f # 21: 8f60b6b6 # 22: a8c149cf # 23: 8f60b6b6 # 24: 8f66537d # 25: 8f664baf # 26: 8f663e0e # 27: 8f66198a # 28: 8f63e83e # 29: 8f627878 # 30: v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 31: v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 32: v8::Function::Call(v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 33: node::LoadEnvironment(node::Environment*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 34: node::StartNodeInstance(void*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 35: node::Start(int, char**) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 36: main [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 37: _start [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # FATAL ERROR: invalid array length Allocation failed - JavaScript heap out of memory # 1: node::Abort() [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 2: node::OnFatalError(char const*, char const*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 3: v8::Utils::ReportOOMFailure(char const*, bool) 
[/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 5: v8::internal::Heap::AllocateUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 6: v8::internal::Factory::NewUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 7: v8::internal::(anonymous namespace)::ElementsAccessorBase<v8::internal::(anonymous namespace)::FastPackedSmiElementsAccessor, v8::internal::(anonymous namespace)::ElementsKindTraits<(v8::internal::ElementsKind)0> >::GrowCapacityAndConvertImpl(v8::internal::Handle<v8::internal::JSObject>, unsigned int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 8: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 9: a9c0a23e # 10: bb85613b # 11: bb85686e # 12: bb83c37c # 13: bb83c084 # 14: bb83bf5f # 15: bb8457ab # 16: bb836f64 # 17: bb836a4e # 18: bb83686e # 19: bb818962 # 20: bb818a5f # 21: a9c0b6b6 # 22: bb8149cf # 23: a9c0b6b6 # 24: a9c6537d # 25: a9c64baf # 26: a9c63e0e # 27: a9c6198a # 28: a9c3e83e # 29: a9c27878 # 30: v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 31: v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 32: v8::Function::Call(v8::Local<v8::Value>, int, v8::Local<v8::Value>*) 
[/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 33: node::LoadEnvironment(node::Environment*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 34: node::StartNodeInstance(void*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 35: node::Start(int, char**) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 36: main [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 37: _start [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # FATAL ERROR: invalid array length Allocation failed - JavaScript heap out of memory # 1: node::Abort() [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 2: node::OnFatalError(char const*, char const*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 3: v8::Utils::ReportOOMFailure(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 5: v8::internal::Heap::AllocateUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 6: v8::internal::Factory::NewUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 7: v8::internal::(anonymous namespace)::ElementsAccessorBase<v8::internal::(anonymous namespace)::FastPackedSmiElementsAccessor, v8::internal::(anonymous namespace)::ElementsKindTraits<(v8::internal::ElementsKind)0> >::GrowCapacityAndConvertImpl(v8::internal::Handle<v8::internal::JSObject>, unsigned int) 
[/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 8: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 9: a2d0a23e # 10: 94c57adb # 11: 94c5820e # 12: 94c3c37c # 13: 94c3c084 # 14: 94c3bf5f # 15: 94c4832b # 16: 94c36f64 # 17: 94c36a4e # 18: 94c3686e # 19: 94c18962 # 20: 94c18a5f # 21: a2d0b6b6 # 22: 94c149cf # 23: a2d0b6b6 # 24: a2d6537d # 25: a2d64baf # 26: a2d63e0e # 27: a2d6198a # 28: a2d3e83e # 29: a2d27878 # 30: v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 31: v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 32: v8::Function::Call(v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 33: node::LoadEnvironment(node::Environment*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 34: node::StartNodeInstance(void*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 35: node::Start(int, char**) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 36: main [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 37: _start [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # FATAL ERROR: invalid array length Allocation failed - JavaScript heap out of memory # 1: node::Abort() 
[/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 2: node::OnFatalError(char const*, char const*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 3: v8::Utils::ReportOOMFailure(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 5: v8::internal::Heap::AllocateUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 6: v8::internal::Factory::NewUninitializedFixedArray(int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 7: v8::internal::(anonymous namespace)::ElementsAccessorBase<v8::internal::(anonymous namespace)::FastPackedSmiElementsAccessor, v8::internal::(anonymous namespace)::ElementsKindTraits<(v8::internal::ElementsKind)0> >::GrowCapacityAndConvertImpl(v8::internal::Handle<v8::internal::JSObject>, unsigned int) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 8: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 9: 8260a23e # 10: 951560bb # 11: 951567f9 # 12: 9513c37c # 13: 9513c084 # 14: 9513bf5f # 15: 9514a5cb # 16: 95136f64 # 17: 95136a4e # 18: 9513686e # 19: 95118962 # 20: 95118a5f # 21: 8260b6b6 # 22: 951149cf # 23: 8260b6b6 # 24: 8266537d # 25: 82664baf # 26: 82663e0e # 27: 8266198a # 28: 8263e83e # 29: 82627878 # 30: v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) 
[/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 31: v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 32: v8::Function::Call(v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 33: node::LoadEnvironment(node::Environment*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 34: node::StartNodeInstance(void*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 35: node::Start(int, char**) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 36: main [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 37: _start [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] --- duration_ms: 60.105 ```
index: 1.0
[/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 31: v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 32: v8::Function::Call(v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 33: node::LoadEnvironment(node::Environment*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 34: node::StartNodeInstance(void*) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 35: node::Start(int, char**) [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 36: main [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] # 37: _start [/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos14-32/out/Release/node] --- duration_ms: 60.105 ```
process
investigate flaky parallel test tick processor unknown version master platform smartos windows subsystem process i ve recently started seeing test tick processor unknown failures on and various windows configurations in ci most are merely timeouts but i did see this instance on smartos that resulted in a not ok parallel test tick processor unknown timeout fatal error invalid array length allocation failed javascript heap out of memory node abort node onfatalerror char const char const utils reportoomfailure char const bool internal fatalprocessoutofmemory char const bool internal heap allocateuninitializedfixedarray int internal factory newuninitializedfixedarray int internal anonymous namespace elementsaccessorbase growcapacityandconvertimpl internal handle unsigned int internal runtime growarrayelements int internal object internal isolate internal execution call internal isolate internal handle internal handle int internal handle function call local local int local function call local int local node loadenvironment node environment node startnodeinstance void node start int char main start fatal error invalid array length allocation failed javascript heap out of memory node abort node onfatalerror char const char const utils reportoomfailure char const bool internal fatalprocessoutofmemory char const bool internal heap allocateuninitializedfixedarray int internal factory newuninitializedfixedarray int internal anonymous namespace elementsaccessorbase growcapacityandconvertimpl internal handle unsigned int internal runtime growarrayelements int internal object internal isolate internal execution call internal isolate internal handle internal handle int internal handle function call local local int local function call local int local node loadenvironment node environment node startnodeinstance void node start int char main start fatal error invalid array length allocation failed javascript heap out of memory node abort node onfatalerror char const char const utils 
reportoomfailure char const bool internal fatalprocessoutofmemory char const bool internal heap allocateuninitializedfixedarray int internal factory newuninitializedfixedarray int internal anonymous namespace elementsaccessorbase growcapacityandconvertimpl internal handle unsigned int internal runtime growarrayelements int internal object internal isolate internal execution call internal isolate internal handle internal handle int internal handle function call local local int local function call local int local node loadenvironment node environment node startnodeinstance void node start int char main start fatal error invalid array length allocation failed javascript heap out of memory node abort node onfatalerror char const char const utils reportoomfailure char const bool internal fatalprocessoutofmemory char const bool internal heap allocateuninitializedfixedarray int internal factory newuninitializedfixedarray int internal anonymous namespace elementsaccessorbase growcapacityandconvertimpl internal handle unsigned int internal runtime growarrayelements int internal object internal isolate internal execution call internal isolate internal handle internal handle int internal handle function call local local int local function call local int local node loadenvironment node environment node startnodeinstance void node start int char main start duration ms
1
18,899
24,837,755,027
IssuesEvent
2022-10-26 10:11:20
hoprnet/hoprnet
https://api.github.com/repos/hoprnet/hoprnet
closed
Prevent PRs from merging if target branch is broken
devops processes
We would like to make it harder to merge PRs to a branch, particularly master and release branches, when these are broken in the sense of previous CI workflow having failed. It should be somehow shown in the PR merge UI that a merge is not advised. Obviously we still need to be able to merge PRs in order to fix the broken CI workflows. Possibly Github provides functionality which can be used for that out of the box.
1.0
Prevent PRs from merging if target branch is broken - We would like to make it harder to merge PRs to a branch, particularly master and release branches, when these are broken in the sense of previous CI workflow having failed. It should be somehow shown in the PR merge UI that a merge is not advised. Obviously we still need to be able to merge PRs in order to fix the broken CI workflows. Possibly Github provides functionality which can be used for that out of the box.
process
prevent prs from merging if target branch is broken we would like to make it harder to merge prs to a branch particularly master and release branches when these are broken in the sense of previous ci workflow having failed it should be somehow shown in the pr merge ui that a merge is not advised obviously we still need to be able to merge prs in order to fix the broken ci workflows possibly github provides functionality which can be used for that out of the box
1
17,498
23,305,508,106
IssuesEvent
2022-08-07 23:50:05
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Mirror, Father Mirror
suggested title in process
Please add as much of the following info as you can: Title: The Flower that Drank the Moon Type (film/tv show): Film Film or show in which it appears: Ghost World (2001) Is the parent film/show streaming anywhere? Yes (Pluto, Hulu, Amazon Prime, Vudu, among others). About when in the parent film/show does it appear? 19:35 Actual footage of the film/show can be seen (yes/no)? Yes
1.0
Add Mirror, Father Mirror - Please add as much of the following info as you can: Title: The Flower that Drank the Moon Type (film/tv show): Film Film or show in which it appears: Ghost World (2001) Is the parent film/show streaming anywhere? Yes (Pluto, Hulu, Amazon Prime, Vudu, among others). About when in the parent film/show does it appear? 19:35 Actual footage of the film/show can be seen (yes/no)? Yes
process
add mirror father mirror please add as much of the following info as you can title the flower that drank the moon type film tv show film film or show in which it appears ghost world is the parent film show streaming anywhere yes pluto hulu amazon prime vudu among others about when in the parent film show does it appear actual footage of the film show can be seen yes no yes
1
21,529
3,901,454,457
IssuesEvent
2016-04-18 10:55:37
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
stress: failed test in cockroach/rpc/rpc.test: TestOffsetMeasurement
Robot test-failure
Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/0958303ee7be3dd3fe7281c7008f520c53c8ebbd Stress build found a failed test: ``` === RUN TestOffsetMeasurement SIGABRT: abort PC=0x4612c9 m=0 goroutine 0 [idle]: runtime.epollwait(0x7fff00000004, 0x7fff370baed8, 0xffffffff00000080, 0x1, 0x1ffffffff, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /usr/local/go/src/runtime/sys_linux_amd64.s:440 +0x19 runtime.netpoll(0x1160b01, 0x0) /usr/local/go/src/runtime/netpoll_epoll.go:67 +0x94 runtime.findrunnable(0xc820016a00, 0x0) /usr/local/go/src/runtime/proc.go:1955 +0x62c runtime.schedule() /usr/local/go/src/runtime/proc.go:2072 +0x24f runtime.goexit0(0xc820146600) /usr/local/go/src/runtime/proc.go:2207 +0x1f9 runtime.mcall(0x7fff370bb5f0) /usr/local/go/src/runtime/asm_amd64.s:233 +0x5b goroutine 1 [chan receive, 9 minutes]: testing.RunTests(0xcf75a8, 0x1141060, 0xb, 0xb, 0xb57301) /usr/local/go/src/testing/testing.go:583 +0x8d2 testing.(*M).Run(0xc820043f08, 0xb57320) /usr/local/go/src/testing/testing.go:515 +0x81 main.main() github.com/cockroachdb/cockroach/rpc/_test/_testmain.go:74 +0x117 goroutine 17 [syscall, 9 minutes, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 14 [semacquire, 9 minutes]: sync.runtime_Semacquire(0xc8201ef08c) /usr/local/go/src/runtime/sema.go:47 +0x26 sync.(*Mutex).Lock(0xc8201ef088) /usr/local/go/src/sync/mutex.go:83 +0x1c4 github.com/cockroachdb/cockroach/util/hlc.(*Clock).MaxOffset(0xc8201ef080, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:135 +0x3a github.com/cockroachdb/cockroach/rpc.TestOffsetMeasurement(0xc820392000) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:129 +0x719 testing.tRunner(0xc820392000, 0x11410d8) /usr/local/go/src/testing/testing.go:473 +0x98 created by testing.RunTests /usr/local/go/src/testing/testing.go:582 +0x892 goroutine 34 [chan receive]: github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x1160a60) 
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:994 +0x64 created by github.com/cockroachdb/cockroach/util/log.init.1 /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:595 +0x8a goroutine 29 [select, 9 minutes]: google.golang.org/grpc/transport.(*http2Client).controller(0xc820489d10) /go/src/google.golang.org/grpc/transport/http2_client.go:835 +0x5da created by google.golang.org/grpc/transport.newHTTP2Client /go/src/google.golang.org/grpc/transport/http2_client.go:194 +0x153b goroutine 30 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f88b80e2618, 0x72, 0xc82006a800) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc82025d870, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc82025d870, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc82025d810, 0xc82006a800, 0x400, 0x400, 0x0, 0x7f88b80a1028, 0xc82007c080) /usr/local/go/src/net/fd_unix.go:250 +0x23a net.(*conn).Read(0xc820154068, 0xc82006a800, 0x400, 0x400, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 crypto/tls.(*block).readFromUntil(0xc8201f8f60, 0x7f88b80e2a50, 0xc820154068, 0x5, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:460 +0xcc crypto/tls.(*Conn).readRecord(0xc820242000, 0xcf7e17, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1 crypto/tls.(*Conn).Read(0xc820242000, 0xc82048a000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:939 +0x167 bufio.(*Reader).fill(0xc82007afc0) /usr/local/go/src/bufio/bufio.go:97 +0x1e9 bufio.(*Reader).Read(0xc82007afc0, 0xc82044a938, 0x9, 0x9, 0xc82007b618, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:207 +0x260 io.ReadAtLeast(0x7f88b8030918, 0xc82007afc0, 0xc82044a938, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:297 +0xe6 io.ReadFull(0x7f88b8030918, 0xc82007afc0, 0xc82044a938, 0x9, 0x9, 0xc82016e5a8, 0x0, 0x0) /usr/local/go/src/io/io.go:315 +0x62 golang.org/x/net/http2.readFrameHeader(0xc82044a938, 0x9, 0x9, 
0x7f88b8030918, 0xc82007afc0, 0x20000000, 0xc800000000, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:236 +0xa5 golang.org/x/net/http2.(*Framer).ReadFrame(0xc82044a900, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:463 +0x106 google.golang.org/grpc/transport.(*framer).readFrame(0xc8201fefc0, 0x0, 0x0, 0x0, 0x0) /go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d google.golang.org/grpc/transport.(*http2Client).reader(0xc820489d10) /go/src/google.golang.org/grpc/transport/http2_client.go:757 +0x109 created by google.golang.org/grpc/transport.newHTTP2Client /go/src/google.golang.org/grpc/transport/http2_client.go:200 +0x159a goroutine 51 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/rpc.NewContext.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:89 +0x57 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc82000a4c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 52 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1() /go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f15a0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 53 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f88b80e26d8, 0x72, 0x0) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc8201353a0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc8201353a0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).accept(0xc820135340, 0x0, 
0x7f88b802c480, 0xc820230180) /usr/local/go/src/net/fd_unix.go:426 +0x27c net.(*TCPListener).AcceptTCP(0xc8201721e0, 0xc820046ea8, 0x0, 0x0) /usr/local/go/src/net/tcpsock_posix.go:254 +0x4d net.(*TCPListener).Accept(0xc8201721e0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/net/tcpsock_posix.go:264 +0x3d google.golang.org/grpc.(*Server).Serve(0xc8200aba00, 0x7f88b80e1730, 0xc8201721e0, 0x0, 0x0) /go/src/google.golang.org/grpc/server.go:252 +0x1b5 github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2() /go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f15c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 54 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/rpc.NewContext.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:89 +0x57 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f16c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 55 [select, 9 minutes]: google.golang.org/grpc.(*Conn).transportMonitor(0xc8201922a0) /go/src/google.golang.org/grpc/clientconn.go:510 +0x1d3 google.golang.org/grpc.NewConn.func1(0xc8201922a0) /go/src/google.golang.org/grpc/clientconn.go:321 +0x1b5 created by google.golang.org/grpc.NewConn /go/src/google.golang.org/grpc/clientconn.go:322 +0x4dd goroutine 56 [semacquire, 9 minutes]: sync.runtime_Semacquire(0xc8201ff7d4) /usr/local/go/src/runtime/sema.go:47 +0x26 sync.(*Mutex).Lock(0xc8201ff7d0) /usr/local/go/src/sync/mutex.go:83 +0x1c4 github.com/cockroachdb/cockroach/rpc.(*AdvancingClock).UnixNano(0xc8201ff7d0, 
0x1160680) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:202 +0x30 github.com/cockroachdb/cockroach/rpc.(*AdvancingClock).UnixNano-fm(0x2) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:105 +0x20 github.com/cockroachdb/cockroach/util/hlc.(*Clock).getPhysicalClock(0xc8201ef080, 0xcf7e28) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:160 +0x25 github.com/cockroachdb/cockroach/util/hlc.(*Clock).PhysicalNow(0xc8201ef080, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:200 +0x7d github.com/cockroachdb/cockroach/util/hlc.(*Clock).PhysicalTime(0xc8201ef080, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:206 +0x35 github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8201353b0, 0xc820260280, 0xc820642260, 0xf, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/rpc/context.go:175 +0x19c github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:157 +0x66 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f8e40) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 15 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f88b80e2558, 0x72, 0xc820145400) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc820240060, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc820240060, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc820240000, 0xc820145400, 0x400, 0x400, 0x0, 0x7f88b80a1028, 0xc82007c080) /usr/local/go/src/net/fd_unix.go:250 +0x23a net.(*conn).Read(0xc820088018, 0xc820145400, 0x400, 0x400, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 crypto/tls.(*block).readFromUntil(0xc820194540, 0x7f88b80e2a50, 0xc820088018, 0x5, 
0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:460 +0xcc crypto/tls.(*Conn).readRecord(0xc820462000, 0xcf7e17, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1 crypto/tls.(*Conn).Read(0xc820462000, 0xc820496000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:939 +0x167 bufio.(*Reader).fill(0xc820010360) /usr/local/go/src/bufio/bufio.go:97 +0x1e9 bufio.(*Reader).Read(0xc820010360, 0xc82042c038, 0x9, 0x9, 0x19, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:207 +0x260 io.ReadAtLeast(0x7f88b8030918, 0xc820010360, 0xc82042c038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:297 +0xe6 io.ReadFull(0x7f88b8030918, 0xc820010360, 0xc82042c038, 0x9, 0x9, 0xc8203b3a48, 0x0, 0x0) /usr/local/go/src/io/io.go:315 +0x62 golang.org/x/net/http2.readFrameHeader(0xc82042c038, 0x9, 0x9, 0x7f88b8030918, 0xc820010360, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:236 +0xa5 golang.org/x/net/http2.(*Framer).ReadFrame(0xc82042c000, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:463 +0x106 google.golang.org/grpc/transport.(*framer).readFrame(0xc8201fa330, 0x0, 0x0, 0x0, 0x0) /go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820236000, 0xc8201fa3f0) /go/src/google.golang.org/grpc/transport/http2_server.go:247 +0x646 google.golang.org/grpc.(*Server).serveStreams(0xc8200aba00, 0x7f88b8030968, 0xc820236000) /go/src/google.golang.org/grpc/server.go:325 +0x159 google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc8200aba00, 0x7f88b8030840, 0xc820462000, 0x7f88b80308a0, 0xc8202562c0) /go/src/google.golang.org/grpc/server.go:312 +0x49d google.golang.org/grpc.(*Server).handleRawConn(0xc8200aba00, 0x7f88b80e29f0, 0xc820088018) /go/src/google.golang.org/grpc/server.go:289 +0x4ee created by google.golang.org/grpc.(*Server).Serve /go/src/google.golang.org/grpc/server.go:261 +0x372 goroutine 16 [select, 9 minutes]: 
google.golang.org/grpc/transport.(*http2Server).controller(0xc820236000) /go/src/google.golang.org/grpc/transport/http2_server.go:620 +0x5da created by google.golang.org/grpc/transport.newHTTP2Server /go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x853 rax 0xfffffffffffffffc rbx 0xffffffff rcx 0x4612c9 rdx 0x80 rdi 0x4 rsi 0x7fff370baed8 rbp 0x11614a0 rsp 0x7fff370bae98 r8 0x11614a0 r9 0xc820020000 r10 0xffffffff r11 0x246 r12 0xc820236000 r13 0xa r14 0xbc91a0 r15 0x8 rip 0x4612c9 rflags 0x246 cs 0x33 fs 0x0 gs 0x0 ERROR: exit status 2 ``` Run Details: ``` 103 runs so far, 0 failures, over 5s 215 runs so far, 0 failures, over 10s 327 runs so far, 0 failures, over 15s 439 runs so far, 0 failures, over 20s 550 runs so far, 0 failures, over 25s 664 runs so far, 0 failures, over 30s 771 runs so far, 0 failures, over 35s 882 runs so far, 0 failures, over 40s 990 runs so far, 0 failures, over 45s 1100 runs so far, 0 failures, over 50s 1209 runs so far, 0 failures, over 55s 1322 runs so far, 0 failures, over 1m0s 1430 runs so far, 0 failures, over 1m5s 1536 runs so far, 0 failures, over 1m10s 1649 runs so far, 0 failures, over 1m15s 1760 runs so far, 0 failures, over 1m20s 1867 runs so far, 0 failures, over 1m25s 1975 runs so far, 0 failures, over 1m30s 2084 runs so far, 0 failures, over 1m35s 2186 runs so far, 0 failures, over 1m40s 2291 runs so far, 0 failures, over 1m45s 2398 runs so far, 0 failures, over 1m50s 2506 runs so far, 0 failures, over 1m55s 2613 runs so far, 0 failures, over 2m0s 2717 runs so far, 0 failures, over 2m5s 2822 runs so far, 0 failures, over 2m10s 2929 runs so far, 0 failures, over 2m15s 3031 runs so far, 0 failures, over 2m20s 3143 runs so far, 0 failures, over 2m25s 3245 runs so far, 0 failures, over 2m30s 3353 runs so far, 0 failures, over 2m35s 3460 runs so far, 0 failures, over 2m40s 3565 runs so far, 0 failures, over 2m45s 3670 runs so far, 0 failures, over 2m50s 3772 runs so far, 0 failures, over 2m55s 3876 runs so far, 0 
failures, over 3m0s 3981 runs so far, 0 failures, over 3m5s 4080 runs so far, 0 failures, over 3m10s 4182 runs so far, 0 failures, over 3m15s 4284 runs so far, 0 failures, over 3m20s 4385 runs so far, 0 failures, over 3m25s 4485 runs so far, 0 failures, over 3m30s 4588 runs so far, 0 failures, over 3m35s 4687 runs so far, 0 failures, over 3m40s 4785 runs so far, 0 failures, over 3m45s 4880 runs so far, 0 failures, over 3m50s 4982 runs so far, 0 failures, over 3m55s 5076 runs so far, 0 failures, over 4m0s 5177 runs so far, 0 failures, over 4m5s 5277 runs so far, 0 failures, over 4m10s 5372 runs so far, 0 failures, over 4m15s 5469 runs so far, 0 failures, over 4m20s 5566 runs so far, 0 failures, over 4m25s 5664 runs so far, 0 failures, over 4m30s 5765 runs so far, 0 failures, over 4m35s 5862 runs so far, 0 failures, over 4m40s 5961 runs so far, 0 failures, over 4m45s 6062 runs so far, 0 failures, over 4m50s 6165 runs so far, 0 failures, over 4m55s 6265 runs so far, 0 failures, over 5m0s 6361 runs so far, 0 failures, over 5m5s 6458 runs so far, 0 failures, over 5m10s 6557 runs so far, 0 failures, over 5m15s 6655 runs so far, 0 failures, over 5m20s 6756 runs so far, 0 failures, over 5m25s 6849 runs so far, 0 failures, over 5m30s 6948 runs so far, 0 failures, over 5m35s 7048 runs so far, 0 failures, over 5m40s 7146 runs so far, 0 failures, over 5m45s 7245 runs so far, 0 failures, over 5m50s 7338 runs so far, 0 failures, over 5m55s 7439 runs so far, 0 failures, over 6m0s 7536 runs so far, 0 failures, over 6m5s 7632 runs so far, 0 failures, over 6m10s 7732 runs so far, 0 failures, over 6m15s 7830 runs so far, 0 failures, over 6m20s 7928 runs so far, 0 failures, over 6m25s 8027 runs so far, 0 failures, over 6m30s 8129 runs so far, 0 failures, over 6m35s 8227 runs so far, 0 failures, over 6m40s 8326 runs so far, 0 failures, over 6m45s 8429 runs so far, 0 failures, over 6m50s 8524 runs so far, 0 failures, over 6m55s 8622 runs so far, 0 failures, over 7m0s 8719 runs so far, 0 
failures, over 7m5s 8813 runs so far, 0 failures, over 7m10s 8915 runs so far, 0 failures, over 7m15s 9011 runs so far, 0 failures, over 7m20s 9111 runs so far, 0 failures, over 7m25s 9207 runs so far, 0 failures, over 7m30s 9310 runs so far, 0 failures, over 7m35s 9412 runs so far, 0 failures, over 7m40s 9513 runs so far, 0 failures, over 7m45s 9611 runs so far, 0 failures, over 7m50s 9710 runs so far, 0 failures, over 7m55s 9808 runs so far, 0 failures, over 8m0s 9905 runs so far, 0 failures, over 8m5s 10004 runs so far, 0 failures, over 8m10s 10100 runs so far, 0 failures, over 8m15s 10198 runs so far, 0 failures, over 8m20s 10300 runs so far, 0 failures, over 8m25s 10397 runs so far, 0 failures, over 8m30s 10493 runs so far, 0 failures, over 8m35s 10592 runs so far, 0 failures, over 8m40s 10692 runs so far, 0 failures, over 8m45s 10784 runs so far, 0 failures, over 8m50s 10880 runs so far, 0 failures, over 8m55s 10974 runs so far, 0 failures, over 9m0s 11073 runs so far, 0 failures, over 9m5s 11164 runs so far, 0 failures, over 9m10s 11259 runs so far, 0 failures, over 9m15s 11352 runs so far, 0 failures, over 9m20s 11452 runs so far, 0 failures, over 9m25s 11550 runs so far, 0 failures, over 9m30s 11646 runs so far, 0 failures, over 9m35s 11739 runs so far, 0 failures, over 9m40s 11838 runs so far, 0 failures, over 9m45s 11936 runs so far, 0 failures, over 9m50s 12031 runs so far, 0 failures, over 9m55s 12126 runs so far, 0 failures, over 10m0s 12224 runs so far, 0 failures, over 10m5s 12317 runs so far, 0 failures, over 10m10s 12413 runs so far, 0 failures, over 10m15s 12510 runs so far, 0 failures, over 10m20s 12605 runs so far, 0 failures, over 10m25s 12695 runs so far, 0 failures, over 10m30s 12761 runs completed, 1 failures, over 10m33s FAIL ``` Please assign, take a look and update the issue accordingly.
1.0
stress: failed test in cockroach/rpc/rpc.test: TestOffsetMeasurement - Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/0958303ee7be3dd3fe7281c7008f520c53c8ebbd Stress build found a failed test: ``` === RUN TestOffsetMeasurement SIGABRT: abort PC=0x4612c9 m=0 goroutine 0 [idle]: runtime.epollwait(0x7fff00000004, 0x7fff370baed8, 0xffffffff00000080, 0x1, 0x1ffffffff, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /usr/local/go/src/runtime/sys_linux_amd64.s:440 +0x19 runtime.netpoll(0x1160b01, 0x0) /usr/local/go/src/runtime/netpoll_epoll.go:67 +0x94 runtime.findrunnable(0xc820016a00, 0x0) /usr/local/go/src/runtime/proc.go:1955 +0x62c runtime.schedule() /usr/local/go/src/runtime/proc.go:2072 +0x24f runtime.goexit0(0xc820146600) /usr/local/go/src/runtime/proc.go:2207 +0x1f9 runtime.mcall(0x7fff370bb5f0) /usr/local/go/src/runtime/asm_amd64.s:233 +0x5b goroutine 1 [chan receive, 9 minutes]: testing.RunTests(0xcf75a8, 0x1141060, 0xb, 0xb, 0xb57301) /usr/local/go/src/testing/testing.go:583 +0x8d2 testing.(*M).Run(0xc820043f08, 0xb57320) /usr/local/go/src/testing/testing.go:515 +0x81 main.main() github.com/cockroachdb/cockroach/rpc/_test/_testmain.go:74 +0x117 goroutine 17 [syscall, 9 minutes, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 14 [semacquire, 9 minutes]: sync.runtime_Semacquire(0xc8201ef08c) /usr/local/go/src/runtime/sema.go:47 +0x26 sync.(*Mutex).Lock(0xc8201ef088) /usr/local/go/src/sync/mutex.go:83 +0x1c4 github.com/cockroachdb/cockroach/util/hlc.(*Clock).MaxOffset(0xc8201ef080, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:135 +0x3a github.com/cockroachdb/cockroach/rpc.TestOffsetMeasurement(0xc820392000) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:129 +0x719 testing.tRunner(0xc820392000, 0x11410d8) /usr/local/go/src/testing/testing.go:473 +0x98 created by testing.RunTests /usr/local/go/src/testing/testing.go:582 +0x892 goroutine 34 [chan receive]: 
github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x1160a60) /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:994 +0x64 created by github.com/cockroachdb/cockroach/util/log.init.1 /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:595 +0x8a goroutine 29 [select, 9 minutes]: google.golang.org/grpc/transport.(*http2Client).controller(0xc820489d10) /go/src/google.golang.org/grpc/transport/http2_client.go:835 +0x5da created by google.golang.org/grpc/transport.newHTTP2Client /go/src/google.golang.org/grpc/transport/http2_client.go:194 +0x153b goroutine 30 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f88b80e2618, 0x72, 0xc82006a800) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc82025d870, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc82025d870, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc82025d810, 0xc82006a800, 0x400, 0x400, 0x0, 0x7f88b80a1028, 0xc82007c080) /usr/local/go/src/net/fd_unix.go:250 +0x23a net.(*conn).Read(0xc820154068, 0xc82006a800, 0x400, 0x400, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 crypto/tls.(*block).readFromUntil(0xc8201f8f60, 0x7f88b80e2a50, 0xc820154068, 0x5, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:460 +0xcc crypto/tls.(*Conn).readRecord(0xc820242000, 0xcf7e17, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1 crypto/tls.(*Conn).Read(0xc820242000, 0xc82048a000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:939 +0x167 bufio.(*Reader).fill(0xc82007afc0) /usr/local/go/src/bufio/bufio.go:97 +0x1e9 bufio.(*Reader).Read(0xc82007afc0, 0xc82044a938, 0x9, 0x9, 0xc82007b618, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:207 +0x260 io.ReadAtLeast(0x7f88b8030918, 0xc82007afc0, 0xc82044a938, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:297 +0xe6 io.ReadFull(0x7f88b8030918, 0xc82007afc0, 0xc82044a938, 0x9, 0x9, 0xc82016e5a8, 0x0, 0x0) 
/usr/local/go/src/io/io.go:315 +0x62 golang.org/x/net/http2.readFrameHeader(0xc82044a938, 0x9, 0x9, 0x7f88b8030918, 0xc82007afc0, 0x20000000, 0xc800000000, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:236 +0xa5 golang.org/x/net/http2.(*Framer).ReadFrame(0xc82044a900, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:463 +0x106 google.golang.org/grpc/transport.(*framer).readFrame(0xc8201fefc0, 0x0, 0x0, 0x0, 0x0) /go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d google.golang.org/grpc/transport.(*http2Client).reader(0xc820489d10) /go/src/google.golang.org/grpc/transport/http2_client.go:757 +0x109 created by google.golang.org/grpc/transport.newHTTP2Client /go/src/google.golang.org/grpc/transport/http2_client.go:200 +0x159a goroutine 51 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/rpc.NewContext.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:89 +0x57 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc82000a4c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 52 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1() /go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f15a0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 53 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f88b80e26d8, 0x72, 0x0) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc8201353a0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc8201353a0, 0x0, 
0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).accept(0xc820135340, 0x0, 0x7f88b802c480, 0xc820230180) /usr/local/go/src/net/fd_unix.go:426 +0x27c net.(*TCPListener).AcceptTCP(0xc8201721e0, 0xc820046ea8, 0x0, 0x0) /usr/local/go/src/net/tcpsock_posix.go:254 +0x4d net.(*TCPListener).Accept(0xc8201721e0, 0x0, 0x0, 0x0, 0x0) /usr/local/go/src/net/tcpsock_posix.go:264 +0x3d google.golang.org/grpc.(*Server).Serve(0xc8200aba00, 0x7f88b80e1730, 0xc8201721e0, 0x0, 0x0) /go/src/google.golang.org/grpc/server.go:252 +0x1b5 github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2() /go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f15c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 54 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/rpc.NewContext.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:89 +0x57 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f16c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 55 [select, 9 minutes]: google.golang.org/grpc.(*Conn).transportMonitor(0xc8201922a0) /go/src/google.golang.org/grpc/clientconn.go:510 +0x1d3 google.golang.org/grpc.NewConn.func1(0xc8201922a0) /go/src/google.golang.org/grpc/clientconn.go:321 +0x1b5 created by google.golang.org/grpc.NewConn /go/src/google.golang.org/grpc/clientconn.go:322 +0x4dd goroutine 56 [semacquire, 9 minutes]: sync.runtime_Semacquire(0xc8201ff7d4) /usr/local/go/src/runtime/sema.go:47 +0x26 sync.(*Mutex).Lock(0xc8201ff7d0) 
/usr/local/go/src/sync/mutex.go:83 +0x1c4 github.com/cockroachdb/cockroach/rpc.(*AdvancingClock).UnixNano(0xc8201ff7d0, 0x1160680) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:202 +0x30 github.com/cockroachdb/cockroach/rpc.(*AdvancingClock).UnixNano-fm(0x2) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:105 +0x20 github.com/cockroachdb/cockroach/util/hlc.(*Clock).getPhysicalClock(0xc8201ef080, 0xcf7e28) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:160 +0x25 github.com/cockroachdb/cockroach/util/hlc.(*Clock).PhysicalNow(0xc8201ef080, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:200 +0x7d github.com/cockroachdb/cockroach/util/hlc.(*Clock).PhysicalTime(0xc8201ef080, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:206 +0x35 github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8201353b0, 0xc820260280, 0xc820642260, 0xf, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/rpc/context.go:175 +0x19c github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:157 +0x66 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201e6000, 0xc8201f8e40) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 15 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f88b80e2558, 0x72, 0xc820145400) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc820240060, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc820240060, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc820240000, 0xc820145400, 0x400, 0x400, 0x0, 0x7f88b80a1028, 0xc82007c080) /usr/local/go/src/net/fd_unix.go:250 +0x23a net.(*conn).Read(0xc820088018, 0xc820145400, 0x400, 0x400, 0x0, 0x0, 0x0) 
/usr/local/go/src/net/net.go:172 +0xe4 crypto/tls.(*block).readFromUntil(0xc820194540, 0x7f88b80e2a50, 0xc820088018, 0x5, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:460 +0xcc crypto/tls.(*Conn).readRecord(0xc820462000, 0xcf7e17, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1 crypto/tls.(*Conn).Read(0xc820462000, 0xc820496000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:939 +0x167 bufio.(*Reader).fill(0xc820010360) /usr/local/go/src/bufio/bufio.go:97 +0x1e9 bufio.(*Reader).Read(0xc820010360, 0xc82042c038, 0x9, 0x9, 0x19, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:207 +0x260 io.ReadAtLeast(0x7f88b8030918, 0xc820010360, 0xc82042c038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:297 +0xe6 io.ReadFull(0x7f88b8030918, 0xc820010360, 0xc82042c038, 0x9, 0x9, 0xc8203b3a48, 0x0, 0x0) /usr/local/go/src/io/io.go:315 +0x62 golang.org/x/net/http2.readFrameHeader(0xc82042c038, 0x9, 0x9, 0x7f88b8030918, 0xc820010360, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:236 +0xa5 golang.org/x/net/http2.(*Framer).ReadFrame(0xc82042c000, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:463 +0x106 google.golang.org/grpc/transport.(*framer).readFrame(0xc8201fa330, 0x0, 0x0, 0x0, 0x0) /go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820236000, 0xc8201fa3f0) /go/src/google.golang.org/grpc/transport/http2_server.go:247 +0x646 google.golang.org/grpc.(*Server).serveStreams(0xc8200aba00, 0x7f88b8030968, 0xc820236000) /go/src/google.golang.org/grpc/server.go:325 +0x159 google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc8200aba00, 0x7f88b8030840, 0xc820462000, 0x7f88b80308a0, 0xc8202562c0) /go/src/google.golang.org/grpc/server.go:312 +0x49d google.golang.org/grpc.(*Server).handleRawConn(0xc8200aba00, 0x7f88b80e29f0, 0xc820088018) /go/src/google.golang.org/grpc/server.go:289 +0x4ee created by google.golang.org/grpc.(*Server).Serve 
/go/src/google.golang.org/grpc/server.go:261 +0x372 goroutine 16 [select, 9 minutes]: google.golang.org/grpc/transport.(*http2Server).controller(0xc820236000) /go/src/google.golang.org/grpc/transport/http2_server.go:620 +0x5da created by google.golang.org/grpc/transport.newHTTP2Server /go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x853 rax 0xfffffffffffffffc rbx 0xffffffff rcx 0x4612c9 rdx 0x80 rdi 0x4 rsi 0x7fff370baed8 rbp 0x11614a0 rsp 0x7fff370bae98 r8 0x11614a0 r9 0xc820020000 r10 0xffffffff r11 0x246 r12 0xc820236000 r13 0xa r14 0xbc91a0 r15 0x8 rip 0x4612c9 rflags 0x246 cs 0x33 fs 0x0 gs 0x0 ERROR: exit status 2 ``` Run Details: ``` 103 runs so far, 0 failures, over 5s 215 runs so far, 0 failures, over 10s 327 runs so far, 0 failures, over 15s 439 runs so far, 0 failures, over 20s 550 runs so far, 0 failures, over 25s 664 runs so far, 0 failures, over 30s 771 runs so far, 0 failures, over 35s 882 runs so far, 0 failures, over 40s 990 runs so far, 0 failures, over 45s 1100 runs so far, 0 failures, over 50s 1209 runs so far, 0 failures, over 55s 1322 runs so far, 0 failures, over 1m0s 1430 runs so far, 0 failures, over 1m5s 1536 runs so far, 0 failures, over 1m10s 1649 runs so far, 0 failures, over 1m15s 1760 runs so far, 0 failures, over 1m20s 1867 runs so far, 0 failures, over 1m25s 1975 runs so far, 0 failures, over 1m30s 2084 runs so far, 0 failures, over 1m35s 2186 runs so far, 0 failures, over 1m40s 2291 runs so far, 0 failures, over 1m45s 2398 runs so far, 0 failures, over 1m50s 2506 runs so far, 0 failures, over 1m55s 2613 runs so far, 0 failures, over 2m0s 2717 runs so far, 0 failures, over 2m5s 2822 runs so far, 0 failures, over 2m10s 2929 runs so far, 0 failures, over 2m15s 3031 runs so far, 0 failures, over 2m20s 3143 runs so far, 0 failures, over 2m25s 3245 runs so far, 0 failures, over 2m30s 3353 runs so far, 0 failures, over 2m35s 3460 runs so far, 0 failures, over 2m40s 3565 runs so far, 0 failures, over 2m45s 3670 runs so far, 
0 failures, over 2m50s 3772 runs so far, 0 failures, over 2m55s 3876 runs so far, 0 failures, over 3m0s 3981 runs so far, 0 failures, over 3m5s 4080 runs so far, 0 failures, over 3m10s 4182 runs so far, 0 failures, over 3m15s 4284 runs so far, 0 failures, over 3m20s 4385 runs so far, 0 failures, over 3m25s 4485 runs so far, 0 failures, over 3m30s 4588 runs so far, 0 failures, over 3m35s 4687 runs so far, 0 failures, over 3m40s 4785 runs so far, 0 failures, over 3m45s 4880 runs so far, 0 failures, over 3m50s 4982 runs so far, 0 failures, over 3m55s 5076 runs so far, 0 failures, over 4m0s 5177 runs so far, 0 failures, over 4m5s 5277 runs so far, 0 failures, over 4m10s 5372 runs so far, 0 failures, over 4m15s 5469 runs so far, 0 failures, over 4m20s 5566 runs so far, 0 failures, over 4m25s 5664 runs so far, 0 failures, over 4m30s 5765 runs so far, 0 failures, over 4m35s 5862 runs so far, 0 failures, over 4m40s 5961 runs so far, 0 failures, over 4m45s 6062 runs so far, 0 failures, over 4m50s 6165 runs so far, 0 failures, over 4m55s 6265 runs so far, 0 failures, over 5m0s 6361 runs so far, 0 failures, over 5m5s 6458 runs so far, 0 failures, over 5m10s 6557 runs so far, 0 failures, over 5m15s 6655 runs so far, 0 failures, over 5m20s 6756 runs so far, 0 failures, over 5m25s 6849 runs so far, 0 failures, over 5m30s 6948 runs so far, 0 failures, over 5m35s 7048 runs so far, 0 failures, over 5m40s 7146 runs so far, 0 failures, over 5m45s 7245 runs so far, 0 failures, over 5m50s 7338 runs so far, 0 failures, over 5m55s 7439 runs so far, 0 failures, over 6m0s 7536 runs so far, 0 failures, over 6m5s 7632 runs so far, 0 failures, over 6m10s 7732 runs so far, 0 failures, over 6m15s 7830 runs so far, 0 failures, over 6m20s 7928 runs so far, 0 failures, over 6m25s 8027 runs so far, 0 failures, over 6m30s 8129 runs so far, 0 failures, over 6m35s 8227 runs so far, 0 failures, over 6m40s 8326 runs so far, 0 failures, over 6m45s 8429 runs so far, 0 failures, over 6m50s 8524 runs so 
far, 0 failures, over 6m55s 8622 runs so far, 0 failures, over 7m0s 8719 runs so far, 0 failures, over 7m5s 8813 runs so far, 0 failures, over 7m10s 8915 runs so far, 0 failures, over 7m15s 9011 runs so far, 0 failures, over 7m20s 9111 runs so far, 0 failures, over 7m25s 9207 runs so far, 0 failures, over 7m30s 9310 runs so far, 0 failures, over 7m35s 9412 runs so far, 0 failures, over 7m40s 9513 runs so far, 0 failures, over 7m45s 9611 runs so far, 0 failures, over 7m50s 9710 runs so far, 0 failures, over 7m55s 9808 runs so far, 0 failures, over 8m0s 9905 runs so far, 0 failures, over 8m5s 10004 runs so far, 0 failures, over 8m10s 10100 runs so far, 0 failures, over 8m15s 10198 runs so far, 0 failures, over 8m20s 10300 runs so far, 0 failures, over 8m25s 10397 runs so far, 0 failures, over 8m30s 10493 runs so far, 0 failures, over 8m35s 10592 runs so far, 0 failures, over 8m40s 10692 runs so far, 0 failures, over 8m45s 10784 runs so far, 0 failures, over 8m50s 10880 runs so far, 0 failures, over 8m55s 10974 runs so far, 0 failures, over 9m0s 11073 runs so far, 0 failures, over 9m5s 11164 runs so far, 0 failures, over 9m10s 11259 runs so far, 0 failures, over 9m15s 11352 runs so far, 0 failures, over 9m20s 11452 runs so far, 0 failures, over 9m25s 11550 runs so far, 0 failures, over 9m30s 11646 runs so far, 0 failures, over 9m35s 11739 runs so far, 0 failures, over 9m40s 11838 runs so far, 0 failures, over 9m45s 11936 runs so far, 0 failures, over 9m50s 12031 runs so far, 0 failures, over 9m55s 12126 runs so far, 0 failures, over 10m0s 12224 runs so far, 0 failures, over 10m5s 12317 runs so far, 0 failures, over 10m10s 12413 runs so far, 0 failures, over 10m15s 12510 runs so far, 0 failures, over 10m20s 12605 runs so far, 0 failures, over 10m25s 12695 runs so far, 0 failures, over 10m30s 12761 runs completed, 1 failures, over 10m33s FAIL ``` Please assign, take a look and update the issue accordingly.
non_process
stress failed test in cockroach rpc rpc test testoffsetmeasurement binary cockroach static tests tar gz sha stress build found a failed test run testoffsetmeasurement sigabrt abort pc m goroutine runtime epollwait usr local go src runtime sys linux s runtime netpoll usr local go src runtime netpoll epoll go runtime findrunnable usr local go src runtime proc go runtime schedule usr local go src runtime proc go runtime usr local go src runtime proc go runtime mcall usr local go src runtime asm s goroutine testing runtests usr local go src testing testing go testing m run usr local go src testing testing go main main github com cockroachdb cockroach rpc test testmain go goroutine runtime goexit usr local go src runtime asm s goroutine sync runtime semacquire usr local go src runtime sema go sync mutex lock usr local go src sync mutex go github com cockroachdb cockroach util hlc clock maxoffset go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach rpc testoffsetmeasurement go src github com cockroachdb cockroach rpc context test go testing trunner usr local go src testing testing go created by testing runtests usr local go src testing testing go goroutine github com cockroachdb cockroach util log loggingt flushdaemon go src github com cockroachdb cockroach util log clog go created by github com cockroachdb cockroach util log init go src github com cockroachdb cockroach util log clog go goroutine google golang org grpc transport controller go src google golang org grpc transport client go created by google golang org grpc transport go src google golang org grpc transport client go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd read usr local go src net fd unix go net conn read usr local go src net net go crypto tls block readfromuntil usr local go src crypto tls conn go crypto tls conn 
readrecord usr local go src crypto tls conn go crypto tls conn read usr local go src crypto tls conn go bufio reader fill usr local go src bufio bufio go bufio reader read usr local go src bufio bufio go io readatleast usr local go src io io go io readfull usr local go src io io go golang org x net readframeheader go src golang org x net frame go golang org x net framer readframe go src golang org x net frame go google golang org grpc transport framer readframe go src google golang org grpc transport http util go google golang org grpc transport reader go src google golang org grpc transport client go created by google golang org grpc transport go src google golang org grpc transport client go goroutine github com cockroachdb cockroach rpc newcontext go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine github com cockroachdb cockroach util listenandservegrpc go src github com cockroachdb cockroach util net go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd accept usr local go src net fd unix go net tcplistener accepttcp usr local go src net tcpsock posix go net tcplistener accept usr local go src net tcpsock posix go google golang org grpc server serve go src google golang org grpc server go github com cockroachdb cockroach util listenandservegrpc go src github com cockroachdb cockroach util net go github 
com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine github com cockroachdb cockroach rpc newcontext go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine google golang org grpc conn transportmonitor go src google golang org grpc clientconn go google golang org grpc newconn go src google golang org grpc clientconn go created by google golang org grpc newconn go src google golang org grpc clientconn go goroutine sync runtime semacquire usr local go src runtime sema go sync mutex lock usr local go src sync mutex go github com cockroachdb cockroach rpc advancingclock unixnano go src github com cockroachdb cockroach rpc context test go github com cockroachdb cockroach rpc advancingclock unixnano fm go src github com cockroachdb cockroach rpc context test go github com cockroachdb cockroach util hlc clock getphysicalclock go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach util hlc clock physicalnow go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach util hlc clock physicaltime go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach rpc context runheartbeat go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach rpc context grpcdial go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper 
runworker go src github com cockroachdb cockroach util stop stopper go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd read usr local go src net fd unix go net conn read usr local go src net net go crypto tls block readfromuntil usr local go src crypto tls conn go crypto tls conn readrecord usr local go src crypto tls conn go crypto tls conn read usr local go src crypto tls conn go bufio reader fill usr local go src bufio bufio go bufio reader read usr local go src bufio bufio go io readatleast usr local go src io io go io readfull usr local go src io io go golang org x net readframeheader go src golang org x net frame go golang org x net framer readframe go src golang org x net frame go google golang org grpc transport framer readframe go src google golang org grpc transport http util go google golang org grpc transport handlestreams go src google golang org grpc transport server go google golang org grpc server servestreams go src google golang org grpc server go google golang org grpc server go src google golang org grpc server go google golang org grpc server handlerawconn go src google golang org grpc server go created by google golang org grpc server serve go src google golang org grpc server go goroutine google golang org grpc transport controller go src google golang org grpc transport server go created by google golang org grpc transport go src google golang org grpc transport server go rax rbx rcx rdx rdi rsi rbp rsp rip rflags cs fs gs error exit status run details runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far 
failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so 
far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs completed failures over fail please assign take a look and update the issue accordingly
0
337,052
30,236,605,510
IssuesEvent
2023-07-06 10:39:32
keycloak/keycloak
https://api.github.com/repos/keycloak/keycloak
closed
UserSessionProviderModelTest#testRemoteCachesParallel sessions are not removed after the test
area/testsuite kind/bug area/storage team/store team/continuous-testing
### Before reporting an issue - [X] I have searched existing issues - [X] I have reproduced the issue with the latest release ### Area testsuite ### Describe the bug UserSessionProviderModelTest#testRemoteCachesParallel creates sessions without the realm directly in the cache, meaning they are not cleaned after the test. This cause instability in GHA. ### Version main ### Expected behavior Session are cleaned ### Actual behavior Session are not cleaned ### How to Reproduce? Run UserSessionProviderModelTest#testRemoteCachesParallel with `hot-rod` profile multiple times ### Anything else? _No response_
2.0
UserSessionProviderModelTest#testRemoteCachesParallel sessions are not removed after the test - ### Before reporting an issue - [X] I have searched existing issues - [X] I have reproduced the issue with the latest release ### Area testsuite ### Describe the bug UserSessionProviderModelTest#testRemoteCachesParallel creates sessions without the realm directly in the cache, meaning they are not cleaned after the test. This cause instability in GHA. ### Version main ### Expected behavior Session are cleaned ### Actual behavior Session are not cleaned ### How to Reproduce? Run UserSessionProviderModelTest#testRemoteCachesParallel with `hot-rod` profile multiple times ### Anything else? _No response_
non_process
usersessionprovidermodeltest testremotecachesparallel sessions are not removed after the test before reporting an issue i have searched existing issues i have reproduced the issue with the latest release area testsuite describe the bug usersessionprovidermodeltest testremotecachesparallel creates sessions without the realm directly in the cache meaning they are not cleaned after the test this cause instability in gha version main expected behavior session are cleaned actual behavior session are not cleaned how to reproduce run usersessionprovidermodeltest testremotecachesparallel with hot rod profile multiple times anything else no response
0
721,144
24,819,526,032
IssuesEvent
2022-10-25 15:23:37
AY2223S1-CS2103T-T11-2/tp
https://api.github.com/repos/AY2223S1-CS2103T-T11-2/tp
closed
Add Tests for Task and Status
enhancement Priority.High
The tests we currently have do not cover ```Task``` and ```Status``` effectively. Let's improve the testing by adding more tests for ```Task``` and ```Status```.
1.0
Add Tests for Task and Status - The tests we currently have do not cover ```Task``` and ```Status``` effectively. Let's improve the testing by adding more tests for ```Task``` and ```Status```.
non_process
add tests for task and status the tests we currently have do not cover task and status effectively let s improve the testing by adding more tests for task and status
0
23,897
7,431,133,138
IssuesEvent
2018-03-25 11:40:50
magicDGS/jsr203-http
https://api.github.com/repos/magicDGS/jsr203-http
closed
Add CI automatic testing for windows
DevOps build
Using [Circle-CI](https://circleci.com/), where if I remember correctly I have a free account for public projects.
1.0
Add CI automatic testing for windows - Using [Circle-CI](https://circleci.com/), where if I remember correctly I have a free account for public projects.
non_process
add ci automatic testing for windows using where if i remember correctly i have a free account for public projects
0
27,149
27,753,083,817
IssuesEvent
2023-03-15 22:43:16
matomo-org/matomo
https://api.github.com/repos/matomo-org/matomo
opened
Fix inconsistencies across dropdown elements
Enhancement c: Usability To Triage
Context: Through community feedback we've heard that the UI of Matomo has room for improvement. We are therefore looking at improvements (small and large) that can have an impact on making the platform easier to use, particularly for new users who are struggling the most. Problem: Dropdown elements have visual inconsistencies, leading to an unnecessarily difficult UX Proposed solution: - move icons on dropdowns to after the text - make the icons consistent (e.g. the standard v icon pointing downwards when folded in, and pointing up when folding out) - check that these controllers are consistent in the whole codebase (core + plugins)
True
Fix inconsistencies across dropdown elements - Context: Through community feedback we've heard that the UI of Matomo has room for improvement. We are therefore looking at improvements (small and large) that can have an impact on making the platform easier to use, particularly for new users who are struggling the most. Problem: Dropdown elements have visual inconsistencies, leading to an unnecessarily difficult UX Proposed solution: - move icons on dropdowns to after the text - make the icons consistent (e.g. the standard v icon pointing downwards when folded in, and pointing up when folding out) - check that these controllers are consistent in the whole codebase (core + plugins)
non_process
fix inconsistencies across dropdown elements context through community feedback we ve heard that the ui of matomo has room for improvement we are therefore looking at improvements small and large that can have an impact on making the platform easier to use particularly for new users who are struggling the most problem dropdown elements have visual inconsistencies leading to an unnecessarily difficult ux proposed solution move icons on dropdowns to after the text make the icons consistent e g the standard v icon pointing downwards when folded in and pointing up when folding out check that these controllers are consistent in the whole codebase core plugins
0
226,392
18,015,661,514
IssuesEvent
2021-09-16 13:40:42
nodejs/node-addon-api
https://api.github.com/repos/nodejs/node-addon-api
closed
Methods without corresponding tests
test good first issue stale
Need to document then list of methods that still need tests.
1.0
Methods without corresponding tests - Need to document then list of methods that still need tests.
non_process
methods without corresponding tests need to document then list of methods that still need tests
0
407,808
11,937,435,159
IssuesEvent
2020-04-02 12:10:43
googleapis/nodejs-translate
https://api.github.com/repos/googleapis/nodejs-translate
closed
Synthesis failed for nodejs-translate
api: translation autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate nodejs-translate. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to a new branch 'autosynth' Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 482, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 334, in main return _inner_main(temp_dir) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 418, in _inner_main git_source.enumerate_versions(metadata["sources"], pathlib.Path(temp_dir)) KeyError: 'sources' ``` Google internal developers can see the full log [here](https://sponge/fbc7e856-fc71-4dc2-81b5-76163684ca8a).
1.0
Synthesis failed for nodejs-translate - Hello! Autosynth couldn't regenerate nodejs-translate. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to a new branch 'autosynth' Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 482, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 334, in main return _inner_main(temp_dir) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 418, in _inner_main git_source.enumerate_versions(metadata["sources"], pathlib.Path(temp_dir)) KeyError: 'sources' ``` Google internal developers can see the full log [here](https://sponge/fbc7e856-fc71-4dc2-81b5-76163684ca8a).
non_process
synthesis failed for nodejs translate hello autosynth couldn t regenerate nodejs translate broken heart here s the output from running synth py cloning into working repo switched to a new branch autosynth traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src git autosynth autosynth synth py line in main return inner main temp dir file tmpfs src git autosynth autosynth synth py line in inner main git source enumerate versions metadata pathlib path temp dir keyerror sources google internal developers can see the full log
0
15,888
20,075,035,674
IssuesEvent
2022-02-04 11:43:30
climatepolicyradar/navigator
https://api.github.com/repos/climatepolicyradar/navigator
opened
Extract text passages using heuristic rules when adding new document
Document processing
When adding a document to the system, in either the bulk load from CCLW or when adding individual documents, the system should extract the text from that document. The text should be extracted using a baseline method using heuristic rules to identify words, sentences and paragraphs. Extracted text passages should be stored against that document as passages in the database.
1.0
Extract text passages using heuristic rules when adding new document - When adding a document to the system, in either the bulk load from CCLW or when adding individual documents, the system should extract the text from that document. The text should be extracted using a baseline method using heuristic rules to identify words, sentences and paragraphs. Extracted text passages should be stored against that document as passages in the database.
process
extract text passages using heuristic rules when adding new document when adding a document to the system in either the bulk load from cclw or when adding individual documents the system should extract the text from that document the text should be extracted using a baseline method using heuristic rules to identify words sentences and paragraphs extracted text passages should be stored against that document as passages in the database
1
371,885
10,987,019,577
IssuesEvent
2019-12-02 08:16:53
yilmazvolkan/purposefulCommunityPlatform
https://api.github.com/repos/yilmazvolkan/purposefulCommunityPlatform
opened
Community Screen
Effort: High Mobile Priority: High Status: In Progress
1. Create join to community screen 1. Create my communities feed screen 1. New functionalities for each user role
1.0
Community Screen - 1. Create join to community screen 1. Create my communities feed screen 1. New functionalities for each user role
non_process
community screen create join to community screen create my communities feed screen new functionalities for each user role
0
324,631
9,906,624,793
IssuesEvent
2019-06-27 14:13:37
python/mypy
https://api.github.com/repos/python/mypy
closed
Tuple slice by variable results in incorrect type
bug priority-1-normal
```python from typing import Tuple sz = 2 x: Tuple[int, ...] = (1, 2)[:sz] ``` ```console $ mypy --version mypy 0.711 $ mypy t.py mt.py:4: error: Incompatible types in assignment (expression has type "int", variable has type "Tuple[int, ...]") ```
1.0
Tuple slice by variable results in incorrect type - ```python from typing import Tuple sz = 2 x: Tuple[int, ...] = (1, 2)[:sz] ``` ```console $ mypy --version mypy 0.711 $ mypy t.py mt.py:4: error: Incompatible types in assignment (expression has type "int", variable has type "Tuple[int, ...]") ```
non_process
tuple slice by variable results in incorrect type python from typing import tuple sz x tuple console mypy version mypy mypy t py mt py error incompatible types in assignment expression has type int variable has type tuple
0
11
2,497,881,994
IssuesEvent
2015-01-07 11:59:09
kendrainitiative/kendra_hub
https://api.github.com/repos/kendrainitiative/kendra_hub
opened
Kendra Social interaction
Architecture High priority
We have talked about integration with social media but I think the fist stage should be enabling discoverability of users within the system. We currently have a Contacts page that lists legal entities and we have enabled user to user communications. We should consider replacing the Legal entities listing (demo) with a directory of users. For this we should consider the following: * Users to mark their account as privet/public or ??. * Can anyone contact anyone or do we need a "friend" like functionality or a my contact list. * Get users to provide other information like company/ industry/ skills ...
1.0
Kendra Social interaction - We have talked about integration with social media but I think the fist stage should be enabling discoverability of users within the system. We currently have a Contacts page that lists legal entities and we have enabled user to user communications. We should consider replacing the Legal entities listing (demo) with a directory of users. For this we should consider the following: * Users to mark their account as privet/public or ??. * Can anyone contact anyone or do we need a "friend" like functionality or a my contact list. * Get users to provide other information like company/ industry/ skills ...
non_process
kendra social interaction we have talked about integration with social media but i think the fist stage should be enabling discoverability of users within the system we currently have a contacts page that lists legal entities and we have enabled user to user communications we should consider replacing the legal entities listing demo with a directory of users for this we should consider the following users to mark their account as privet public or can anyone contact anyone or do we need a friend like functionality or a my contact list get users to provide other information like company industry skills
0
16,677
21,780,783,796
IssuesEvent
2022-05-13 18:37:41
carbon-design-system/ibm-cloud-cognitive
https://api.github.com/repos/carbon-design-system/ibm-cloud-cognitive
closed
Change frequency of dependency updates
dependencies type: process improvement
## What will this achieve? Currently our dependencies are automatically updated via a PR opened by a github action every week. To help with everyone's overall workload currently, we should decrease the frequency of these updates. <!-- e.g. - bug fix - unit testing - review - enhancement - component implementation --> ## How will success be measured? Changing dependency update PRs to possible every other week <!-- e.g. - Will tests be added/passed? - Will design review the new feature? - Is a bug being resolved? --> ## Additional information - Designs - Existing code - etc
1.0
Change frequency of dependency updates - ## What will this achieve? Currently our dependencies are automatically updated via a PR opened by a github action every week. To help with everyone's overall workload currently, we should decrease the frequency of these updates. <!-- e.g. - bug fix - unit testing - review - enhancement - component implementation --> ## How will success be measured? Changing dependency update PRs to possible every other week <!-- e.g. - Will tests be added/passed? - Will design review the new feature? - Is a bug being resolved? --> ## Additional information - Designs - Existing code - etc
process
change frequency of dependency updates what will this achieve currently our dependencies are automatically updated via a pr opened by a github action every week to help with everyone s overall workload currently we should decrease the frequency of these updates e g bug fix unit testing review enhancement component implementation how will success be measured changing dependency update prs to possible every other week e g will tests be added passed will design review the new feature is a bug being resolved additional information designs existing code etc
1
20,694
27,367,118,116
IssuesEvent
2023-02-27 20:07:02
bcgov/upptime
https://api.github.com/repos/bcgov/upptime
opened
CronJob prod-dh-hold-processor-cj is stuck
status-switch status prod-dh-hold-processor-cj
Cron job is taking longer than 10 minutes to complete
1.0
CronJob prod-dh-hold-processor-cj is stuck - Cron job is taking longer than 10 minutes to complete
process
cronjob prod dh hold processor cj is stuck cron job is taking longer than minutes to complete
1
16,965
22,329,826,850
IssuesEvent
2022-06-14 13:43:37
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
ProcessStartInfo wraps quotes arround my `--option=`
area-System.Diagnostics.Process untriaged
### Description I want to use `ArgumentList.Add` to safely add arguments in a secure way but it wraps double quotes around the entire string even when it is an key–value pair option. ### Reproduction Steps ```cs using System.Diagnostics; var info = new ProcessStartInfo("cmd"); info.UseShellExecute = false; info.ArgumentList.Add("/C"); info.ArgumentList.Add("echo"); info.ArgumentList.Add("--author=Alice Smith"); Process.Start(info); ``` ### Expected behavior --author="Alice Smith" ### Actual behavior "--author=Alice Smith" ### Regression? _No response_ ### Known Workarounds _No response_ ### Configuration ``` .NET SDK (reflecting any global.json): Version: 6.0.300 Commit: 8473146e7d Runtime Environment: OS Name: Windows OS Version: 10.0.22000 OS Platform: Windows RID: win10-x64 Base Path: C:\Program Files\dotnet\sdk\6.0.300\ Host (useful for support): Version: 6.0.5 Commit: 70ae3df4a6 ``` ### Other information _No response_
1.0
ProcessStartInfo wraps quotes arround my `--option=` - ### Description I want to use `ArgumentList.Add` to safely add arguments in a secure way but it wraps double quotes around the entire string even when it is an key–value pair option. ### Reproduction Steps ```cs using System.Diagnostics; var info = new ProcessStartInfo("cmd"); info.UseShellExecute = false; info.ArgumentList.Add("/C"); info.ArgumentList.Add("echo"); info.ArgumentList.Add("--author=Alice Smith"); Process.Start(info); ``` ### Expected behavior --author="Alice Smith" ### Actual behavior "--author=Alice Smith" ### Regression? _No response_ ### Known Workarounds _No response_ ### Configuration ``` .NET SDK (reflecting any global.json): Version: 6.0.300 Commit: 8473146e7d Runtime Environment: OS Name: Windows OS Version: 10.0.22000 OS Platform: Windows RID: win10-x64 Base Path: C:\Program Files\dotnet\sdk\6.0.300\ Host (useful for support): Version: 6.0.5 Commit: 70ae3df4a6 ``` ### Other information _No response_
process
processstartinfo wraps quotes arround my option description i want to use argumentlist add to safely add arguments in a secure way but it wraps double quotes around the entire string even when it is an key–value pair option reproduction steps cs using system diagnostics var info new processstartinfo cmd info useshellexecute false info argumentlist add c info argumentlist add echo info argumentlist add author alice smith process start info expected behavior author alice smith actual behavior author alice smith regression no response known workarounds no response configuration net sdk reflecting any global json version commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk host useful for support version commit other information no response
1
745,772
26,000,154,118
IssuesEvent
2022-12-20 14:44:35
keycloak/keycloak
https://api.github.com/repos/keycloak/keycloak
opened
[CVE-2022-41881] Denial of Service (DoS) vulnerability in io.netty:netty-codec-haproxy
priority/important area/dependencies status/not-vulnerable kind/cve
### Description ### Detailed paths - _Introduced through_: org.keycloak:keycloak-quarkus-server-app@999-SNAPSHOT › org.keycloak:keycloak-quarkus-server@999-SNAPSHOT › io.quarkus:[quarkus-vertx@2.13.5.Final](mailto:quarkus-vertx@2.13.5.Final) › io.netty:[netty-codec-haproxy@4.1.82.Final](mailto:netty-codec-haproxy@4.1.82.Final) # Overview Affected versions of this package are vulnerable to Denial of Service (DoS) due to a stack overflow that can be triggered by a malformed message containing a deeply nested TLV. (The only limitation on such recursion is that the TLV length cannot exceed `0xffff`.) **NOTE:** The StackOverflowError is caught if HAProxyMessageDecoder is used as part of Netty’s ChannelPipeline. # Details Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its intended and legitimate users. Unlike other vulnerabilities, DoS attacks usually do not aim at breaching security. Rather, they are focused on making websites and services unavailable to genuine users resulting in downtime. One popular Denial of Service vulnerability is DDoS (a Distributed Denial of Service), an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines. When it comes to open source libraries, DoS vulnerabilities allow attackers to trigger such a crash or crippling of the service by using a flaw either in the application code or from the use of open source libraries. # Remediation Upgrade io.netty:netty-codec-haproxy to version 4.1.86.Final or higher. # References - [GitHub Commit](https://github.com/netty/netty/commit/cd91cf3c99123bd1e53fd6a1de0e3d1922f05bb2) - [CVE-2022-41881](https://www.cve.org/CVERecord?id=CVE-2022-41881)
1.0
[CVE-2022-41881] Denial of Service (DoS) vulnerability in io.netty:netty-codec-haproxy - ### Description ### Detailed paths - _Introduced through_: org.keycloak:keycloak-quarkus-server-app@999-SNAPSHOT › org.keycloak:keycloak-quarkus-server@999-SNAPSHOT › io.quarkus:[quarkus-vertx@2.13.5.Final](mailto:quarkus-vertx@2.13.5.Final) › io.netty:[netty-codec-haproxy@4.1.82.Final](mailto:netty-codec-haproxy@4.1.82.Final) # Overview Affected versions of this package are vulnerable to Denial of Service (DoS) due to a stack overflow that can be triggered by a malformed message containing a deeply nested TLV. (The only limitation on such recursion is that the TLV length cannot exceed `0xffff`.) **NOTE:** The StackOverflowError is caught if HAProxyMessageDecoder is used as part of Netty’s ChannelPipeline. # Details Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its intended and legitimate users. Unlike other vulnerabilities, DoS attacks usually do not aim at breaching security. Rather, they are focused on making websites and services unavailable to genuine users resulting in downtime. One popular Denial of Service vulnerability is DDoS (a Distributed Denial of Service), an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines. When it comes to open source libraries, DoS vulnerabilities allow attackers to trigger such a crash or crippling of the service by using a flaw either in the application code or from the use of open source libraries. # Remediation Upgrade io.netty:netty-codec-haproxy to version 4.1.86.Final or higher. # References - [GitHub Commit](https://github.com/netty/netty/commit/cd91cf3c99123bd1e53fd6a1de0e3d1922f05bb2) - [CVE-2022-41881](https://www.cve.org/CVERecord?id=CVE-2022-41881)
non_process
denial of service dos vulnerability in io netty netty codec haproxy description detailed paths introduced through org keycloak keycloak quarkus server app snapshot › org keycloak keycloak quarkus server snapshot › io quarkus mailto quarkus vertx final  › io netty mailto netty codec haproxy final overview affected versions of this package are vulnerable to denial of service dos due to a stack overflow that can be triggered by a malformed message containing a deeply nested tlv the only limitation on such recursion is that the tlv length cannot exceed  note  the stackoverflowerror is caught if haproxymessagedecoder is used as part of netty’s channelpipeline details denial of service dos describes a family of attacks all aimed at making a system inaccessible to its intended and legitimate users unlike other vulnerabilities dos attacks usually do not aim at breaching security rather they are focused on making websites and services unavailable to genuine users resulting in downtime one popular denial of service vulnerability is ddos a distributed denial of service an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines when it comes to open source libraries dos vulnerabilities allow attackers to trigger such a crash or crippling of the service by using a flaw either in the application code or from the use of open source libraries remediation upgrade io netty netty codec haproxy to version final or higher references
0
6,756
9,882,561,376
IssuesEvent
2019-06-24 17:10:20
gunn4r/material-ui-advanced-table
https://api.github.com/repos/gunn4r/material-ui-advanced-table
opened
Switch storybook webpack to use babel-loader for TS compilation
development process
Also make sure HMR and such is working.
1.0
Switch storybook webpack to use babel-loader for TS compilation - Also make sure HMR and such is working.
process
switch storybook webpack to use babel loader for ts compilation also make sure hmr and such is working
1
20,971
27,819,515,451
IssuesEvent
2023-03-19 03:24:54
cse442-at-ub/project_s23-cinco
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
closed
Implement login and register form using the database on my local demo app
IO Task Processing Task Sprint 2
Test VPN into UB servers. Go to "Connect to database" issue to learn more. Start up Apache web server, make sure the two php files login.php and register.php are in your htdocs folder. Execute "npm start". "sudo npm start" if you have insufficent permissions. Enter this URL http://localhost:3000. Verify you can see a login and register form. LOGIN Enter in a random credential, press LOGIN and verify that an error message appears about invalid info. Enter in "host" for username and "123" for password and press LOGIN. Verify you are logged in. Enter in "host" for username and any password in the register form and click register. Confirm you are prevented from registering because the username was taken. REGISTER Enter in credentials that would not be on the database and click Register. Confirm the data was inserted, you can go to the database in phpmyadmin to confirm this. Enter in a random credential not on the database. Confirm you registered successfully. Try to login with that information and confirm you can login.
1.0
Implement login and register form using the database on my local demo app - Test VPN into UB servers. Go to "Connect to database" issue to learn more. Start up Apache web server, make sure the two php files login.php and register.php are in your htdocs folder. Execute "npm start". "sudo npm start" if you have insufficent permissions. Enter this URL http://localhost:3000. Verify you can see a login and register form. LOGIN Enter in a random credential, press LOGIN and verify that an error message appears about invalid info. Enter in "host" for username and "123" for password and press LOGIN. Verify you are logged in. Enter in "host" for username and any password in the register form and click register. Confirm you are prevented from registering because the username was taken. REGISTER Enter in credentials that would not be on the database and click Register. Confirm the data was inserted, you can go to the database in phpmyadmin to confirm this. Enter in a random credential not on the database. Confirm you registered successfully. Try to login with that information and confirm you can login.
process
implement login and register form using the database on my local demo app test vpn into ub servers go to connect to database issue to learn more start up apache web server make sure the two php files login php and register php are in your htdocs folder execute npm start sudo npm start if you have insufficent permissions enter this url verify you can see a login and register form login enter in a random credential press login and verify that an error message appears about invalid info enter in host for username and for password and press login verify you are logged in enter in host for username and any password in the register form and click register confirm you are prevented from registering because the username was taken register enter in credentials that would not be on the database and click register confirm the data was inserted you can go to the database in phpmyadmin to confirm this enter in a random credential not on the database confirm you registered successfully try to login with that information and confirm you can login
1
6,548
9,637,599,586
IssuesEvent
2019-05-16 09:10:27
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
GH labels reorg
area: Process enhancement
Following https://github.com/zephyrproject-rtos/zephyr/pull/15054 (Assuming it is merged), this issue aims at settling other fixes for the labels which were not trivially agreed yet. I list each proposal in a separate point below, if you agree just give it a thumbs-up, if you disagree a down (and create a separate comment for it). Maybe we can handle the discussion just in this fashion.
1.0
GH labels reorg - Following https://github.com/zephyrproject-rtos/zephyr/pull/15054 (Assuming it is merged), this issue aims at settling other fixes for the labels which were not trivially agreed yet. I list each proposal in a separate point below, if you agree just give it a thumbs-up, if you disagree a down (and create a separate comment for it). Maybe we can handle the discussion just in this fashion.
process
gh labels reorg following assuming it is merged this issue aims at settling other fixes for the labels which were not trivially agreed yet i list each proposal in a separate point below if you agree just give it a thumbs up if you disagree a down and create a separate comment for it maybe we can handle the discussion just in this fashion
1
33,678
2,770,905,638
IssuesEvent
2015-05-01 17:52:18
GoogleCloudPlatform/kubernetes
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
closed
Add support for new Docker 1.6 cgroup parent flag
area/isolation cluster/platform/mesos priority/P3 team/node
Docker 1.6 allows setting the parent cgroup which allows us to set the parent for pods, see #5671
1.0
Add support for new Docker 1.6 cgroup parent flag - Docker 1.6 allows setting the parent cgroup which allows us to set the parent for pods, see #5671
non_process
add support for new docker cgroup parent flag docker allows setting the parent cgroup which allows us to set the parent for pods see
0
240,212
26,254,337,749
IssuesEvent
2023-01-05 22:33:48
MValle21/Intelehealth-WebApp
https://api.github.com/repos/MValle21/Intelehealth-WebApp
opened
CVE-2021-23364 (Medium) detected in browserslist-4.14.0.tgz
security vulnerability
## CVE-2021-23364 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.14.0.tgz</b></p></summary> <p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p> <p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.14.0.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.14.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/browserslist/package.json</p> <p> Dependency Hierarchy: - build-angular-0.1000.6.tgz (Root Library) - :x: **browserslist-4.14.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries. <p>Publish Date: 2021-04-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23364>CVE-2021-23364</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p> <p>Release Date: 2021-04-28</p> <p>Fix Resolution (browserslist): 4.16.5</p> <p>Direct dependency fix Resolution (@angular-devkit/build-angular): 0.1000.7</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2021-23364 (Medium) detected in browserslist-4.14.0.tgz - ## CVE-2021-23364 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.14.0.tgz</b></p></summary> <p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p> <p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.14.0.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.14.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/browserslist/package.json</p> <p> Dependency Hierarchy: - build-angular-0.1000.6.tgz (Root Library) - :x: **browserslist-4.14.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries. <p>Publish Date: 2021-04-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23364>CVE-2021-23364</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p> <p>Release Date: 2021-04-28</p> <p>Fix Resolution (browserslist): 4.16.5</p> <p>Direct dependency fix Resolution (@angular-devkit/build-angular): 0.1000.7</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_process
cve medium detected in browserslist tgz cve medium severity vulnerability vulnerable library browserslist tgz share target browsers between different front end tools like autoprefixer stylelint and babel env preset library home page a href path to dependency file package json path to vulnerable library node modules browserslist package json dependency hierarchy build angular tgz root library x browserslist tgz vulnerable library found in base branch master vulnerability details the package browserslist from and before are vulnerable to regular expression denial of service redos during parsing of queries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution browserslist direct dependency fix resolution angular devkit build angular rescue worker helmet automatic remediation is available for this issue
0
84,457
3,665,076,143
IssuesEvent
2016-02-19 14:45:18
e-government-ua/iTest
https://api.github.com/repos/e-government-ua/iTest
closed
Исправить ошибки в тестах
priority - High
https://jenkins-new.igov.org.ua/job/_iTest/241/ нужно провести фиксы в основе это замена ФИО и элементов Замена ФИО раньше был Дмитро Олександрович Дубілет стал Володимир Володимирович Білявцев исправить локаторы в пейгобжект
1.0
Исправить ошибки в тестах - https://jenkins-new.igov.org.ua/job/_iTest/241/ нужно провести фиксы в основе это замена ФИО и элементов Замена ФИО раньше был Дмитро Олександрович Дубілет стал Володимир Володимирович Білявцев исправить локаторы в пейгобжект
non_process
исправить ошибки в тестах нужно провести фиксы в основе это замена фио и элементов замена фио раньше был дмитро олександрович дубілет стал володимир володимирович білявцев исправить локаторы в пейгобжект
0
19,031
25,040,926,319
IssuesEvent
2022-11-04 20:40:20
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
opened
Release 5.4.0 - November 2022
P1 type: process release team-OSS
# Status of Bazel 5.4.0 - Expected release date: Next week - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/45) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 5.4, simply send a PR against the `release-5.4.0` branch. Task list: - [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit) - [ ] Send for review the release announcement PR: - [ ] Push the release, notify package maintainers: - [ ] Update the documentation - [ ] Push the blog post - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Release 5.4.0 - November 2022 - # Status of Bazel 5.4.0 - Expected release date: Next week - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/45) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 5.4, simply send a PR against the `release-5.4.0` branch. Task list: - [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit) - [ ] Send for review the release announcement PR: - [ ] Push the release, notify package maintainers: - [ ] Update the documentation - [ ] Push the blog post - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
release november status of bazel expected release date next week to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
1
350,052
10,477,909,117
IssuesEvent
2019-09-23 22:08:07
spacetx/starfish
https://api.github.com/repos/spacetx/starfish
closed
Cannot do ImageStack.{transform,apply} after ImageStack.{sel,isel}
bug high priority
The resulting ImageStack takes a reference to the backing MPDataArray. When we pass the MPDataArray to the child processes, we reshape the raw byte buffer to the shape of the resulting ImageStack. However, the MPDataArray is sized to reflect the original ImageStack's size, and so the reshape operation fails.
1.0
Cannot do ImageStack.{transform,apply} after ImageStack.{sel,isel} - The resulting ImageStack takes a reference to the backing MPDataArray. When we pass the MPDataArray to the child processes, we reshape the raw byte buffer to the shape of the resulting ImageStack. However, the MPDataArray is sized to reflect the original ImageStack's size, and so the reshape operation fails.
non_process
cannot do imagestack transform apply after imagestack sel isel the resulting imagestack takes a reference to the backing mpdataarray when we pass the mpdataarray to the child processes we reshape the raw byte buffer to the shape of the resulting imagestack however the mpdataarray is sized to reflect the original imagestack s size and so the reshape operation fails
0
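The starfish record above describes a size invariant worth making concrete: a raw buffer can only be reshaped to a target shape whose element count equals the buffer's element count, which is why a `sel`/`isel`-reduced ImageStack can no longer reshape the original-sized MPDataArray. A minimal sketch of that check (the shapes here are hypothetical, not taken from starfish):

```python
from math import prod

def can_reshape(buffer_size: int, new_shape: tuple) -> bool:
    """A reshape of a raw buffer is valid only when element counts match."""
    return prod(new_shape) == buffer_size

# The backing buffer keeps the ORIGINAL stack's size (hypothetical dims).
original_shape = (3, 4, 5)
buffer_size = prod(original_shape)   # 60 elements in the shared buffer

print(can_reshape(buffer_size, original_shape))  # True: full stack
print(can_reshape(buffer_size, (2, 4, 5)))       # False: stack after sel/isel
```

Since the sliced stack's shape multiplies to fewer elements than the full-size buffer, the reshape in the child process is guaranteed to fail.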
36,500
5,065,078,758
IssuesEvent
2016-12-23 10:19:23
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
github.com/cockroachdb/cockroach/pkg/storage: TestTransferRaftLeadership failed under stress
Robot test-failure
SHA: https://github.com/cockroachdb/cockroach/commits/eee8e77c21ceeb4f62b1e59d13caeb467bd2d9df Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV=false TAGS= GOFLAGS=-race ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=99098&tab=buildLog ``` I161223 09:50:10.598200 27883 storage/store.go:1258 [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I161223 09:50:10.598480 27883 gossip/gossip.go:292 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:49880" > attrs:<> locality:<> W161223 09:50:10.632861 27883 gossip/gossip.go:1130 [n?] no incoming or outgoing connections I161223 09:50:10.643362 28036 gossip/client.go:125 [n2] started gossip client to 127.0.0.1:49880 I161223 09:50:10.653485 27883 gossip/gossip.go:292 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:54608" > attrs:<> locality:<> W161223 09:50:10.667143 27883 gossip/gossip.go:1130 [n?] 
no incoming or outgoing connections I161223 09:50:10.691012 27883 storage/store.go:1258 [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available I161223 09:50:10.691236 27883 gossip/gossip.go:292 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:56607" > attrs:<> locality:<> I161223 09:50:10.712430 28078 gossip/client.go:125 [n3] started gossip client to 127.0.0.1:49880 I161223 09:50:10.792921 27894 storage/replica_command.go:2389 [s1,r1/1:/M{in-ax},@c4202d2d80] initiating a split of this range at key "a" [r2] I161223 09:50:10.804480 27883 storage/replica_raftstorage.go:468 [s1,r2/1:{"a"-/Max},@c4202d3200] generated preemptive snapshot e672a148 at index 11 I161223 09:50:10.806174 27883 storage/store.go:3278 [s1,r2/1:{"a"-/Max},@c4202d3200] streamed snapshot: kv pairs: 28, log entries: 1 I161223 09:50:10.807256 27898 storage/replica_raftstorage.go:633 [s2,r2/?:{-},@c4200a2000] applying preemptive snapshot at index 11 (id=e672a148, encoded size=3477, 1 rocksdb batches, 1 log entries) I161223 09:50:10.808162 27898 storage/replica_raftstorage.go:641 [s2,r2/?:{"a"-/Max},@c4200a2000] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I161223 09:50:10.810311 27883 storage/replica_command.go:3290 [s1,r2/1:{"a"-/Max},@c4202d3200] change replicas: read existing descriptor range_id:2 start_key:"a" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 W161223 09:50:10.814028 28128 storage/stores.go:218 range not contained in one range: [/Meta2/Max,"a\x00"), but have [/Min,"a") I161223 09:50:10.821230 28162 storage/replica.go:2364 [s1,r2/1:{"a"-/Max},@c4202d3200] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}] W161223 09:50:10.821550 28046 storage/intent_resolver.go:338 [n1,s1,r1/1:{/Min-"a"}]: failed to push during intent resolution: failed to push "change-replica" id=bdb0681d 
key=/Local/Range/"a"/RangeDescriptor rw=true pri=0.01855749 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,110 orig=0.000000123,110 max=0.000000123,110 wto=false rop=false I161223 09:50:10.826386 27883 storage/replica_raftstorage.go:468 [s1,r2/1:{"a"-/Max},@c4202d3200] generated preemptive snapshot b108b497 at index 14 I161223 09:50:10.828264 27883 storage/store.go:3278 [s1,r2/1:{"a"-/Max},@c4202d3200] streamed snapshot: kv pairs: 30, log entries: 4 I161223 09:50:10.829301 28096 storage/replica_raftstorage.go:633 [s3,r2/?:{-},@c420192480] applying preemptive snapshot at index 14 (id=b108b497, encoded size=4624, 1 rocksdb batches, 4 log entries) I161223 09:50:10.830490 28096 storage/replica_raftstorage.go:641 [s3,r2/?:{"a"-/Max},@c420192480] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I161223 09:50:10.832431 28164 storage/raft_transport.go:437 raft transport stream to node 1 established I161223 09:50:10.832852 27883 storage/replica_command.go:3290 [s1,r2/1:{"a"-/Max},@c4202d3200] change replicas: read existing descriptor range_id:2 start_key:"a" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 I161223 09:50:10.844503 28195 storage/replica.go:2364 [s1,r2/1:{"a"-/Max},@c4202d3200] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}] I161223 09:50:34.815408 28015 gossip/gossip.go:1144 [n3] node has connected to cluster via gossip I161223 09:50:40.881979 27931 gossip/gossip.go:1144 [n1] node has connected to cluster via gossip I161223 09:50:41.896217 27918 gossip/gossip.go:1144 [n2] node has connected to cluster via gossip I161223 09:50:56.177098 28002 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:54608->127.0.0.1:50563: use of closed network connection 
I161223 09:50:56.177432 28085 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:56607->127.0.0.1:38822: use of closed network connection W161223 09:50:56.177601 27918 gossip/gossip.go:1130 [n2] no incoming or outgoing connections <autogenerated>:12: storage/client_raft_test.go:2985, condition failed to evaluate within 45s: expected raft leader be 1; got 2 ```
1.0
github.com/cockroachdb/cockroach/pkg/storage: TestTransferRaftLeadership failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/eee8e77c21ceeb4f62b1e59d13caeb467bd2d9df Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV=false TAGS= GOFLAGS=-race ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=99098&tab=buildLog ``` I161223 09:50:10.598200 27883 storage/store.go:1258 [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I161223 09:50:10.598480 27883 gossip/gossip.go:292 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:49880" > attrs:<> locality:<> W161223 09:50:10.632861 27883 gossip/gossip.go:1130 [n?] no incoming or outgoing connections I161223 09:50:10.643362 28036 gossip/client.go:125 [n2] started gossip client to 127.0.0.1:49880 I161223 09:50:10.653485 27883 gossip/gossip.go:292 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:54608" > attrs:<> locality:<> W161223 09:50:10.667143 27883 gossip/gossip.go:1130 [n?] 
no incoming or outgoing connections I161223 09:50:10.691012 27883 storage/store.go:1258 [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available I161223 09:50:10.691236 27883 gossip/gossip.go:292 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:56607" > attrs:<> locality:<> I161223 09:50:10.712430 28078 gossip/client.go:125 [n3] started gossip client to 127.0.0.1:49880 I161223 09:50:10.792921 27894 storage/replica_command.go:2389 [s1,r1/1:/M{in-ax},@c4202d2d80] initiating a split of this range at key "a" [r2] I161223 09:50:10.804480 27883 storage/replica_raftstorage.go:468 [s1,r2/1:{"a"-/Max},@c4202d3200] generated preemptive snapshot e672a148 at index 11 I161223 09:50:10.806174 27883 storage/store.go:3278 [s1,r2/1:{"a"-/Max},@c4202d3200] streamed snapshot: kv pairs: 28, log entries: 1 I161223 09:50:10.807256 27898 storage/replica_raftstorage.go:633 [s2,r2/?:{-},@c4200a2000] applying preemptive snapshot at index 11 (id=e672a148, encoded size=3477, 1 rocksdb batches, 1 log entries) I161223 09:50:10.808162 27898 storage/replica_raftstorage.go:641 [s2,r2/?:{"a"-/Max},@c4200a2000] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I161223 09:50:10.810311 27883 storage/replica_command.go:3290 [s1,r2/1:{"a"-/Max},@c4202d3200] change replicas: read existing descriptor range_id:2 start_key:"a" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 W161223 09:50:10.814028 28128 storage/stores.go:218 range not contained in one range: [/Meta2/Max,"a\x00"), but have [/Min,"a") I161223 09:50:10.821230 28162 storage/replica.go:2364 [s1,r2/1:{"a"-/Max},@c4202d3200] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}] W161223 09:50:10.821550 28046 storage/intent_resolver.go:338 [n1,s1,r1/1:{/Min-"a"}]: failed to push during intent resolution: failed to push "change-replica" id=bdb0681d 
key=/Local/Range/"a"/RangeDescriptor rw=true pri=0.01855749 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,110 orig=0.000000123,110 max=0.000000123,110 wto=false rop=false I161223 09:50:10.826386 27883 storage/replica_raftstorage.go:468 [s1,r2/1:{"a"-/Max},@c4202d3200] generated preemptive snapshot b108b497 at index 14 I161223 09:50:10.828264 27883 storage/store.go:3278 [s1,r2/1:{"a"-/Max},@c4202d3200] streamed snapshot: kv pairs: 30, log entries: 4 I161223 09:50:10.829301 28096 storage/replica_raftstorage.go:633 [s3,r2/?:{-},@c420192480] applying preemptive snapshot at index 14 (id=b108b497, encoded size=4624, 1 rocksdb batches, 4 log entries) I161223 09:50:10.830490 28096 storage/replica_raftstorage.go:641 [s3,r2/?:{"a"-/Max},@c420192480] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I161223 09:50:10.832431 28164 storage/raft_transport.go:437 raft transport stream to node 1 established I161223 09:50:10.832852 27883 storage/replica_command.go:3290 [s1,r2/1:{"a"-/Max},@c4202d3200] change replicas: read existing descriptor range_id:2 start_key:"a" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 I161223 09:50:10.844503 28195 storage/replica.go:2364 [s1,r2/1:{"a"-/Max},@c4202d3200] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}] I161223 09:50:34.815408 28015 gossip/gossip.go:1144 [n3] node has connected to cluster via gossip I161223 09:50:40.881979 27931 gossip/gossip.go:1144 [n1] node has connected to cluster via gossip I161223 09:50:41.896217 27918 gossip/gossip.go:1144 [n2] node has connected to cluster via gossip I161223 09:50:56.177098 28002 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:54608->127.0.0.1:50563: use of closed network connection 
I161223 09:50:56.177432 28085 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:56607->127.0.0.1:38822: use of closed network connection W161223 09:50:56.177601 27918 gossip/gossip.go:1130 [n2] no incoming or outgoing connections <autogenerated>:12: storage/client_raft_test.go:2985, condition failed to evaluate within 45s: expected raft leader be 1; got 2 ```
non_process
github com cockroachdb cockroach pkg storage testtransferraftleadership failed under stress sha parameters cockroach proposer evaluated kv false tags goflags race stress build found a failed test storage store go failed initial metrics computation system config not yet available gossip gossip go nodedescriptor set to node id address attrs locality gossip gossip go no incoming or outgoing connections gossip client go started gossip client to gossip gossip go nodedescriptor set to node id address attrs locality gossip gossip go no incoming or outgoing connections storage store go failed initial metrics computation system config not yet available gossip gossip go nodedescriptor set to node id address attrs locality gossip client go started gossip client to storage replica command go initiating a split of this range at key a storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key a end key replicas next replica id storage stores go range not contained in one range max a but have min a storage replica go proposing add replica nodeid storeid replicaid storage intent resolver go failed to push during intent resolution failed to push change replica id key local range a rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage raft transport go raft transport stream to node established storage replica command 
go change replicas read existing descriptor range id start key a end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid gossip gossip go node has connected to cluster via gossip gossip gossip go node has connected to cluster via gossip gossip gossip go node has connected to cluster via gossip vendor google golang org grpc transport server go transport handlestreams failed to read frame read tcp use of closed network connection vendor google golang org grpc transport server go transport handlestreams failed to read frame read tcp use of closed network connection gossip gossip go no incoming or outgoing connections storage client raft test go condition failed to evaluate within expected raft leader be got
0
96,470
10,933,628,627
IssuesEvent
2019-11-24 04:00:33
bigblueberry/SomethingLikeCAS
https://api.github.com/repos/bigblueberry/SomethingLikeCAS
closed
Documentation, code refactoring
Refactoring documentation help wanted
1. The logic is (inevitably) complex. For someone of outstanding talent like me it is quickly understood, but the code is not intuitively beautiful and is not convenient to write or debug. 2. So, while preserving the logic, the code must be promptly refactored into something beautiful. 3. And for the sake of project extensibility, the logic and the main classes must be documented.
1.0
Documentation, code refactoring - 1. The logic is (inevitably) complex. For someone of outstanding talent like me it is quickly understood, but the code is not intuitively beautiful and is not convenient to write or debug. 2. So, while preserving the logic, the code must be promptly refactored into something beautiful. 3. And for the sake of project extensibility, the logic and the main classes must be documented.
non_process
documentation code refactoring the logic is inevitably complex for someone of outstanding talent like me it is quickly understood but the code is not intuitively beautiful and is not convenient to write or debug so while preserving the logic the code must be promptly refactored into something beautiful and for the sake of project extensibility the logic and the main classes must be documented
0
100,327
30,673,440,430
IssuesEvent
2023-07-26 01:48:37
microsoft/msquic
https://api.github.com/repos/microsoft/msquic
closed
QUIC tools are compiled with RUNPATH set to an absolute path
Area: Build OS: Ubuntu
### Describe the bug RUNPATH is set to ` /home/runner/work/msquic/msquic/artifacts/bin/linux/x64_Debug_openssl`, which means the tools will search for libmsquic in ` /home/runner/work/msquic/msquic/artifacts/bin/linux/x64_Debug_openssl`. ``` jedihy@yi-devbox:~/good-linux/bin/linux/x64_Debug_openssl$ objdump -p spinquic spinquic: file format elf64-x86-64 Program Header: PHDR off 0x0000000000000040 vaddr 0x0000000000000040 paddr 0x0000000000000040 align 2**3 filesz 0x0000000000000268 memsz 0x0000000000000268 flags r-- INTERP off 0x00000000000002a8 vaddr 0x00000000000002a8 paddr 0x00000000000002a8 align 2**0 filesz 0x000000000000001c memsz 0x000000000000001c flags r-- LOAD off 0x0000000000000000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**12 filesz 0x000000000062a420 memsz 0x000000000062a420 flags r-- LOAD off 0x000000000062b000 vaddr 0x000000000062b000 paddr 0x000000000062b000 align 2**12 filesz 0x000000000051573d memsz 0x000000000051573d flags r-x LOAD off 0x0000000000b41000 vaddr 0x0000000000b41000 paddr 0x0000000000b41000 align 2**12 filesz 0x0000000000118be0 memsz 0x0000000000118be0 flags r-- LOAD off 0x0000000000c59e88 vaddr 0x0000000000c5ae88 paddr 0x0000000000c5ae88 align 2**12 filesz 0x0000000000710d38 memsz 0x0000000000717d68 flags rw- DYNAMIC off 0x0000000000c894e0 vaddr 0x0000000000c8a4e0 paddr 0x0000000000c8a4e0 align 2**3 filesz 0x00000000000002c0 memsz 0x00000000000002c0 flags rw- NOTE off 0x00000000000002c4 vaddr 0x00000000000002c4 paddr 0x00000000000002c4 align 2**2 filesz 0x0000000000000044 memsz 0x0000000000000044 flags r-- EH_FRAME off 0x0000000000c09620 vaddr 0x0000000000c09620 paddr 0x0000000000c09620 align 2**2 filesz 0x000000000000e4c4 memsz 0x000000000000e4c4 flags r-- STACK off 0x0000000000000000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**4 filesz 0x0000000000000000 memsz 0x0000000000000000 flags rw- RELRO off 0x0000000000c59e88 vaddr 0x0000000000c5ae88 paddr 0x0000000000c5ae88 align 2**0 filesz 
0x0000000000030178 memsz 0x0000000000030178 flags r-- Dynamic Section: NEEDED libasan.so.5 NEEDED libmsquic.so.2 NEEDED libdl.so.2 NEEDED libatomic.so.1 NEEDED libnuma.so.1 NEEDED libstdc++.so.6 NEEDED libm.so.6 NEEDED libubsan.so.1 NEEDED libgcc_s.so.1 NEEDED libpthread.so.0 NEEDED libc.so.6 RUNPATH /home/runner/work/msquic/msquic/artifacts/bin/linux/x64_Debug_openssl INIT 0x000000000062b000 FINI 0x0000000000b40730 PREINIT_ARRAY 0x0000000000c5ae88 PREINIT_ARRAYSZ 0x0000000000000008 INIT_ARRAY 0x0000000000c5ae90 INIT_ARRAYSZ 0x0000000000000e68 FINI_ARRAY 0x0000000000c5bcf8 FINI_ARRAYSZ 0x0000000000000e08 GNU_HASH 0x0000000000000308 STRTAB 0x0000000000001b58 SYMTAB 0x0000000000000340 STRSZ 0x00000000000010cf SYMENT 0x0000000000000018 DEBUG 0x0000000000000000 PLTGOT 0x0000000000c8a7a0 PLTRELSZ 0x00000000000016c8 PLTREL 0x0000000000000007 JMPREL 0x0000000000628d58 RELA 0x0000000000003030 RELASZ 0x0000000000625d28 RELAENT 0x0000000000000018 FLAGS 0x0000000000000008 FLAGS_1 0x0000000008000001 VERNEED 0x0000000000002e30 VERNEEDNUM 0x0000000000000007 VERSYM 0x0000000000002c28 RELACOUNT 0x0000000000041929 Version References: required from libgcc_s.so.1: 0x0b792650 0x00 25 GCC_3.0 required from libdl.so.2: 0x09691a75 0x00 15 GLIBC_2.2.5 required from libstdc++.so.6: 0x056bafd3 0x00 24 CXXABI_1.3 0x0297f870 0x00 22 GLIBCXX_3.4.20 0x0297f861 0x00 13 GLIBCXX_3.4.11 0x08922974 0x00 08 GLIBCXX_3.4 required from libnuma.so.1: 0x0c45db42 0x00 09 libnuma_1.2 0x0c45db41 0x00 07 libnuma_1.1 required from libpthread.so.0: 0x09691973 0x00 14 GLIBC_2.3.3 0x09691974 0x00 11 GLIBC_2.3.4 0x09691972 0x00 10 GLIBC_2.3.2 0x09691a75 0x00 05 GLIBC_2.2.5 required from libmsquic.so.2: 0x074a8bf3 0x00 04 msquic required from libc.so.6: 0x06969197 0x00 26 GLIBC_2.17 0x06969195 0x00 23 GLIBC_2.15 0x0d696916 0x00 21 GLIBC_2.6 0x0d696919 0x00 20 GLIBC_2.9 0x0d696918 0x00 19 GLIBC_2.8 0x0d696917 0x00 18 GLIBC_2.7 0x0d696914 0x00 17 GLIBC_2.4 0x0d696913 0x00 16 GLIBC_2.3 0x09691972 0x00 12 GLIBC_2.3.2 
0x06969185 0x00 06 GLIBC_2.25 0x09691a75 0x00 03 GLIBC_2.2.5 0x09691974 0x00 02 GLIBC_2.3.4 ``` If we run the tools on a machine that's different than the one that builds the tools, it will fail to find the libraries. So, we can't grab the tools from the build artifacts and run it locally. We also can't directly use build artifacts from container builds for test workflow because the paths are different. The build logs seem to indicate that we set rpath to the same folder where tools are located. However, objdump doesn't show rpath. Though, I don't see it in the output of objdump. Instead, I see RUNPATH set to that path. ``` cd /root/msquic/build/linux/x64_openssl/src/tools/pcp && /usr/bin/cmake -E cmake_link_script CMakeFiles/quicpcp.dir/link.txt --verbose=1 /usr/bin/c++ -Og -fno-omit-frame-pointer -ggdb3 CMakeFiles/quicpcp.dir/pcp.cpp.o -o /root/msquic/artifacts/bin/linux/x64_Debug_openssl/quicpcp -Wl,-rpath,/root/msquic/artifacts/bin/linux/x64_Debug_openssl /root/msquic/artifacts/bin/linux/x64_Debug_openssl/libmsquic.so.2.3.0 ../../../obj/Debug/libplatform.a ../../../obj/Debug/liblogging.a ../../../obj/Debug/libcore.a ../../../obj/Debug/libplatform.a ../../../_deps/opensslquic-build/openssl/lib/libssl.a ../../../_deps/opensslquic-build/openssl/lib/libcrypto.a -pthread -ldl /usr/lib/x86_64-linux-gnu/libatomic.so.1 /usr/lib/x86_64-linux-gnu/libnuma.so.1 ``` The ideal solution would be to use a special path `$ORIGIN` as the rpath so we can always find the library in the same folder where the binary is. We cannot simply use a relative path here because a relative path means it's relative to the current working folder. I made a few attempts to set rpath to `$ORIGIN` in `src\tools\CMakeLists.txt` but I couldn't get it to work. ### Affected OS - [ ] Windows - [X] Linux - [ ] macOS - [ ] Other (specify below) ### Additional OS information _No response_ ### MsQuic version main ### Steps taken to reproduce bug 1. Build tools in one machine. 2.
Run the tools in the other machine. ### Expected behavior Tools should work. ### Actual outcome They don't work. ### Additional details _No response_
1.0
QUIC tools are compiled with RUNPATH set to an absolute path - ### Describe the bug RUNPATH is set to ` /home/runner/work/msquic/msquic/artifacts/bin/linux/x64_Debug_openssl`, which means the tools will search for libmsquic in ` /home/runner/work/msquic/msquic/artifacts/bin/linux/x64_Debug_openssl`. ``` jedihy@yi-devbox:~/good-linux/bin/linux/x64_Debug_openssl$ objdump -p spinquic spinquic: file format elf64-x86-64 Program Header: PHDR off 0x0000000000000040 vaddr 0x0000000000000040 paddr 0x0000000000000040 align 2**3 filesz 0x0000000000000268 memsz 0x0000000000000268 flags r-- INTERP off 0x00000000000002a8 vaddr 0x00000000000002a8 paddr 0x00000000000002a8 align 2**0 filesz 0x000000000000001c memsz 0x000000000000001c flags r-- LOAD off 0x0000000000000000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**12 filesz 0x000000000062a420 memsz 0x000000000062a420 flags r-- LOAD off 0x000000000062b000 vaddr 0x000000000062b000 paddr 0x000000000062b000 align 2**12 filesz 0x000000000051573d memsz 0x000000000051573d flags r-x LOAD off 0x0000000000b41000 vaddr 0x0000000000b41000 paddr 0x0000000000b41000 align 2**12 filesz 0x0000000000118be0 memsz 0x0000000000118be0 flags r-- LOAD off 0x0000000000c59e88 vaddr 0x0000000000c5ae88 paddr 0x0000000000c5ae88 align 2**12 filesz 0x0000000000710d38 memsz 0x0000000000717d68 flags rw- DYNAMIC off 0x0000000000c894e0 vaddr 0x0000000000c8a4e0 paddr 0x0000000000c8a4e0 align 2**3 filesz 0x00000000000002c0 memsz 0x00000000000002c0 flags rw- NOTE off 0x00000000000002c4 vaddr 0x00000000000002c4 paddr 0x00000000000002c4 align 2**2 filesz 0x0000000000000044 memsz 0x0000000000000044 flags r-- EH_FRAME off 0x0000000000c09620 vaddr 0x0000000000c09620 paddr 0x0000000000c09620 align 2**2 filesz 0x000000000000e4c4 memsz 0x000000000000e4c4 flags r-- STACK off 0x0000000000000000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**4 filesz 0x0000000000000000 memsz 0x0000000000000000 flags rw- RELRO off 0x0000000000c59e88 vaddr 
0x0000000000c5ae88 paddr 0x0000000000c5ae88 align 2**0 filesz 0x0000000000030178 memsz 0x0000000000030178 flags r-- Dynamic Section: NEEDED libasan.so.5 NEEDED libmsquic.so.2 NEEDED libdl.so.2 NEEDED libatomic.so.1 NEEDED libnuma.so.1 NEEDED libstdc++.so.6 NEEDED libm.so.6 NEEDED libubsan.so.1 NEEDED libgcc_s.so.1 NEEDED libpthread.so.0 NEEDED libc.so.6 RUNPATH /home/runner/work/msquic/msquic/artifacts/bin/linux/x64_Debug_openssl INIT 0x000000000062b000 FINI 0x0000000000b40730 PREINIT_ARRAY 0x0000000000c5ae88 PREINIT_ARRAYSZ 0x0000000000000008 INIT_ARRAY 0x0000000000c5ae90 INIT_ARRAYSZ 0x0000000000000e68 FINI_ARRAY 0x0000000000c5bcf8 FINI_ARRAYSZ 0x0000000000000e08 GNU_HASH 0x0000000000000308 STRTAB 0x0000000000001b58 SYMTAB 0x0000000000000340 STRSZ 0x00000000000010cf SYMENT 0x0000000000000018 DEBUG 0x0000000000000000 PLTGOT 0x0000000000c8a7a0 PLTRELSZ 0x00000000000016c8 PLTREL 0x0000000000000007 JMPREL 0x0000000000628d58 RELA 0x0000000000003030 RELASZ 0x0000000000625d28 RELAENT 0x0000000000000018 FLAGS 0x0000000000000008 FLAGS_1 0x0000000008000001 VERNEED 0x0000000000002e30 VERNEEDNUM 0x0000000000000007 VERSYM 0x0000000000002c28 RELACOUNT 0x0000000000041929 Version References: required from libgcc_s.so.1: 0x0b792650 0x00 25 GCC_3.0 required from libdl.so.2: 0x09691a75 0x00 15 GLIBC_2.2.5 required from libstdc++.so.6: 0x056bafd3 0x00 24 CXXABI_1.3 0x0297f870 0x00 22 GLIBCXX_3.4.20 0x0297f861 0x00 13 GLIBCXX_3.4.11 0x08922974 0x00 08 GLIBCXX_3.4 required from libnuma.so.1: 0x0c45db42 0x00 09 libnuma_1.2 0x0c45db41 0x00 07 libnuma_1.1 required from libpthread.so.0: 0x09691973 0x00 14 GLIBC_2.3.3 0x09691974 0x00 11 GLIBC_2.3.4 0x09691972 0x00 10 GLIBC_2.3.2 0x09691a75 0x00 05 GLIBC_2.2.5 required from libmsquic.so.2: 0x074a8bf3 0x00 04 msquic required from libc.so.6: 0x06969197 0x00 26 GLIBC_2.17 0x06969195 0x00 23 GLIBC_2.15 0x0d696916 0x00 21 GLIBC_2.6 0x0d696919 0x00 20 GLIBC_2.9 0x0d696918 0x00 19 GLIBC_2.8 0x0d696917 0x00 18 GLIBC_2.7 0x0d696914 0x00 17 GLIBC_2.4 
0x0d696913 0x00 16 GLIBC_2.3 0x09691972 0x00 12 GLIBC_2.3.2 0x06969185 0x00 06 GLIBC_2.25 0x09691a75 0x00 03 GLIBC_2.2.5 0x09691974 0x00 02 GLIBC_2.3.4 ``` If we run the tools on a machine that's different than the one that builds the tools, it will fail to find the libraries. So, we can't grab the tools from the build artifacts and run it locally. We also can't directly use build artifacts from container builds for test workflow because the paths are different. The build logs seem to indicate that we set rpath to the same folder where tools are located. However, objdump doesn't show rpath. Though, I don't see it in the output of objdump. Instead, I see RUNPATH set to that path. ``` cd /root/msquic/build/linux/x64_openssl/src/tools/pcp && /usr/bin/cmake -E cmake_link_script CMakeFiles/quicpcp.dir/link.txt --verbose=1 /usr/bin/c++ -Og -fno-omit-frame-pointer -ggdb3 CMakeFiles/quicpcp.dir/pcp.cpp.o -o /root/msquic/artifacts/bin/linux/x64_Debug_openssl/quicpcp -Wl,-rpath,/root/msquic/artifacts/bin/linux/x64_Debug_openssl /root/msquic/artifacts/bin/linux/x64_Debug_openssl/libmsquic.so.2.3.0 ../../../obj/Debug/libplatform.a ../../../obj/Debug/liblogging.a ../../../obj/Debug/libcore.a ../../../obj/Debug/libplatform.a ../../../_deps/opensslquic-build/openssl/lib/libssl.a ../../../_deps/opensslquic-build/openssl/lib/libcrypto.a -pthread -ldl /usr/lib/x86_64-linux-gnu/libatomic.so.1 /usr/lib/x86_64-linux-gnu/libnuma.so.1 ``` The ideal solution would be to use a special path `$ORIGIN` as the rpath so we can always find the library in the same folder where the binary is. We cannot simply use a relative path here because a relative path means it's relative to the current working folder. I made a few attempts to set rpath to `$ORIGIN` in `src\tools\CMakeLists.txt` but I couldn't get it to work. ### Affected OS - [ ] Windows - [X] Linux - [ ] macOS - [ ] Other (specify below) ### Additional OS information _No response_ ### MsQuic version main ### Steps taken to reproduce bug 1.
Build tools in one machine. 2. Run the tools in the other machine. ### Expected behavior Tools should work. ### Actual outcome They don't work. ### Additional details _No response_
non_process
quic tools are compiled with runpath set to an absolute path describe the bug runpath is set to home runner work msquic msquic artifacts bin linux debug openssl which means the tools will search for libmsquic in home runner work msquic msquic artifacts bin linux debug openssl jedihy yi devbox good linux bin linux debug openssl objdump p spinquic spinquic file format program header phdr off vaddr paddr align filesz memsz flags r interp off vaddr paddr align filesz memsz flags r load off vaddr paddr align filesz memsz flags r load off vaddr paddr align filesz memsz flags r x load off vaddr paddr align filesz memsz flags r load off vaddr paddr align filesz memsz flags rw dynamic off vaddr paddr align filesz memsz flags rw note off vaddr paddr align filesz memsz flags r eh frame off vaddr paddr align filesz memsz flags r stack off vaddr paddr align filesz memsz flags rw relro off vaddr paddr align filesz memsz flags r dynamic section needed libasan so needed libmsquic so needed libdl so needed libatomic so needed libnuma so needed libstdc so needed libm so needed libubsan so needed libgcc s so needed libpthread so needed libc so runpath home runner work msquic msquic artifacts bin linux debug openssl init fini preinit array preinit arraysz init array init arraysz fini array fini arraysz gnu hash strtab symtab strsz syment debug pltgot pltrelsz pltrel jmprel rela relasz relaent flags flags verneed verneednum versym relacount version references required from libgcc s so gcc required from libdl so glibc required from libstdc so cxxabi glibcxx glibcxx glibcxx required from libnuma so libnuma libnuma required from libpthread so glibc glibc glibc glibc required from libmsquic so msquic required from libc so glibc glibc glibc glibc glibc glibc glibc glibc glibc glibc glibc glibc if we run the tools on a machine that s different than the one that builds the tools it will fail to find the libraries so we can t grab the tools from the build artifacts and run it locally we also 
can t directly use build artifacts from container builds for test workflow because the paths are different the build logs seem to indicate that we set rpath to the same folder where tools are located however objdump doesn t show rpath though i don t see it in the output of objdump instead i see runpath set to that path cd root msquic build linux openssl src tools pcp usr bin cmake e cmake link script cmakefiles quicpcp dir link txt verbose usr bin c og fno omit frame pointer cmakefiles quicpcp dir pcp cpp o o root msquic artifacts bin linux debug openssl quicpcp wl rpath root msquic artifacts bin linux debug openssl root msquic artifacts bin linux debug openssl libmsquic so obj debug libplatform a obj debug liblogging a obj debug libcore a obj debug libplatform a deps opensslquic build openssl lib libssl a deps opensslquic build openssl lib libcrypto a pthread ldl usr lib linux gnu libatomic so usr lib linux gnu libnuma so the ideal solution would be to use a special path origin as the rpath so we can always find the library in the same folder where the binary is we cannot simply use a relative path here because a relative path means it s relative to the current working folder i made a few attempts to set rpath to origin in src tools cmakelists txt but i couldn t get it to work affected os windows linux macos other specify below additional os information no response msquic version main steps taken to reproduce bug build tools in one machine run the tools in the other machine expected behavior tools should work actual outcome they don t work additional details no response
0
174,475
14,483,851,358
IssuesEvent
2020-12-10 15:38:31
AlinTudi98/webCrawler
https://api.github.com/repos/AlinTudi98/webCrawler
closed
Documentatie clasa PageCrawler
documentation
Documentarea clasei **PageCrawler**, a membrilor si metodelor prezente in cadrul acesteia.
1.0
Documentatie clasa PageCrawler - Documentarea clasei **PageCrawler**, a membrilor si metodelor prezente in cadrul acesteia.
non_process
documentatie clasa pagecrawler documentarea clasei pagecrawler a membrilor si metodelor prezente in cadrul acesteia
0
18,143
24,186,573,391
IssuesEvent
2022-09-23 13:47:27
cloudfoundry/korifi
https://api.github.com/repos/cloudfoundry/korifi
closed
[Feature]: Developer can push apps using the top-level `command` field in the manifest
Top-level process config
### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the sources of an application (e.g. `tests/smoke/assets/test-node-app`) **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app command: sleep 123 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 1/1 memory usage: 256M start command: sleep 123 state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: ```yaml --- applications: - name: my-app command: sleep 456 processes: type: web command: sleep 123 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above
1.0
[Feature]: Developer can push apps using the top-level `command` field in the manifest - ### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the sources of an application (e.g. `tests/smoke/assets/test-node-app`) **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app command: sleep 123 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 1/1 memory usage: 256M start command: sleep 123 state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: ```yaml --- applications: - name: my-app command: sleep 456 processes: type: web command: sleep 123 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above
process
developer can push apps using the top level command field in the manifest background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the sources of an application e g tests smoke assets test node app and manifest yml looks like this yaml applications name my app command sleep when i cf push then i see the push succeeds with an output similar to this name test requested state started routes test vcap me last uploaded mon aug utc stack buildpacks name version detect output buildpack name nodejs buildpack nodejs nodejs type web sidecars instances memory usage start command sleep state since cpu memory disk details running of of given i have the same app with the following manifest yaml applications name my app command sleep processes type web command sleep when i cf push then i see the push succeeds with the same output as above
1
102,546
8,848,442,071
IssuesEvent
2019-01-08 07:00:35
vavr-io/vavr
https://api.github.com/repos/vavr-io/vavr
closed
Shrinking in PropertyCheck: find a minimal counter example
feature «vavr-test»
If a property is falsified with a counter example of a specific size then branch in a subroutine, which tries to find a minimum counter example. This can be accomplished by subsequently run the test with sizes from 0 (or 1) to the actual size. I did this manually for finding a RedBlackTree bug. It would be helpful, if this is done by the property check framework out of the box.
1.0
Shrinking in PropertyCheck: find a minimal counter example - If a property is falsified with a counter example of a specific size then branch in a subroutine, which tries to find a minimum counter example. This can be accomplished by subsequently run the test with sizes from 0 (or 1) to the actual size. I did this manually for finding a RedBlackTree bug. It would be helpful, if this is done by the property check framework out of the box.
non_process
shrinking in propertycheck find a minimal counter example if a property is falsified with a counter example of a specific size then branch in a subroutine which tries to find a minimum counter example this can be accomplished by subsequently run the test with sizes from or to the actual size i did this manually for finding a redblacktree bug it would be helpful if this is done by the property check framework out of the box
0
8,188
11,386,902,519
IssuesEvent
2020-01-29 14:08:20
googleapis/google-cloud-cpp-common
https://api.github.com/repos/googleapis/google-cloud-cpp-common
closed
CI builds are slow due to long Docker image build times
type: process
I've noticed a number of CI builds taking an abnormally long time to run... usual had been ~20 minutes but occasionally builds are taking almost an hour to complete. I looked at logs from some of these and the bulk of the time (even in "normal" ~20 minute builds) appears to be building a Docker image. These should be cached, so perhaps the cache is misconfigured. [example 1](https://source.cloud.google.com/results/invocations/2406779a-54b3-4a46-b9a1-e198db684e2f/targets/cloud-cpp%2Fgithub%2Fgoogle-cloud-cpp-common%2Fmaster%2Fdocker%2Fninja-presubmit/log) - 50m21s overall, 47m36s spent here: ``` Creating Docker image with all the development tools Thu Jan 23 12:57:42 PST 2020. Logging to cmake-out/ci-ubuntu-install-18.04-ninja/create-build-docker-image.log ================================================================ Detecting the branch name Thu Jan 23 13:45:18 PST 2020. ``` [example 2](https://source.cloud.google.com/results/invocations/5284f571-23d2-4d54-ab64-e058fce99eba/targets/cloud-cpp%2Fgithub%2Fgoogle-cloud-cpp-common%2Fmaster%2Fdocker%2Fcheck-api-presubmit/log) - 50m45s overall, 47m39s spent here: ``` Creating Docker image with all the development tools Mon Jan 27 10:07:32 PST 2020. Logging to cmake-out/ci-ubuntu-install-18.04-check-api/create-build-docker-image.log ================================================================ Detecting the branch name Mon Jan 27 10:55:11 PST 2020. ``` [example 3](https://source.cloud.google.com/results/invocations/68c7223b-51ed-456d-bb07-61cedd958455/targets/cloud-cpp%2Fgithub%2Fgoogle-cloud-cpp-common%2Fmaster%2Fdocker%2Fninja-presubmit/log) - 21m01s overall, 18m23s here: ``` Creating Docker image with all the development tools Thu Jan 23 13:52:07 PST 2020. Logging to cmake-out/ci-ubuntu-install-18.04-ninja/create-build-docker-image.log ================================================================ Detecting the branch name Thu Jan 23 14:10:30 PST 2020. ```
1.0
CI builds are slow due to long Docker image build times - I've noticed a number of CI builds taking an abnormally long time to run... usual had been ~20 minutes but occasionally builds are taking almost an hour to complete. I looked at logs from some of these and the bulk of the time (even in "normal" ~20 minute builds) appears to be building a Docker image. These should be cached, so perhaps the cache is misconfigured. [example 1](https://source.cloud.google.com/results/invocations/2406779a-54b3-4a46-b9a1-e198db684e2f/targets/cloud-cpp%2Fgithub%2Fgoogle-cloud-cpp-common%2Fmaster%2Fdocker%2Fninja-presubmit/log) - 50m21s overall, 47m36s spent here: ``` Creating Docker image with all the development tools Thu Jan 23 12:57:42 PST 2020. Logging to cmake-out/ci-ubuntu-install-18.04-ninja/create-build-docker-image.log ================================================================ Detecting the branch name Thu Jan 23 13:45:18 PST 2020. ``` [example 2](https://source.cloud.google.com/results/invocations/5284f571-23d2-4d54-ab64-e058fce99eba/targets/cloud-cpp%2Fgithub%2Fgoogle-cloud-cpp-common%2Fmaster%2Fdocker%2Fcheck-api-presubmit/log) - 50m45s overall, 47m39s spent here: ``` Creating Docker image with all the development tools Mon Jan 27 10:07:32 PST 2020. Logging to cmake-out/ci-ubuntu-install-18.04-check-api/create-build-docker-image.log ================================================================ Detecting the branch name Mon Jan 27 10:55:11 PST 2020. ``` [example 3](https://source.cloud.google.com/results/invocations/68c7223b-51ed-456d-bb07-61cedd958455/targets/cloud-cpp%2Fgithub%2Fgoogle-cloud-cpp-common%2Fmaster%2Fdocker%2Fninja-presubmit/log) - 21m01s overall, 18m23s here: ``` Creating Docker image with all the development tools Thu Jan 23 13:52:07 PST 2020. Logging to cmake-out/ci-ubuntu-install-18.04-ninja/create-build-docker-image.log ================================================================ Detecting the branch name Thu Jan 23 14:10:30 PST 2020. ```
process
ci builds are slow due to long docker image build times i ve noticed a number of ci builds taking an abnormally long time to run usual had been minutes but occasionally builds are taking almost an hour to complete i looked at logs from some of these and the bulk of the time even in normal minute builds appears to be building a docker image these should be cached so perhaps the cache is misconfigured overall spent here creating docker image with all the development tools thu jan pst logging to cmake out ci ubuntu install ninja create build docker image log detecting the branch name thu jan pst overall spent here creating docker image with all the development tools mon jan pst logging to cmake out ci ubuntu install check api create build docker image log detecting the branch name mon jan pst overall here creating docker image with all the development tools thu jan pst logging to cmake out ci ubuntu install ninja create build docker image log detecting the branch name thu jan pst
1
17,384
23,201,351,851
IssuesEvent
2022-08-01 21:54:23
celo-org/celo-monorepo
https://api.github.com/repos/celo-org/celo-monorepo
closed
ODIS E2E tests to run in CI
release-process infra-and-monitoring ci Component: ODIS Component: Identity Deprioritised
As an ODIS developer, I should be able to easily identify E2E issues before production deployment E2E tests will provide us better protection against regression going forward. Need to investigate how best to run this since it will require a deployment.
1.0
ODIS E2E tests to run in CI - As an ODIS developer, I should be able to easily identify E2E issues before production deployment E2E tests will provide us better protection against regression going forward. Need to investigate how best to run this since it will require a deployment.
process
odis tests to run in ci as an odis developer i should be able to easily identify issues before production deployment tests will provide us better protection against regression going forward need to investigate how best to run this since it will require a deployment
1
666,417
22,354,829,413
IssuesEvent
2022-06-15 14:50:30
why-science/demo
https://api.github.com/repos/why-science/demo
opened
Edit welcome email - Remove info about modules assigned
high priority
Since itadmin@whyscience assigned modules after an account is created modules are no longer automatically assigned. Remove the text circled in the welcome emails for Coach and instructor ![image](https://user-images.githubusercontent.com/70287522/173857370-4e24bc03-fc56-4dcf-8ba2-41b5c5bf5df2.png)
1.0
Edit welcome email - Remove info about modules assigned - Since itadmin@whyscience assigned modules after an account is created modules are no longer automatically assigned. Remove the text circled in the welcome emails for Coach and instructor ![image](https://user-images.githubusercontent.com/70287522/173857370-4e24bc03-fc56-4dcf-8ba2-41b5c5bf5df2.png)
non_process
edit welcome email remove info about modules assigned since itadmin whyscience assigned modules after an account is created modules are no longer automatically assigned remove the text circled in the welcome emails for coach and instructor
0
740,655
25,761,858,313
IssuesEvent
2022-12-08 21:17:02
fgpv-vpgf/contributed-plugins
https://api.github.com/repos/fgpv-vpgf/contributed-plugins
opened
Swiper bug beneath expanded WMS legends
bug priority - high
When expanding the legend of WMS services and briging the swiper beneath the layer panel (open or close) it freezes the swiper. It will not work anymore. Unless you hide back the legend. ![image](https://user-images.githubusercontent.com/91547284/206568597-38982238-c412-404f-bc95-62f709364951.png) ![image](https://user-images.githubusercontent.com/91547284/206568614-c06c2eb2-6fa0-4733-a212-db1488fe1122.png) Hiding the legend brings the swiper back to life: ![image](https://user-images.githubusercontent.com/91547284/206568862-36a9fed8-364f-489a-90be-55589c3892f4.png) ![image](https://user-images.githubusercontent.com/91547284/206568999-a4bb77aa-1d81-4c81-8439-e0d679c293ac.png) The issue was first raised with in a config using a structured legend type with an expanded legend by default (see attached).
1.0
Swiper bug beneath expanded WMS legends - When expanding the legend of WMS services and briging the swiper beneath the layer panel (open or close) it freezes the swiper. It will not work anymore. Unless you hide back the legend. ![image](https://user-images.githubusercontent.com/91547284/206568597-38982238-c412-404f-bc95-62f709364951.png) ![image](https://user-images.githubusercontent.com/91547284/206568614-c06c2eb2-6fa0-4733-a212-db1488fe1122.png) Hiding the legend brings the swiper back to life: ![image](https://user-images.githubusercontent.com/91547284/206568862-36a9fed8-364f-489a-90be-55589c3892f4.png) ![image](https://user-images.githubusercontent.com/91547284/206568999-a4bb77aa-1d81-4c81-8439-e0d679c293ac.png) The issue was first raised with in a config using a structured legend type with an expanded legend by default (see attached).
non_process
swiper bug beneath expanded wms legends when expanding the legend of wms services and briging the swiper beneath the layer panel open or close it freezes the swiper it will not work anymore unless you hide back the legend hiding the legend brings the swiper back to life the issue was first raised with in a config using a structured legend type with an expanded legend by default see attached
0
22,708
32,035,474,149
IssuesEvent
2023-09-22 15:03:07
googleapis/python-gsuiteaddons
https://api.github.com/repos/googleapis/python-gsuiteaddons
closed
Warning: a recent release failed
type: process api: gsuiteaddons
The following release PRs may have failed: * #51 - The release job was triggered, but has not reported back success.
1.0
Warning: a recent release failed - The following release PRs may have failed: * #51 - The release job was triggered, but has not reported back success.
process
warning a recent release failed the following release prs may have failed the release job was triggered but has not reported back success
1
274,245
29,951,063,649
IssuesEvent
2023-06-23 01:10:13
dreamboy9/fuchsia
https://api.github.com/repos/dreamboy9/fuchsia
opened
WS-2023-0196 (Medium) detected in multiple libraries
Mend: dependency security vulnerability
## WS-2023-0196 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>memoffset-0.6.1.crate</b>, <b>memoffset-0.5.4.crate</b>, <b>memoffset-0.5.3.crate</b>, <b>memoffset-0.2.1.crate</b>, <b>memoffset-0.5.1.crate</b>, <b>memoffset-0.5.5.crate</b></p></summary> <p> <details><summary><b>memoffset-0.6.1.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.6.1/download">https://crates.io/api/v1/crates/memoffset/0.6.1/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.4.crate (Root Library) - rayon-1.5.0.crate - rayon-core-1.9.0.crate - crossbeam-deque-0.8.0.crate - crossbeam-epoch-0.9.3.crate - :x: **memoffset-0.6.1.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.4.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.4/download">https://crates.io/api/v1/crates/memoffset/0.5.4/download</a></p> <p> Dependency Hierarchy: - tokio-threadpool-0.1.18.crate (Root Library) - crossbeam-deque-0.7.3.crate - crossbeam-epoch-0.8.2.crate - :x: **memoffset-0.5.4.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.3.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.3/download">https://crates.io/api/v1/crates/memoffset/0.5.3/download</a></p> <p> Dependency Hierarchy: - :x: **memoffset-0.5.3.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.2.1.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.2.1/download">https://crates.io/api/v1/crates/memoffset/0.2.1/download</a></p> <p> Dependency Hierarchy: 
- criterion-0.3.0.crate (Root Library) - rayon-1.1.0.crate - crossbeam-deque-0.6.3.crate - crossbeam-epoch-0.7.1.crate - :x: **memoffset-0.2.1.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.1.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.1/download">https://crates.io/api/v1/crates/memoffset/0.5.1/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.0.crate (Root Library) - rayon-1.2.0.crate - crossbeam-deque-0.7.1.crate - crossbeam-epoch-0.7.2.crate - :x: **memoffset-0.5.1.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.5.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.5/download">https://crates.io/api/v1/crates/memoffset/0.5.5/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.2.crate (Root Library) - rayon-1.3.1.crate - rayon-core-1.7.1.crate - crossbeam-deque-0.7.3.crate - crossbeam-epoch-0.8.2.crate - :x: **memoffset-0.5.5.crate** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/dreamboy9/fuchsia/commit/4ec0c406a28f193fe6e7376ee7696cca0532d4ba">4ec0c406a28f193fe6e7376ee7696cca0532d4ba</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' 
width=19 height=20> Vulnerability Details</summary> <p> memoffset allows reading uninitialized memory <p>Publish Date: 2023-06-22 <p>URL: <a href=https://github.com/Gilnaa/memoffset/commit/576166bb63d238d97d5a4979d3484bc82c9bde3e>WS-2023-0196</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wfg4-322g-9vqv">https://github.com/advisories/GHSA-wfg4-322g-9vqv</a></p> <p>Release Date: 2023-06-22</p> <p>Fix Resolution: memoffset - 0.6.2 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2023-0196 (Medium) detected in multiple libraries - ## WS-2023-0196 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>memoffset-0.6.1.crate</b>, <b>memoffset-0.5.4.crate</b>, <b>memoffset-0.5.3.crate</b>, <b>memoffset-0.2.1.crate</b>, <b>memoffset-0.5.1.crate</b>, <b>memoffset-0.5.5.crate</b></p></summary> <p> <details><summary><b>memoffset-0.6.1.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.6.1/download">https://crates.io/api/v1/crates/memoffset/0.6.1/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.4.crate (Root Library) - rayon-1.5.0.crate - rayon-core-1.9.0.crate - crossbeam-deque-0.8.0.crate - crossbeam-epoch-0.9.3.crate - :x: **memoffset-0.6.1.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.4.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.4/download">https://crates.io/api/v1/crates/memoffset/0.5.4/download</a></p> <p> Dependency Hierarchy: - tokio-threadpool-0.1.18.crate (Root Library) - crossbeam-deque-0.7.3.crate - crossbeam-epoch-0.8.2.crate - :x: **memoffset-0.5.4.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.3.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.3/download">https://crates.io/api/v1/crates/memoffset/0.5.3/download</a></p> <p> Dependency Hierarchy: - :x: **memoffset-0.5.3.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.2.1.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a 
href="https://crates.io/api/v1/crates/memoffset/0.2.1/download">https://crates.io/api/v1/crates/memoffset/0.2.1/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.0.crate (Root Library) - rayon-1.1.0.crate - crossbeam-deque-0.6.3.crate - crossbeam-epoch-0.7.1.crate - :x: **memoffset-0.2.1.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.1.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.1/download">https://crates.io/api/v1/crates/memoffset/0.5.1/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.0.crate (Root Library) - rayon-1.2.0.crate - crossbeam-deque-0.7.1.crate - crossbeam-epoch-0.7.2.crate - :x: **memoffset-0.5.1.crate** (Vulnerable Library) </details> <details><summary><b>memoffset-0.5.5.crate</b></p></summary> <p>offset_of functionality for Rust structs.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/memoffset/0.5.5/download">https://crates.io/api/v1/crates/memoffset/0.5.5/download</a></p> <p> Dependency Hierarchy: - criterion-0.3.2.crate (Root Library) - rayon-1.3.1.crate - rayon-core-1.7.1.crate - crossbeam-deque-0.7.3.crate - crossbeam-epoch-0.8.2.crate - :x: **memoffset-0.5.5.crate** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/dreamboy9/fuchsia/commit/4ec0c406a28f193fe6e7376ee7696cca0532d4ba">4ec0c406a28f193fe6e7376ee7696cca0532d4ba</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' 
width=19 height=20> Vulnerability Details</summary> <p> memoffset allows reading uninitialized memory <p>Publish Date: 2023-06-22 <p>URL: <a href=https://github.com/Gilnaa/memoffset/commit/576166bb63d238d97d5a4979d3484bc82c9bde3e>WS-2023-0196</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wfg4-322g-9vqv">https://github.com/advisories/GHSA-wfg4-322g-9vqv</a></p> <p>Release Date: 2023-06-22</p> <p>Fix Resolution: memoffset - 0.6.2 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in multiple libraries ws medium severity vulnerability vulnerable libraries memoffset crate memoffset crate memoffset crate memoffset crate memoffset crate memoffset crate memoffset crate offset of functionality for rust structs library home page a href dependency hierarchy criterion crate root library rayon crate rayon core crate crossbeam deque crate crossbeam epoch crate x memoffset crate vulnerable library memoffset crate offset of functionality for rust structs library home page a href dependency hierarchy tokio threadpool crate root library crossbeam deque crate crossbeam epoch crate x memoffset crate vulnerable library memoffset crate offset of functionality for rust structs library home page a href dependency hierarchy x memoffset crate vulnerable library memoffset crate offset of functionality for rust structs library home page a href dependency hierarchy criterion crate root library rayon crate crossbeam deque crate crossbeam epoch crate x memoffset crate vulnerable library memoffset crate offset of functionality for rust structs library home page a href dependency hierarchy criterion crate root library rayon crate crossbeam deque crate crossbeam epoch crate x memoffset crate vulnerable library memoffset crate offset of functionality for rust structs library home page a href dependency hierarchy criterion crate root library rayon crate rayon core crate crossbeam deque crate crossbeam epoch crate x memoffset crate vulnerable library found in head commit a href found in base branch master vulnerability details memoffset allows reading uninitialized memory publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date 
fix resolution memoffset step up your open source security game with mend
0
7,217
10,347,033,702
IssuesEvent
2019-09-04 16:27:01
googleapis/nodejs-bigquery
https://api.github.com/repos/googleapis/nodejs-bigquery
closed
bigquery: confirm federated sheets support range property
type: process
For some time, the BigQuery backend has supported an additional "range" property when defining a federated table against Google Drive (via GoogleSheetsOptions). Please verify this property is available to users of the library. With the next discovery release, the "beta" label for this functionality will be removed. Docstring for the range field: [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
1.0
bigquery: confirm federated sheets support range property - For some time, the BigQuery backend has supported an additional "range" property when defining a federated table against Google Drive (via GoogleSheetsOptions). Please verify this property is available to users of the library. With the next discovery release, the "beta" label for this functionality will be removed. Docstring for the range field: [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
process
bigquery confirm federated sheets support range property for some time the bigquery backend has supported an additional range property when defining a federated table against google drive via googlesheetsoptions please verify this property is available to users of the library with the next discovery release the beta label for this functionality will be removed docstring for the range field range of a sheet to query from only used when non empty typical format sheet name top left cell id bottom right cell id for example
1
19,355
25,489,587,748
IssuesEvent
2022-11-26 22:13:43
OpenAsar/arrpc
https://api.github.com/repos/OpenAsar/arrpc
closed
add process scanning?
enhancement process
undecided on whether to add or not as it isn't really RPC but loosely related (not in `discord_rpc` but in `discord_game_utils` for Discord's own Native modules, for example). would expand the scope quite a bit but might be worth it. **please react/comment if you want!** it would be optional/disableable if added to eliminate privacy/etc concerns.
1.0
add process scanning? - undecided on whether to add or not as it isn't really RPC but loosely related (not in `discord_rpc` but in `discord_game_utils` for Discord's own Native modules, for example). would expand the scope quite a bit but might be worth it. **please react/comment if you want!** it would be optional/disableable if added to eliminate privacy/etc concerns.
process
add process scanning undecided on whether to add or not as it isn t really rpc but loosely related not in discord rpc but in discord game utils for discord s own native modules for example would expand the scope quite a bit but might be worth it please react comment if you want it would be optional disableable if added to eliminate privacy etc concerns
1
17,155
22,716,204,372
IssuesEvent
2022-07-06 02:27:13
quark-engine/quark-engine
https://api.github.com/repos/quark-engine/quark-engine
closed
Update README for the recently released features.
issue-processing-state-06
Recently, the team added many features to Quark (Rule Viewer, Web Report, and RadioContrast API). To illustrate the power of these features, we documented them in README with examples. However, it also causes some problems with the file. 1. Outdated information. - For example, the command introduced in the Detail Report section is wrong. 2. Lengthy content. - Users need to scroll eight times to find the installation steps. 3. Unclear layout. - The file does not have an overview of the Quark features. Hence, we may need a complete update on this file, including **1) cleaning up outdated information**, **2) simplifying the content**, and **3) adjusting the layout**.
1.0
Update README for the recently released features. - Recently, the team added many features to Quark (Rule Viewer, Web Report, and RadioContrast API). To illustrate the power of these features, we documented them in README with examples. However, it also causes some problems with the file. 1. Outdated information. - For example, the command introduced in the Detail Report section is wrong. 2. Lengthy content. - Users need to scroll eight times to find the installation steps. 3. Unclear layout. - The file does not have an overview of the Quark features. Hence, we may need a complete update on this file, including **1) cleaning up outdated information**, **2) simplifying the content**, and **3) adjusting the layout**.
process
update readme for the recently released features recently the team added many features to quark rule viewer web report and radiocontrast api to illustrate the power of these features we documented them in readme with examples however it also causes some problems with the file outdated information for example the command introduced in the detail report section is wrong lengthy content users need to scroll eight times to find the installation steps unclear layout the file does not have an overview of the quark features hence we may need a complete update on this file including cleaning up outdated information simplifying the content and adjusting the layout
1
80,716
15,586,320,047
IssuesEvent
2021-03-18 01:40:39
soumya132/java-code
https://api.github.com/repos/soumya132/java-code
closed
CVE-2018-14718 (High) detected in jackson-databind-2.8.1.jar - autoclosed
security vulnerability
## CVE-2018-14718 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/java-code/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-jersey-1.4.0.RELEASE.jar (Root Library) - :x: **jackson-databind-2.8.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/soumya132/java-code/commit/bd6deafa717543d14242a80f30b2189c4dfe4f6c">bd6deafa717543d14242a80f30b2189c4dfe4f6c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization. 
<p>Publish Date: 2019-01-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.9.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-14718 (High) detected in jackson-databind-2.8.1.jar - autoclosed - ## CVE-2018-14718 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/java-code/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-jersey-1.4.0.RELEASE.jar (Root Library) - :x: **jackson-databind-2.8.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/soumya132/java-code/commit/bd6deafa717543d14242a80f30b2189c4dfe4f6c">bd6deafa717543d14242a80f30b2189c4dfe4f6c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization. 
<p>Publish Date: 2019-01-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.9.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm java code pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter jersey release jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before might allow remote attackers to execute arbitrary code by leveraging failure to block the ext class from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
116,872
15,024,485,786
IssuesEvent
2021-02-01 19:42:18
filecoin-project/slate
https://api.github.com/repos/filecoin-project/slate
closed
Add autoplay as a configurable option when viewing videos.
Design
There should be a checkbox when you're editing one of your videos, that checkbox should indicate whether or not you want the video to play automatically or wait.
1.0
Add autoplay as a configurable option when viewing videos. - There should be a checkbox when you're editing one of your videos, that checkbox should indicate whether or not you want the video to play automatically or wait.
non_process
add autoplay as a configurable option when viewing videos there should be a checkbox when you re editing one of your videos that checkbox should indicate whether or not you want the video to play automatically or wait
0
27,479
4,056,863,996
IssuesEvent
2016-05-24 20:06:28
pydata/pandas
https://api.github.com/repos/pydata/pandas
closed
Idea: use df.index/df.columns names to automatically choose axis along which to broadcast
API Design Indexing
In writing some math code in pandas, I find it necessary to do things like df2 = df.sub(ser, axis='columns') instead of the shorter and more intuitive df2 = df - ser in order to control the axis along which the series is broadcast. I think it would be a big improvement syntactically if pandas would automatically broadcast down the axis that didn't have a matching name. Example: df = pd.DataFrame(np.random.rand(3, 2), columns=['a', 'b']) df.index.name = 'dim0' df.columns.name = 'dim1' df dim1 a b dim0 0 0.755744 0.321682 1 0.915464 0.413154 2 0.647672 0.457927 subtract: df - df['a'] # does not give the desired result a b 0 1 2 dim0 0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN subtract, specifying which axis to match on (broadcasting happens on the other axis): df.sub(df['a'], axis='index') # gives the desired result dim1 a b dim0 0 0.0 -0.434062 1 0.0 -0.502310 2 0.0 -0.189745 I am suggesting that the "-" operator would look at the names of the indices in the operands and match on the axis that has the same name in the two operands. By way of motivation, I'm doing mass spectral matching of compounds, so I could name my indices 'chemical' and 'mass'.
1.0
Idea: use df.index/df.columns names to automatically choose axis along which to broadcast - In writing some math code in pandas, I find it necessary to do things like df2 = df.sub(ser, axis='columns') instead of the shorter and more intuitive df2 = df - ser in order to control the axis along which the series is broadcast. I think it would be a big improvement syntactically if pandas would automatically broadcast down the axis that didn't have a matching name. Example: df = pd.DataFrame(np.random.rand(3, 2), columns=['a', 'b']) df.index.name = 'dim0' df.columns.name = 'dim1' df dim1 a b dim0 0 0.755744 0.321682 1 0.915464 0.413154 2 0.647672 0.457927 subtract: df - df['a'] # does not give the desired result a b 0 1 2 dim0 0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN subtract, specifying which axis to match on (broadcasting happens on the other axis): df.sub(df['a'], axis='index') # gives the desired result dim1 a b dim0 0 0.0 -0.434062 1 0.0 -0.502310 2 0.0 -0.189745 I am suggesting that the "-" operator would look at the names of the indices in the operands and match on the axis that has the same name in the two operands. By way of motivation, I'm doing mass spectral matching of compounds, so I could name my indices 'chemical' and 'mass'.
non_process
idea use df index df columns names to automatically choose axis along which to broadcast in writing some math code in pandas i find it necessary to do things like df sub ser axis columns instead of the shorter and more intuitive df ser in order to control the axis along which the series is broadcast i think it would be a big improvement syntactically if pandas would automatically broadcast down the axis that didn t have a matching name example df pd dataframe np random rand columns df index name df columns name df a b subtract df df does not give the desired result a b nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan subtract specifying which axis to match on broadcasting happens on the other axis df sub df axis index gives the desired result a b i am suggesting that the operator would look at the names of the indices in the operands and match on the axis that has the same name in the two operands by way of motivation i m doing mass spectral matching of compounds so i could name my indices chemical and mass
0
339,242
30,385,737,606
IssuesEvent
2023-07-13 00:24:41
redhat-developer/odo
https://api.github.com/repos/redhat-developer/odo
closed
Pipeline as a code on IBMCloud for test
area/testing lifecycle/stale lifecycle/rotten area/infra needs-triage
/kind tests /area infra As a QE I want to create a script to automate Pipelines on IBM Cloud. **NOTES**: - The latest IBM Cloud modules for Ansible now include the IBM Cloud toolchain, with which we could manage our jobs as code instead of through the UI. See https://github.com/IBM-Cloud/ansible-collection-ibm/commit/5e275599630c4a228b3f2b8d2029ce51bf774a65
1.0
Pipeline as a code on IBMCloud for test - /kind tests /area infra As a QE I want to create a script to automate Pipelines on IBM Cloud. **NOTES**: - The latest IBM Cloud modules for Ansible now include the IBM Cloud toolchain, with which we could manage our jobs as code instead of through the UI. See https://github.com/IBM-Cloud/ansible-collection-ibm/commit/5e275599630c4a228b3f2b8d2029ce51bf774a65
non_process
pipeline as a code on ibmcloud for test kind tests area infra as a qe i want to create a script to automate pipelines on ibm cloud notes the latest ibm cloud modules for ansible now include the ibm cloud toolchain with which we could manage our jobs as code instead of through the ui see
0
268,681
23,387,989,643
IssuesEvent
2022-08-11 15:13:21
bitcoin/bitcoin
https://api.github.com/repos/bitcoin/bitcoin
closed
test, ci: Use the documented way to test with ThreadSanitizer
Tests
According to the [ThreadSanitizer docs](https://clang.llvm.org/docs/ThreadSanitizer.html#current-status): > C++11 threading is supported with **llvm libc++**. To use the ThreadSanitizer in the documented way we should build from depends with `clang++ -stdlib=libc++` (see #18820). Related to #19024.
1.0
test, ci: Use the documented way to test with ThreadSanitizer - According to the [ThreadSanitizer docs](https://clang.llvm.org/docs/ThreadSanitizer.html#current-status): > C++11 threading is supported with **llvm libc++**. To use the ThreadSanitizer in the documented way we should build from depends with `clang++ -stdlib=libc++` (see #18820). Related to #19024.
non_process
test ci use the documented way to test with threadsanitizer according to the c threading is supported with llvm libc to use the threadsanitizer in the documented way we should build from depends with clang stdlib libc see related to
0
10,423
26,960,366,525
IssuesEvent
2023-02-08 17:42:09
NIEM/NTAC
https://api.github.com/repos/NIEM/NTAC
closed
Combining JSON Schemas
NIEM 6 Architecture
Can we combine JSON schemas? Dr. Scott couldn't, but Christina has had success. Christina: Need to add the extra stuff like `choice` to the metamodel to support community content. Dr. Scott: Is there more than just like a sub group? Christina, Jim Cabral, Mike Hulme all say yes. Christina: Can add keywords to JSON Schema.
1.0
Combining JSON Schemas - Can we combine JSON schemas? Dr. Scott couldn't, but Christina has had success. Christina: Need to add the extra stuff like `choice` to the metamodel to support community content. Dr. Scott: Is there more than just like a sub group? Christina, Jim Cabral, Mike Hulme all say yes. Christina: Can add keywords to JSON Schema.
non_process
combining json schemas can we combine json schemas dr scott couldn t but christina has had success christina need to add the extra stuff like choice to the metamodel to support community content dr scott is there more than just like a sub group christina jim cabral mike hulme all say yes christina can add keywords to json schema
0
299,480
22,608,391,468
IssuesEvent
2022-06-29 15:01:26
gefyrahq/gefyra
https://api.github.com/repos/gefyrahq/gefyra
closed
Add guide for docker desktop
documentation
The getting started guide does only partially work for docker desktop since there is no open port or ingress installed. Some docs for that would probably really help.
1.0
Add guide for docker desktop - The getting started guide does only partially work for docker desktop since there is no open port or ingress installed. Some docs for that would probably really help.
non_process
add guide for docker desktop the getting started guide does only partially work for docker desktop since there is no open port or ingress installed some docs for that would probably really help
0
350,804
31,932,334,427
IssuesEvent
2023-09-19 08:17:04
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix jax_random.test_jax_poisson
JAX Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5648283567"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix jax_random.test_jax_poisson - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5648283567"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5639189771/job/15274082116"><img src=https://img.shields.io/badge/-success-success></a>
non_process
fix jax random test jax poisson tensorflow a href src jax a href src numpy a href src torch a href src paddle a href src
0
7,357
10,500,736,717
IssuesEvent
2019-09-26 11:12:32
nodejs/security-wg
https://api.github.com/repos/nodejs/security-wg
closed
Update on program eligibility settings
process
This is mostly an FYI but an important one since it came as a feedback from a hacker submitting a report on H1 and expecting a bounty. Apparently, now that we have the bounty supported when creating new scopes of projects we need to manually toggle off the "eligible for bounty". I don't think a whole lot of us has access to do that but keep in mind anyway to make sure it isn't turned on. ![image](https://user-images.githubusercontent.com/316371/65362597-e0997a00-dbbc-11e9-9482-b486b091ba3c.png) On the same note, I went ahead and enabled/created the following scopes for bounties per what's currently supported: ![image](https://user-images.githubusercontent.com/316371/65362680-3c640300-dbbd-11e9-806f-c17c27f4b38b.png) -- We'll go ahead and close this issue in a week's time as this is more of an announcement to make sure everyone are on the loop.
1.0
Update on program eligibility settings - This is mostly an FYI but an important one since it came as a feedback from a hacker submitting a report on H1 and expecting a bounty. Apparently, now that we have the bounty supported when creating new scopes of projects we need to manually toggle off the "eligible for bounty". I don't think a whole lot of us has access to do that but keep in mind anyway to make sure it isn't turned on. ![image](https://user-images.githubusercontent.com/316371/65362597-e0997a00-dbbc-11e9-9482-b486b091ba3c.png) On the same note, I went ahead and enabled/created the following scopes for bounties per what's currently supported: ![image](https://user-images.githubusercontent.com/316371/65362680-3c640300-dbbd-11e9-806f-c17c27f4b38b.png) -- We'll go ahead and close this issue in a week's time as this is more of an announcement to make sure everyone are on the loop.
process
update on program eligibility settings this is mostly an fyi but an important one since it came as a feedback from a hacker submitting a report on and expecting a bounty apparently now that we have the bounty supported when creating new scopes of projects we need to manually toggle off the eligible for bounty i don t think a whole lot of us has access to do that but keep in mind anyway to make sure it isn t turned on on the same note i went ahead and enabled created the following scopes for bounties per what s currently supported we ll go ahead and close this issue in a week s time as this is more of an announcement to make sure everyone are on the loop
1
20,536
27,191,856,339
IssuesEvent
2023-02-19 22:02:07
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Starblast Five from "Reboot" (Screenshots added)
suggested title in process
Please add as much of the following info as you can: Title: Starblast Five Type (film/tv show): TV show - sci fi Film or show in which it appears: Reboot Is the parent film/show streaming anywhere? Yes - Hulu About when in the parent film/show does it appear? Ep. 1x01 - "Step Right Up" Actual footage of the film/show can be seen (yes/no)? Yes Timestamp: 4:01 - 4:11 Cast: Bree Marie Jensen ![Starblast Five 1](https://user-images.githubusercontent.com/88982629/202834953-2e0addef-84ae-474f-a96b-2eef98b96115.jpg) ![Starblast Five 2](https://user-images.githubusercontent.com/88982629/202834955-5b803da2-e024-4792-a394-4bcacdec5971.jpg) ![Starblast Five 3](https://user-images.githubusercontent.com/88982629/202834956-1df9cedb-b59c-4c19-897b-31f9f8db5300.jpg)
1.0
Add Starblast Five from "Reboot" (Screenshots added) - Please add as much of the following info as you can: Title: Starblast Five Type (film/tv show): TV show - sci fi Film or show in which it appears: Reboot Is the parent film/show streaming anywhere? Yes - Hulu About when in the parent film/show does it appear? Ep. 1x01 - "Step Right Up" Actual footage of the film/show can be seen (yes/no)? Yes Timestamp: 4:01 - 4:11 Cast: Bree Marie Jensen ![Starblast Five 1](https://user-images.githubusercontent.com/88982629/202834953-2e0addef-84ae-474f-a96b-2eef98b96115.jpg) ![Starblast Five 2](https://user-images.githubusercontent.com/88982629/202834955-5b803da2-e024-4792-a394-4bcacdec5971.jpg) ![Starblast Five 3](https://user-images.githubusercontent.com/88982629/202834956-1df9cedb-b59c-4c19-897b-31f9f8db5300.jpg)
process
add starblast five from reboot screenshots added please add as much of the following info as you can title starblast five type film tv show tv show sci fi film or show in which it appears reboot is the parent film show streaming anywhere yes hulu about when in the parent film show does it appear ep step right up actual footage of the film show can be seen yes no yes timestamp cast bree marie jensen
1
241,197
26,256,698,249
IssuesEvent
2023-01-06 01:49:27
terranceguz/terranceguz-atom
https://api.github.com/repos/terranceguz/terranceguz-atom
opened
WS-2018-0650 (High) detected in useragent-2.3.0.tgz
security vulnerability
## WS-2018-0650 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.3.0.tgz</b></p></summary> <p>Fastest, most accurate & effecient user agent string parser, uses Browserscope's research for parsing</p> <p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz">https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz</a></p> <p>Path to dependency file: /repo-with-submodules/You-Dont-Need-jQuery/package.json</p> <p>Path to vulnerable library: /tmp/git/atom/spec/fixtures/git/repo-with-submodules/You-Dont-Need-jQuery/node_modules/useragent/package.json</p> <p> Dependency Hierarchy: - karma-0.13.22.tgz (Root Library) - :x: **useragent-2.3.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0. <p>Publish Date: 2018-02-27 <p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0650">https://nvd.nist.gov/vuln/detail/WS-2018-0650</a></p> <p>Release Date: 2018-02-27</p> <p>Fix Resolution: NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.4;JetBrains.Rider.Frontend5 - 213.0.20211008.154703-eap03;MIDIator.WebClient - 1.0.105</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2018-0650 (High) detected in useragent-2.3.0.tgz - ## WS-2018-0650 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.3.0.tgz</b></p></summary> <p>Fastest, most accurate & effecient user agent string parser, uses Browserscope's research for parsing</p> <p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz">https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz</a></p> <p>Path to dependency file: /repo-with-submodules/You-Dont-Need-jQuery/package.json</p> <p>Path to vulnerable library: /tmp/git/atom/spec/fixtures/git/repo-with-submodules/You-Dont-Need-jQuery/node_modules/useragent/package.json</p> <p> Dependency Hierarchy: - karma-0.13.22.tgz (Root Library) - :x: **useragent-2.3.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0. <p>Publish Date: 2018-02-27 <p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0650">https://nvd.nist.gov/vuln/detail/WS-2018-0650</a></p> <p>Release Date: 2018-02-27</p> <p>Fix Resolution: NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.4;JetBrains.Rider.Frontend5 - 213.0.20211008.154703-eap03;MIDIator.WebClient - 1.0.105</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws high detected in useragent tgz ws high severity vulnerability vulnerable library useragent tgz fastest most accurate effecient user agent string parser uses browserscope s research for parsing library home page a href path to dependency file repo with submodules you dont need jquery package json path to vulnerable library tmp git atom spec fixtures git repo with submodules you dont need jquery node modules useragent package json dependency hierarchy karma tgz root library x useragent tgz vulnerable library vulnerability details regular expression denial of service redos vulnerability was found in useragent through publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nordron angulartemplate dotnetng template jetbrains rider midiator webclient step up your open source security game with mend
0
65,784
7,919,667,316
IssuesEvent
2018-07-04 18:12:43
MicrosoftDocs/feedback
https://api.github.com/repos/MicrosoftDocs/feedback
closed
Some search results contain markup
assigned-to-pm by design logged-request
E.g. https://docs.microsoft.com/en-us/dotnet/api/?view=netframework-4.7&term=ImeProcessedKey shows the following in the description: Gets the keyboard key referenced by the event, if the key will be processed by an [!INCLUDE[TLA#tla_ime](~/includes/tlasharptla-ime-md.md)]. which includes something I think shouldn't be part of the search result. ---------- ⚠ Idea migrated from UserVoice **Created By:** Anonymous **Created On:** 2018/02/26 13:30:08 +0000 **Votes at Migration:** 1 **Supporters at Migration:** 1
1.0
Some search results contain markup - E.g. https://docs.microsoft.com/en-us/dotnet/api/?view=netframework-4.7&term=ImeProcessedKey shows the following in the description: Gets the keyboard key referenced by the event, if the key will be processed by an [!INCLUDE[TLA#tla_ime](~/includes/tlasharptla-ime-md.md)]. which includes something I think shouldn't be part of the search result. ---------- ⚠ Idea migrated from UserVoice **Created By:** Anonymous **Created On:** 2018/02/26 13:30:08 +0000 **Votes at Migration:** 1 **Supporters at Migration:** 1
non_process
some search results contain markup e g shows the following in the description gets the keyboard key referenced by the event if the key will be processed by an includes tlasharptla ime md md which includes something i think shouldn t be part of the search result ⚠ idea migrated from uservoice created by anonymous created on votes at migration supporters at migration
0
19,151
25,228,229,722
IssuesEvent
2022-11-14 17:32:45
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
opened
Hypothesis VCF strategies
process + tools IO
It would be useful to use [Hypothesis](https://hypothesis.readthedocs.io/en/latest/index.html) to generate VCF files for testing `vcf_to_zarr` and the forthcoming `zarr_to_vcf` (#924) functions. There is already test code in sgkit for [generating VCF files](https://github.com/pystatgen/sgkit/blob/main/sgkit/tests/io/vcf/vcf_generator.py), but currently this is only used to generate a [static VCF file](https://github.com/pystatgen/sgkit/blob/main/sgkit/tests/io/vcf/data/all_fields.vcf). It should be possible to repurpose the generator to use Hypothesis, which excels at finding tests cases humans don't think of. It might be worth putting this in its own project at some point as it should be standalone and may be generally useful. (@ravwojdyla suggested something similar in #23.)
1.0
Hypothesis VCF strategies - It would be useful to use [Hypothesis](https://hypothesis.readthedocs.io/en/latest/index.html) to generate VCF files for testing `vcf_to_zarr` and the forthcoming `zarr_to_vcf` (#924) functions. There is already test code in sgkit for [generating VCF files](https://github.com/pystatgen/sgkit/blob/main/sgkit/tests/io/vcf/vcf_generator.py), but currently this is only used to generate a [static VCF file](https://github.com/pystatgen/sgkit/blob/main/sgkit/tests/io/vcf/data/all_fields.vcf). It should be possible to repurpose the generator to use Hypothesis, which excels at finding tests cases humans don't think of. It might be worth putting this in its own project at some point as it should be standalone and may be generally useful. (@ravwojdyla suggested something similar in #23.)
process
hypothesis vcf strategies it would be useful to use to generate vcf files for testing vcf to zarr and the forthcoming zarr to vcf functions there is already test code in sgkit for but currently this is only used to generate a it should be possible to repurpose the generator to use hypothesis which excels at finding tests cases humans don t think of it might be worth putting this in its own project at some point as it should be standalone and may be generally useful ravwojdyla suggested something similar in
1
14,178
17,089,088,379
IssuesEvent
2021-07-08 15:11:24
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Why an Environment can only have resources like Kubernetes or a Virtual Machine?
cba devops-cicd-process/tech devops/prod product-feedback
Hi what if my env is only a single web app or multiple web apps? Shouldn't i write my pipeline as code(not only build but also release stages)? What if my Staging env was composed by multiple web apps? It would be nice to have full history divided by resource. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77d95db6-9983-7346-d0eb-4b7443e4e252 * Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087 * Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops) * Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Why an Environment can only have resources like Kubernetes or a Virtual Machine? - Hi what if my env is only a single web app or multiple web apps? Shouldn't i write my pipeline as code(not only build but also release stages)? What if my Staging env was composed by multiple web apps? It would be nice to have full history divided by resource. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77d95db6-9983-7346-d0eb-4b7443e4e252 * Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087 * Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops) * Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
why an environment can only have resources like kubernetes or a virtual machine hi what if my env is only a single web app or multiple web apps shouldn t i write my pipeline as code not only build but also release stages what if my staging env was composed by multiple web apps it would be nice to have full history divided by resource document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
5,831
8,665,880,233
IssuesEvent
2018-11-29 01:20:31
cityofaustin/techstack
https://api.github.com/repos/cityofaustin/techstack
closed
Draft process page for Plan Review Checklist
Content Type: Process Page Department: Public Health Site Content Size: S Team: Content
Current: chrome-extension://oemmndcbldboiebfnladdacbdfmadadm/http://austintexas.gov/sites/default/files/files/Health/Environmental/FeeChanges/Plan_Review_Checklist_PS_100117.pdf Drive:
1.0
Draft process page for Plan Review Checklist - Current: chrome-extension://oemmndcbldboiebfnladdacbdfmadadm/http://austintexas.gov/sites/default/files/files/Health/Environmental/FeeChanges/Plan_Review_Checklist_PS_100117.pdf Drive:
process
draft process page for plan review checklist current chrome extension oemmndcbldboiebfnladdacbdfmadadm drive
1
14,189
17,093,799,915
IssuesEvent
2021-07-08 21:30:34
googleapis/python-storage
https://api.github.com/repos/googleapis/python-storage
closed
Use 'testing/constraints-{python_version}.txt' to test lower bound dependencies
api: storage type: process
The files exist, but `noxfile.py` does not use them.
1.0
Use 'testing/constraints-{python_version}.txt' to test lower bound dependencies - The files exist, but `noxfile.py` does not use them.
process
use testing constraints python version txt to test lower bound dependencies the files exist but noxfile py does not use them
1
299,106
25,882,304,764
IssuesEvent
2022-12-14 12:08:55
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Use regular `StandaloneGateway` in `ClusteringRule`
kind/toil area/test
**Description** Currently, `ClusteringRule` has custom code to build and configure a Gateway. This can lead to drift between test and production code, as seen in https://github.com/camunda/zeebe/pull/9669 which would have been caught by tests if the `ClusteringRule` would just use the regular `StandaloneGateway`, for example like we do in the `StandaloneGatewaySecurityTest`.
1.0
Use regular `StandaloneGateway` in `ClusteringRule` - **Description** Currently, `ClusteringRule` has custom code to build and configure a Gateway. This can lead to drift between test and production code, as seen in https://github.com/camunda/zeebe/pull/9669 which would have been caught by tests if the `ClusteringRule` would just use the regular `StandaloneGateway`, for example like we do in the `StandaloneGatewaySecurityTest`.
non_process
use regular standalonegateway in clusteringrule description currently clusteringrule has custom code to build and configure a gateway this can lead to drift between test and production code as seen in which would have been caught by tests if the clusteringrule would just use the regular standalonegateway for example like we do in the standalonegatewaysecuritytest
0
715,909
24,614,710,108
IssuesEvent
2022-10-15 06:38:31
HaDuve/TravelCostNative
https://api.github.com/repos/HaDuve/TravelCostNative
opened
Feedback von Tina umsetzen
1 - High Priority Frontend MetaIssue
- [ ] Categorie/Overview button ist nicht intuitiv - [ ] Einladungs-system ist unklar (Einladung muss weggehen, sobald man joined) - [ ] Profilbild ändern als "coming soon" deklarieren
1.0
Feedback von Tina umsetzen - - [ ] Categorie/Overview button ist nicht intuitiv - [ ] Einladungs-system ist unklar (Einladung muss weggehen, sobald man joined) - [ ] Profilbild ändern als "coming soon" deklarieren
non_process
feedback von tina umsetzen categorie overview button ist nicht intuitiv einladungs system ist unklar einladung muss weggehen sobald man joined profilbild ändern als coming soon deklarieren
0
290,989
25,112,991,094
IssuesEvent
2022-11-08 22:21:12
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
Failing test: Chrome X-Pack UI Plugin Functional Tests.x-pack/test/plugin_functional/test_suites/resolver - Resolver test app when the user is interacting with the node with ID: firstChild when the user hovers over the primary button when the user has clicked the primary button (which selects the node.) should render as expected
failed-test
A test failed on a tracked branch ``` Error: expected 0.09618055555555556 to be below 0.096 at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11) at Assertion.lessThan.Assertion.below (node_modules/@kbn/expect/expect.js:336:8) at Function.lessThan (node_modules/@kbn/expect/expect.js:531:15) at Context.<anonymous> (x-pack/test/plugin_functional/test_suites/resolver/index.ts:94:175) at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23337#01845924-f632-4937-94f6-efc627327f75) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Plugin Functional Tests.x-pack/test/plugin_functional/test_suites/resolver","test.name":"Resolver test app when the user is interacting with the node with ID: firstChild when the user hovers over the primary button when the user has clicked the primary button (which selects the node.) should render as expected","test.failCount":1}} -->
1.0
Failing test: Chrome X-Pack UI Plugin Functional Tests.x-pack/test/plugin_functional/test_suites/resolver - Resolver test app when the user is interacting with the node with ID: firstChild when the user hovers over the primary button when the user has clicked the primary button (which selects the node.) should render as expected - A test failed on a tracked branch ``` Error: expected 0.09618055555555556 to be below 0.096 at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11) at Assertion.lessThan.Assertion.below (node_modules/@kbn/expect/expect.js:336:8) at Function.lessThan (node_modules/@kbn/expect/expect.js:531:15) at Context.<anonymous> (x-pack/test/plugin_functional/test_suites/resolver/index.ts:94:175) at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23337#01845924-f632-4937-94f6-efc627327f75) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Plugin Functional Tests.x-pack/test/plugin_functional/test_suites/resolver","test.name":"Resolver test app when the user is interacting with the node with ID: firstChild when the user hovers over the primary button when the user has clicked the primary button (which selects the node.) should render as expected","test.failCount":1}} -->
non_process
failing test chrome x pack ui plugin functional tests x pack test plugin functional test suites resolver resolver test app when the user is interacting with the node with id firstchild when the user hovers over the primary button when the user has clicked the primary button which selects the node should render as expected a test failed on a tracked branch error expected to be below at assertion assert node modules kbn expect expect js at assertion lessthan assertion below node modules kbn expect expect js at function lessthan node modules kbn expect expect js at context x pack test plugin functional test suites resolver index ts at object apply node modules kbn test target node src functional test runner lib mocha wrap function js first failure
0
633,095
20,244,769,984
IssuesEvent
2022-02-14 12:43:38
Domiii/dbux
https://api.github.com/repos/Domiii/dbux
closed
Fix async error stack (part 13)
bug priority
* [x] fix: When error is thrown from `async` function, previous `await` nodes are tagged as errors * [x] test w/ `sequelize` -> `findOrCreate-parallel` * The circles mark nodes where error was thrown. The other nodes (BEFORE the actual error) should not be "on fire": * ![image](https://user-images.githubusercontent.com/282899/151777451-63fe16d4-5c63-4193-b640-c52ce682e665.png)
1.0
Fix async error stack (part 13) - * [x] fix: When error is thrown from `async` function, previous `await` nodes are tagged as errors * [x] test w/ `sequelize` -> `findOrCreate-parallel` * The circles mark nodes where error was thrown. The other nodes (BEFORE the actual error) should not be "on fire": * ![image](https://user-images.githubusercontent.com/282899/151777451-63fe16d4-5c63-4193-b640-c52ce682e665.png)
non_process
fix async error stack part fix when error is thrown from async function previous await nodes are tagged as errors test w sequelize findorcreate parallel the circles mark nodes where error was thrown the other nodes before the actual error should not be on fire
0
6,193
9,104,583,293
IssuesEvent
2019-02-20 18:33:27
w3c/webauthn
https://api.github.com/repos/w3c/webauthn
closed
update working draft to be "level 2"
type:editorial type:process
update working draft to be "level 2" do this along with issue #1138
1.0
update working draft to be "level 2" - update working draft to be "level 2" do this along with issue #1138
process
update working draft to be level update working draft to be level do this along with issue
1
591,905
17,864,971,295
IssuesEvent
2021-09-06 08:20:36
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.youtube.com - see bug description
browser-firefox priority-critical engine-gecko
<!-- @browser: Firefox 91.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/85789 --> **URL**: https://www.youtube.com/ **Browser / Version**: Firefox 91.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Video freezes then picks up, audio is fine **Steps to Reproduce**: Youtube videos are freezing, seems to happen in higher quality options. Audio isn't affected <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.youtube.com - see bug description - <!-- @browser: Firefox 91.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/85789 --> **URL**: https://www.youtube.com/ **Browser / Version**: Firefox 91.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Video freezes then picks up, audio is fine **Steps to Reproduce**: Youtube videos are freezing, seems to happen in higher quality options. Audio isn't affected <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description video freezes then picks up audio is fine steps to reproduce youtube videos are freezing seems to happen in higher quality options audio isn t affected browser configuration none from with ❤️
0
71,495
3,358,567,771
IssuesEvent
2015-11-19 10:10:39
readium/readium-sdk
https://api.github.com/repos/readium/readium-sdk
closed
WKWebView window.parent/top = self, new epubReadingSystem method, requires changing deep iframe injection routine
enhancement priority high [critical] Shared-JS
See: https://github.com/readium/readium-sdk/commit/669659c2e63129b818ce13c88709fc446c776e10#commitcomment-10473931
1.0
WKWebView window.parent/top = self, new epubReadingSystem method, requires changing deep iframe injection routine - See: https://github.com/readium/readium-sdk/commit/669659c2e63129b818ce13c88709fc446c776e10#commitcomment-10473931
non_process
wkwebview window parent top self new epubreadingsystem method requires changing deep iframe injection routine see
0
98,300
20,670,572,047
IssuesEvent
2022-03-10 01:22:55
OctopusDeploy/Issues
https://api.github.com/repos/OctopusDeploy/Issues
closed
ARC not supported in Git projects. Warning message is not clear about this.
kind/bug state/happening team/config-as-code
### Team - [X] I've assigned a team label to this issue ### Severity _No response_ ### Version 2022.1 ### Latest Version _No response_ ### What happened? After converting to a project to use Git, the [ARC](https://octopus.com/docs/projects/project-triggers/automatic-release-creation) settings are automatically cleared. ARC is not supported for Git projects, so this can not be configured again. There is a warning message that explains this on the convert page, but it's not clear that ARC is not supported at all in Git projects. ![Octopus version control settings page showing warning Converting a project with Automatic Release Creation (ARC) enabled is supported, but these ARC settings will be cleared](https://user-images.githubusercontent.com/4549037/156665957-4cd6bf62-189b-48bb-bf7e-fe7321e4cadb.png) ### Reproduction 1. Create a new project 2. Enable ARC on the project 3. Convert project to Git 4. ARC settings have been cleared ### Error and Stacktrace _No response_ ### More Information _No response_ ### Workaround _No response_
1.0
ARC not supported in Git projects. Warning message is not clear about this. - ### Team - [X] I've assigned a team label to this issue ### Severity _No response_ ### Version 2022.1 ### Latest Version _No response_ ### What happened? After converting to a project to use Git, the [ARC](https://octopus.com/docs/projects/project-triggers/automatic-release-creation) settings are automatically cleared. ARC is not supported for Git projects, so this can not be configured again. There is a warning message that explains this on the convert page, but it's not clear that ARC is not supported at all in Git projects. ![Octopus version control settings page showing warning Converting a project with Automatic Release Creation (ARC) enabled is supported, but these ARC settings will be cleared](https://user-images.githubusercontent.com/4549037/156665957-4cd6bf62-189b-48bb-bf7e-fe7321e4cadb.png) ### Reproduction 1. Create a new project 2. Enable ARC on the project 3. Convert project to Git 4. ARC settings have been cleared ### Error and Stacktrace _No response_ ### More Information _No response_ ### Workaround _No response_
non_process
arc not supported in git projects warning message is not clear about this team i ve assigned a team label to this issue severity no response version latest version no response what happened after converting to a project to use git the settings are automatically cleared arc is not supported for git projects so this can not be configured again there is a warning message that explains this on the convert page but it s not clear that arc is not supported at all in git projects reproduction create a new project enable arc on the project convert project to git arc settings have been cleared error and stacktrace no response more information no response workaround no response
0
22,283
30,833,601,016
IssuesEvent
2023-08-02 05:12:09
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Rewind persisted db to remove corrupted entries
question log-processing on-disk
I managed to run my background analytics job twice at the same time and now I have a bunch of data for "hits" that appears to be double counted (there is an obvious segment of the chart where the "hits" line is twice the height of before and after). I was wondering if it's possible for me to rewind the persisted database to some date so I can re-run goaccess over the original logfiles to reprocess that data? Thanks so much!
1.0
Rewind persisted db to remove corrupted entries - I managed to run my background analytics job twice at the same time and now I have a bunch of data for "hits" that appears to be double counted (there is an obvious segment of the chart where the "hits" line is twice the height of before and after). I was wondering if it's possible for me to rewind the persisted database to some date so I can re-run goaccess over the original logfiles to reprocess that data? Thanks so much!
process
rewind persisted db to remove corrupted entries i managed to run my background analytics job twice at the same time and now i have a bunch of data for hits that appears to be double counted there is an obvious segment of the chart where the hits line is twice the height of before and after i was wondering if it s possible for me to rewind the persisted database to some date so i can re run goaccess over the original logfiles to reprocess that data thanks so much
1
14,215
17,136,057,234
IssuesEvent
2021-07-13 02:24:28
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
Changes in stylusrc imported files not being picked up
:bug: Bug CSS Preprocessing ✨ Parcel 2 💰 Cache
<!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🐛 bug report <!--- Provide a general summary of the issue here --> I have a stylus file that I use to declare variables / functions. I include that file in the `imports` section of my stylusrc, so it's included automatically at the top of each styl file. I noticed that changes to this file don't trigger reloads on my server. Also I noticed that the contents of these files are not considered when looking up cached compilations. ## 🎛 Configuration (.babelrc, package.json, cli command) <!--- If describing a bug, tell us what your babel configuration looks like --> Vanilla parcel configuration, but here is my .stylusrc.js ```js const path = require('path') module.exports = { "imports": ["global.styl"], "paths": [path.join(__dirname, "src")] } ``` CLI command `parcel index.html --open` ## 🤔 Expected Behavior <!--- Tell us what should happen --> Changes in stylus global includes should trigger reloads Changes in stylus global includes should factor into which cached compilation gets served. ## 😯 Current Behavior <!--- Tell us what happens instead of the expected behavior --> Changes to stylus files included this way don't reload my dev server. Also currently, the cache isn't operating as expected. I can, say, add `html { color: red; }` in my config-included file. I can see this change if I then make a change to a stylus file that appears in the normal dependency graph (say, one pulled in by my js / html). I can just add a comment or something. If I then delete that comment, the page reloads as if a recompilation has occurred, but it will have reverted my red text. Will include an example repo below. 
<!--- If you are seeing an error, please include the full error message and stack trace --> ## 💁 Possible Solution I notice in the stylus transformer that we're doing some work with the paths. Maybe we can also iterate through the imports configuration and add them to the watchlist / dependency tree? (I'm not super familiar with the source, sorry if that's not helpful) <!--- Not obligatory, but suggest a fix/reason for the bug --> ## 🔦 Context <!--- How has this issue affected you? What are you trying to accomplish? --> Just trying to use the dev server. Having to keep restarting the server to get my updates. <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## 💻 Code Sample <!-- Please provide a code repository, gist, code snippet or sample files to reproduce the issue --> Here's a minimum recreation https://github.com/conlanpatrek/parcel-stylus-bug ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 2.0.0-nightly.623 | Node | v14.15.4 | Yarn | 1.22.10 | Operating System | macOS 10.14.6 (I know) <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
1.0
Changes in stylusrc imported files not being picked up - <!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🐛 bug report <!--- Provide a general summary of the issue here --> I have a stylus file that I use to declare variables / functions. I include that file in the `imports` section of my stylusrc, so it's included automatically at the top of each styl file. I noticed that changes to this file don't trigger reloads on my server. Also I noticed that the contents of these files are not considered when looking up cached compilations. ## 🎛 Configuration (.babelrc, package.json, cli command) <!--- If describing a bug, tell us what your babel configuration looks like --> Vanilla parcel configuration, but here is my .stylusrc.js ```js const path = require('path') module.exports = { "imports": ["global.styl"], "paths": [path.join(__dirname, "src")] } ``` CLI command `parcel index.html --open` ## 🤔 Expected Behavior <!--- Tell us what should happen --> Changes in stylus global includes should trigger reloads Changes in stylus global includes should factor into which cached compilation gets served. ## 😯 Current Behavior <!--- Tell us what happens instead of the expected behavior --> Changes to stylus files included this way don't reload my dev server. Also currently, the cache isn't operating as expected. I can, say, add `html { color: red; }` in my config-included file. I can see this change if I then make a change to a stylus file that appears in the normal dependency graph (say, one pulled in by my js / html). I can just add a comment or something. If I then delete that comment, the page reloads as if a recompilation has occurred, but it will have reverted my red text. Will include an example repo below. 
<!--- If you are seeing an error, please include the full error message and stack trace --> ## 💁 Possible Solution I notice in the stylus transformer that we're doing some work with the paths. Maybe we can also iterate through the imports configuration and add them to the watchlist / dependency tree? (I'm not super familiar with the source, sorry if that's not helpful) <!--- Not obligatory, but suggest a fix/reason for the bug --> ## 🔦 Context <!--- How has this issue affected you? What are you trying to accomplish? --> Just trying to use the dev server. Having to keep restarting the server to get my updates. <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## 💻 Code Sample <!-- Please provide a code repository, gist, code snippet or sample files to reproduce the issue --> Here's a minimum recreation https://github.com/conlanpatrek/parcel-stylus-bug ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 2.0.0-nightly.623 | Node | v14.15.4 | Yarn | 1.22.10 | Operating System | macOS 10.14.6 (I know) <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
process
changes in stylusrc imported files not being picked up thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🐛 bug report i have a stylus file that i use to declare variables functions i include that file in the imports section of my stylusrc so it s included automatically at the top of each styl file i noticed that changes to this file don t trigger reloads on my server also i noticed that the contents of these files are not considered when looking up cached compilations 🎛 configuration babelrc package json cli command vanilla parcel configuration but here is my stylusrc js js const path require path module exports imports paths cli command parcel index html open 🤔 expected behavior changes in stylus global includes should trigger reloads changes in stylus global includes should factor into which cached compilation gets served 😯 current behavior changes to stylus files included this way don t reload my dev server also currently the cache isn t operating as expected i can say add html color red in my config included file i can see this change if i then make a change to a stylus file that appears in the normal dependency graph say one pulled in by my js html i can just add a comment or something if i then delete that comment the page reloads as if a recompilation has occurred but it will have reverted my red text will include an example repo below 💁 possible solution i notice in the stylus transformer that we re doing some work with the paths maybe we can also iterate through the imports configuration and add them to the watchlist dependency tree i m not super familiar with the source sorry if that s not helpful 🔦 context just trying to use the dev server having to keep restarting the server to get my updates 💻 code sample here s a minimum recreation 🌍 your environment software version s parcel nightly node yarn operating system macos i know love 
parcel please consider supporting our collective 👉
1
12,176
14,741,948,458
IssuesEvent
2021-01-07 11:26:18
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Payment Deletion Error
anc-process anp-not prioritized ant-bug
In GitLab by @pchaudhary on Feb 27, 2019, 04:31 There is an error while we delete a payment which was printed on the invoice & the billing cycle of that invoice is reverted. This issue comes up if the account's invoice is very first.
1.0
Payment Deletion Error - In GitLab by @pchaudhary on Feb 27, 2019, 04:31 There is an error while we delete a payment which was printed on the invoice & the billing cycle of that invoice is reverted. This issue comes up if the account's invoice is very first.
process
payment deletion error in gitlab by pchaudhary on feb there is an error while we delete a payment which was printed on the invoice the billing cycle of that invoice is reverted this issue comes up if the account s invoice is very first
1
528,756
15,374,011,204
IssuesEvent
2021-03-02 13:19:45
netdata/netdata-cloud
https://api.github.com/repos/netdata/netdata-cloud
closed
[BUG] Node View not loading due to Javascript errors
bug priority/high visualizations-team-bugs
**Describe the bug** Node page not loading **To Reproduce** Click on a node detail like https://app.netdata.cloud/spaces/production-iqdkpt4/rooms/general/nodes/483ee1be-3cbb-47a0-a913-7d00facd5c96 **Expected behavior** Page should render **Error logs** [Error] TypeError: undefined is not an object (evaluating 'Object(o.a)(r,n)') hasOwnProperty (funzione anonima) — main-344126e7afd18628ea00.js:199:1483564 (funzione anonima) — main-344126e7afd18628ea00.js:236:1708661 (funzione anonima) — main-344126e7afd18628ea00.js:199:1454311 (funzione anonima) — main-344126e7afd18628ea00.js:236:1708668 (funzione anonima) — common~main~netdata_dashboard.e5eb42184e81835fbe6b.js:1:13762 v — common~main~netdata_dashboard.e5eb42184e81835fbe6b.js:1:11769 (funzione anonima) — main-344126e7afd18628ea00.js:199:1619585 (funzione anonima) — main-344126e7afd18628ea00.js:199:1612626 f — main-344126e7afd18628ea00.js:199:1610279 y — main-344126e7afd18628ea00.js:199:1610468 p — main-344126e7afd18628ea00.js:199:1610328 (funzione anonima) — main-344126e7afd18628ea00.js:199:1612581 (funzione anonima) — main-344126e7afd18628ea00.js:199:1617305 m — main-344126e7afd18628ea00.js:199:1618263 p — main-344126e7afd18628ea00.js:199:1617774 c — main-344126e7afd18628ea00.js:199:1618046 promiseReactionJob (funzione anonima) (main-344126e7afd18628ea00.js:408:3552814) y (main-344126e7afd18628ea00.js:2:233276) y (main-344126e7afd18628ea00.js:199:1616469) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) n (main-344126e7afd18628ea00.js:2:233931) c (main-344126e7afd18628ea00.js:199:1618046) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) 
(main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612652) f (main-344126e7afd18628ea00.js:199:1610279) y (main-344126e7afd18628ea00.js:199:1610468) p (main-344126e7afd18628ea00.js:199:1610328) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612581) (funzione anonima) (main-344126e7afd18628ea00.js:199:1617305) m (main-344126e7afd18628ea00.js:199:1618263) p (main-344126e7afd18628ea00.js:199:1617774) c (main-344126e7afd18628ea00.js:199:1618046) promiseReactionJob [Error] TypeError: undefined is not an object (evaluating 'Object(o.a)(r,n)') (funzione anonima) (main-344126e7afd18628ea00.js:408:3552814) y (main-344126e7afd18628ea00.js:2:233276) y (main-344126e7afd18628ea00.js:199:1616469) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) n (main-344126e7afd18628ea00.js:2:233931) c (main-344126e7afd18628ea00.js:199:1618046) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612652) f (main-344126e7afd18628ea00.js:199:1610279) y (main-344126e7afd18628ea00.js:199:1610468) p 
(main-344126e7afd18628ea00.js:199:1610328) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612581) (funzione anonima) (main-344126e7afd18628ea00.js:199:1617305) m (main-344126e7afd18628ea00.js:199:1618263) p (main-344126e7afd18628ea00.js:199:1617774) c (main-344126e7afd18628ea00.js:199:1618046) promiseReactionJob **Desktop (please complete the following information):** - OS: Mac OS Big Sur - Browser Safari - Version latest
1.0
[BUG] Node View not loading due to Javascript errors - **Describe the bug** Node page not loading **To Reproduce** Click on a node detail like https://app.netdata.cloud/spaces/production-iqdkpt4/rooms/general/nodes/483ee1be-3cbb-47a0-a913-7d00facd5c96 **Expected behavior** Page should render **Error logs** [Error] TypeError: undefined is not an object (evaluating 'Object(o.a)(r,n)') hasOwnProperty (funzione anonima) — main-344126e7afd18628ea00.js:199:1483564 (funzione anonima) — main-344126e7afd18628ea00.js:236:1708661 (funzione anonima) — main-344126e7afd18628ea00.js:199:1454311 (funzione anonima) — main-344126e7afd18628ea00.js:236:1708668 (funzione anonima) — common~main~netdata_dashboard.e5eb42184e81835fbe6b.js:1:13762 v — common~main~netdata_dashboard.e5eb42184e81835fbe6b.js:1:11769 (funzione anonima) — main-344126e7afd18628ea00.js:199:1619585 (funzione anonima) — main-344126e7afd18628ea00.js:199:1612626 f — main-344126e7afd18628ea00.js:199:1610279 y — main-344126e7afd18628ea00.js:199:1610468 p — main-344126e7afd18628ea00.js:199:1610328 (funzione anonima) — main-344126e7afd18628ea00.js:199:1612581 (funzione anonima) — main-344126e7afd18628ea00.js:199:1617305 m — main-344126e7afd18628ea00.js:199:1618263 p — main-344126e7afd18628ea00.js:199:1617774 c — main-344126e7afd18628ea00.js:199:1618046 promiseReactionJob (funzione anonima) (main-344126e7afd18628ea00.js:408:3552814) y (main-344126e7afd18628ea00.js:2:233276) y (main-344126e7afd18628ea00.js:199:1616469) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) n (main-344126e7afd18628ea00.js:2:233931) c (main-344126e7afd18628ea00.js:199:1618046) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i 
(main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612652) f (main-344126e7afd18628ea00.js:199:1610279) y (main-344126e7afd18628ea00.js:199:1610468) p (main-344126e7afd18628ea00.js:199:1610328) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612581) (funzione anonima) (main-344126e7afd18628ea00.js:199:1617305) m (main-344126e7afd18628ea00.js:199:1618263) p (main-344126e7afd18628ea00.js:199:1617774) c (main-344126e7afd18628ea00.js:199:1618046) promiseReactionJob [Error] TypeError: undefined is not an object (evaluating 'Object(o.a)(r,n)') (funzione anonima) (main-344126e7afd18628ea00.js:408:3552814) y (main-344126e7afd18628ea00.js:2:233276) y (main-344126e7afd18628ea00.js:199:1616469) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) n (main-344126e7afd18628ea00.js:2:233931) c (main-344126e7afd18628ea00.js:199:1618046) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) y (main-344126e7afd18628ea00.js:199:1616568) i (main-344126e7afd18628ea00.js:199:1615983) (funzione anonima) (main-344126e7afd18628ea00.js:199:1616070) p (main-344126e7afd18628ea00.js:199:1617838) c (main-344126e7afd18628ea00.js:199:1618046) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612652) f (main-344126e7afd18628ea00.js:199:1610279) y 
(main-344126e7afd18628ea00.js:199:1610468) p (main-344126e7afd18628ea00.js:199:1610328) (funzione anonima) (main-344126e7afd18628ea00.js:199:1612581) (funzione anonima) (main-344126e7afd18628ea00.js:199:1617305) m (main-344126e7afd18628ea00.js:199:1618263) p (main-344126e7afd18628ea00.js:199:1617774) c (main-344126e7afd18628ea00.js:199:1618046) promiseReactionJob **Desktop (please complete the following information):** - OS: Mac OS Big Sur - Browser Safari - Version latest
non_process
node view not loading due to javascript errors describe the bug node page not loading to reproduce click on a node detail like expected behavior page should render error logs typeerror undefined is not an object evaluating object o a r n hasownproperty funzione anonima — main js funzione anonima — main js funzione anonima — main js funzione anonima — main js funzione anonima — common main netdata dashboard js v — common main netdata dashboard js funzione anonima — main js funzione anonima — main js f — main js y — main js p — main js funzione anonima — main js funzione anonima — main js m — main js p — main js c — main js promisereactionjob funzione anonima main js y main js y main js i main js funzione anonima main js p main js c main js n main js c main js y main js i main js funzione anonima main js y main js i main js funzione anonima main js y main js i main js funzione anonima main js p main js c main js funzione anonima main js f main js y main js p main js funzione anonima main js funzione anonima main js m main js p main js c main js promisereactionjob typeerror undefined is not an object evaluating object o a r n funzione anonima main js y main js y main js i main js funzione anonima main js p main js c main js n main js c main js y main js i main js funzione anonima main js y main js i main js funzione anonima main js y main js i main js funzione anonima main js p main js c main js funzione anonima main js f main js y main js p main js funzione anonima main js funzione anonima main js m main js p main js c main js promisereactionjob desktop please complete the following information os mac os big sur browser safari version latest
0
14,781
18,054,918,129
IssuesEvent
2021-09-20 06:47:16
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Print process start time to processing logs while running
Processing Feature Request
**Feature description.** With long-running processing algorithms it would be nice to see the start time of the process in the processing log while the process is still running. Currently when the process finishes it does tell how long the process took, but if you have a long running process and do something else while the process is running, you might not remember when the process started (i.e. how long it has already been running. **Additional context** This is the screen where I would like to see it: ![image](https://user-images.githubusercontent.com/13899537/126447903-da08ed1b-fa0f-4110-8097-7d12554a7711.png) So a really minor thing. Just a print saying something like `"process started at 21-07-2021 09:05:00"`
1.0
Print process start time to processing logs while running - **Feature description.** With long-running processing algorithms it would be nice to see the start time of the process in the processing log while the process is still running. Currently when the process finishes it does tell how long the process took, but if you have a long running process and do something else while the process is running, you might not remember when the process started (i.e. how long it has already been running. **Additional context** This is the screen where I would like to see it: ![image](https://user-images.githubusercontent.com/13899537/126447903-da08ed1b-fa0f-4110-8097-7d12554a7711.png) So a really minor thing. Just a print saying something like `"process started at 21-07-2021 09:05:00"`
process
print process start time to processing logs while running feature description with long running processing algorithms it would be nice to see the start time of the process in the processing log while the process is still running currently when the process finishes it does tell how long the process took but if you have a long running process and do something else while the process is running you might not remember when the process started i e how long it has already been running additional context this is the screen where i would like to see it so a really minor thing just a print saying something like process started at
1
255,448
19,303,175,078
IssuesEvent
2021-12-13 08:44:37
ReceiptHero/docsHero
https://api.github.com/repos/ReceiptHero/docsHero
closed
Change Swagger UI to Redoc
documentation
Miika suggested to use [Redoc](https://github.com/Redocly/redoc) for present open API spec in documentation
1.0
Change Swagger UI to Redoc - Miika suggested to use [Redoc](https://github.com/Redocly/redoc) for present open API spec in documentation
non_process
change swagger ui to redoc miika suggested to use for present open api spec in documentation
0
3,750
6,733,151,362
IssuesEvent
2017-10-18 14:00:13
york-region-tpss/stp
https://api.github.com/repos/york-region-tpss/stp
closed
Watering Assignment - Add Options Create New and Continue to Work
enhancement process workflow ui ux
Add option to Discard, Temporarily Save options the Current Watering Assignment. Creating a new card after clicking the watering assignment master card. "Create New" will pull directly from etrans table and combined with the on-hold number from lastest assignment. "Continue to work" will pull from the temp table for both tree data and comments, on-hold numbers.
1.0
Watering Assignment - Add Options Create New and Continue to Work - Add option to Discard, Temporarily Save options the Current Watering Assignment. Creating a new card after clicking the watering assignment master card. "Create New" will pull directly from etrans table and combined with the on-hold number from lastest assignment. "Continue to work" will pull from the temp table for both tree data and comments, on-hold numbers.
process
watering assignment add options create new and continue to work add option to discard temporarily save options the current watering assignment creating a new card after clicking the watering assignment master card create new will pull directly from etrans table and combined with the on hold number from lastest assignment continue to work will pull from the temp table for both tree data and comments on hold numbers
1
8,290
11,456,197,318
IssuesEvent
2020-02-06 20:41:59
kubeflow/testing
https://api.github.com/repos/kubeflow/testing
closed
Upgrade kubectl to 1.14 in test-worker
area/engprod kind/process lifecycle/stale priority/p1
Tracking bug for upgrading kubectl to 1.14 in test-worker. This is currently blocked on kubeflow/testing#508 the katib breaks with the newer version of kubectl so we need to fix the test first. We had previously tried to upgrade the image to one with the latest version of kubectl and it broke. See https://github.com/kubeflow/testing/issues/501#issuecomment-547742939 Rolling back the latest image to gcr.kubeflow-ci/test-worker@sha256:dd559f89b3cbd926ec563559995f25025eecc6290b3146f17f82d2f084d07ee2 Earlier today we had set it to gcr.io/kubeflow-ci/test-worker:v20191029-fd24db8-e3b0c Related PR: kubeflow/testing#500
1.0
Upgrade kubectl to 1.14 in test-worker - Tracking bug for upgrading kubectl to 1.14 in test-worker. This is currently blocked on kubeflow/testing#508 the katib breaks with the newer version of kubectl so we need to fix the test first. We had previously tried to upgrade the image to one with the latest version of kubectl and it broke. See https://github.com/kubeflow/testing/issues/501#issuecomment-547742939 Rolling back the latest image to gcr.kubeflow-ci/test-worker@sha256:dd559f89b3cbd926ec563559995f25025eecc6290b3146f17f82d2f084d07ee2 Earlier today we had set it to gcr.io/kubeflow-ci/test-worker:v20191029-fd24db8-e3b0c Related PR: kubeflow/testing#500
process
upgrade kubectl to in test worker tracking bug for upgrading kubectl to in test worker this is currently blocked on kubeflow testing the katib breaks with the newer version of kubectl so we need to fix the test first we had previously tried to upgrade the image to one with the latest version of kubectl and it broke see rolling back the latest image to gcr kubeflow ci test worker earlier today we had set it to gcr io kubeflow ci test worker related pr kubeflow testing
1
597,204
18,157,939,615
IssuesEvent
2021-09-27 05:51:22
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
Disabling NET_CONFIG_AUTO_INIT does not require calling net_config_init() manually in application as mentioned in Zephyr Network Configuration Library documentation
question priority: low area: Networking
**1.** I have disabled NET_CONFIG_AUTO_INIT in prj.conf file as I want to do network configuration and initialization in application. As per documentation "If you want to use the network configuration library but without automatic initialization, you can call net_config_init() manually". Reference: https://docs.zephyrproject.org/2.6.0/reference/networking/net_config.html **But networking is working fine without calling net_config_init() manually, So the documentation contradicts with code behavior.** In application I am using both IP allocation methods alternatively, i.e DHCP and static, and running TCP echo server. For DHCP initialization below function are called: iface = net_if_get_default(); net_dhcpv4_start( iface ); For Static IP initialization below function is called: net_if_ipv4_addr_add( net_if_get_default(), &ipv4_addr, NET_ADDR_MANUAL, 0U ); Not calling net_config_init() or any other initialization function in application but still networking is getting initialized when application starts, despite of this I tried calling net_config_init() in application but its definition is not found if NET_CONFIG_SETTINGS is disabled. **Please help me to understand how the networking is getting initialized when NET_CONFIG_AUTO_INIT is disabled, net_config_init() not called manually and what configuration parameters can be controlled from application in this case. I am not calling net_config_init(), will it impact anything in networking? What is the right way to configure networking in application? **. Network configuration in prj.conf file: CONFIG_NETWORKING=y CONFIG_NET_IPV4=y CONFIG_NET_IPV6=n CONFIG_NET_TCP=y CONFIG_NET_SOCKETS=y CONFIG_NET_SOCKETS_POSIX_NAMES=y CONFIG_TEST_RANDOM_GENERATOR=y CONFIG_NET_CONFIG_SETTINGS=n CONFIG_NET_CONFIG_AUTO_INIT=n **2.** **I am getting below error messages every time while running the application and also when ethernet cable is unplugged and plugged back. 
It is not impacting the functionality but what is the cause of these errors and how to fix it?** ![image](https://user-images.githubusercontent.com/88422239/134144096-24ae0900-9798-4d12-9216-4f8c1f79a934.png) **3.** **How much time is expected to get the interface completely up when ethernet cable is connected while application is running** instead of connecting cable before starting application. I have observed that it takes few seconds to get the interface up when Ethernet cable is connected after starting the application. **Environment:** - OS: Windows - IDE : Visual Studio Code-PlatformIO - Zephyr version: 2.6.0
1.0
Disabling NET_CONFIG_AUTO_INIT does not require calling net_config_init() manually in application as mentioned in Zephyr Network Configuration Library documentation - **1.** I have disabled NET_CONFIG_AUTO_INIT in prj.conf file as I want to do network configuration and initialization in application. As per documentation "If you want to use the network configuration library but without automatic initialization, you can call net_config_init() manually". Reference: https://docs.zephyrproject.org/2.6.0/reference/networking/net_config.html **But networking is working fine without calling net_config_init() manually, So the documentation contradicts with code behavior.** In application I am using both IP allocation methods alternatively, i.e DHCP and static, and running TCP echo server. For DHCP initialization below function are called: iface = net_if_get_default(); net_dhcpv4_start( iface ); For Static IP initialization below function is called: net_if_ipv4_addr_add( net_if_get_default(), &ipv4_addr, NET_ADDR_MANUAL, 0U ); Not calling net_config_init() or any other initialization function in application but still networking is getting initialized when application starts, despite of this I tried calling net_config_init() in application but its definition is not found if NET_CONFIG_SETTINGS is disabled. **Please help me to understand how the networking is getting initialized when NET_CONFIG_AUTO_INIT is disabled, net_config_init() not called manually and what configuration parameters can be controlled from application in this case. I am not calling net_config_init(), will it impact anything in networking? What is the right way to configure networking in application? **. 
Network configuration in prj.conf file: CONFIG_NETWORKING=y CONFIG_NET_IPV4=y CONFIG_NET_IPV6=n CONFIG_NET_TCP=y CONFIG_NET_SOCKETS=y CONFIG_NET_SOCKETS_POSIX_NAMES=y CONFIG_TEST_RANDOM_GENERATOR=y CONFIG_NET_CONFIG_SETTINGS=n CONFIG_NET_CONFIG_AUTO_INIT=n **2.** **I am getting below error messages every time while running the application and also when ethernet cable is unplugged and plugged back. It is not impacting the functionality but what is the cause of these errors and how to fix it?** ![image](https://user-images.githubusercontent.com/88422239/134144096-24ae0900-9798-4d12-9216-4f8c1f79a934.png) **3.** **How much time is expected to get the interface completely up when ethernet cable is connected while application is running** instead of connecting cable before starting application. I have observed that it takes few seconds to get the interface up when Ethernet cable is connected after starting the application. **Environment:** - OS: Windows - IDE : Visual Studio Code-PlatformIO - Zephyr version: 2.6.0
non_process
disabling net config auto init does not require calling net config init manually in application as mentioned in zephyr network configuration library documentation i have disabled net config auto init in prj conf file as i want to do network configuration and initialization in application as per documentation if you want to use the network configuration library but without automatic initialization you can call net config init manually reference but networking is working fine without calling net config init manually so the documentation contradicts with code behavior in application i am using both ip allocation methods alternatively i e dhcp and static and running tcp echo server for dhcp initialization below function are called iface net if get default net start iface for static ip initialization below function is called net if addr add net if get default addr net addr manual not calling net config init or any other initialization function in application but still networking is getting initialized when application starts despite of this i tried calling net config init in application but its definition is not found if net config settings is disabled please help me to understand how the networking is getting initialized when net config auto init is disabled net config init not called manually and what configuration parameters can be controlled from application in this case i am not calling net config init will it impact anything in networking what is the right way to configure networking in application network configuration in prj conf file config networking y config net y config net n config net tcp y config net sockets y config net sockets posix names y config test random generator y config net config settings n config net config auto init n i am getting below error messages every time while running the application and also when ethernet cable is unplugged and plugged back it is not impacting the functionality but what is the cause of these errors and how to fix it 
how much time is expected to get the interface completely up when ethernet cable is connected while application is running instead of connecting cable before starting application i have observed that it takes few seconds to get the interface up when ethernet cable is connected after starting the application environment os windows ide visual studio code platformio zephyr version
0
7,558
10,679,433,026
IssuesEvent
2019-10-21 19:15:53
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Failed to install script on Windows Server 2016 Datacenter Azure VM
Pri1 automation/svc cxp process-automation/subsvc product-question triaged
Looks like the 'New-OnPremiseHybridWorker' is not being maintained actively. I am having issues to run the script on an Azure VM. I have created an issue on the project. Haven't heard anything. Could you please help me? https://github.com/azureautomation/runbooks/issues/59 --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3092ee3a-3c57-cc53-186b-b454e7d9d190 * Version Independent ID: 6f6b5a07-397a-98a2-6091-941244a77837 * Content: [Azure Automation Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker#installing-a-hybrid-runbook-worker) * Content Source: [articles/automation/automation-hybrid-runbook-worker.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-hybrid-runbook-worker.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
1.0
Failed to install script on Windows Server 2016 Datacenter Azure VM - Looks like the 'New-OnPremiseHybridWorker' is not being maintained actively. I am having issues to run the script on an Azure VM. I have created an issue on the project. Haven't heard anything. Could you please help me? https://github.com/azureautomation/runbooks/issues/59 --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3092ee3a-3c57-cc53-186b-b454e7d9d190 * Version Independent ID: 6f6b5a07-397a-98a2-6091-941244a77837 * Content: [Azure Automation Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker#installing-a-hybrid-runbook-worker) * Content Source: [articles/automation/automation-hybrid-runbook-worker.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-hybrid-runbook-worker.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
process
failed to install script on windows server datacenter azure vm looks like the new onpremisehybridworker is not being maintained actively i am having issues to run the script on an azure vm i have created an issue on the project haven t heard anything could you please help me document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
1
9,740
12,733,130,386
IssuesEvent
2020-06-25 11:43:39
prisma/prisma-client-js
https://api.github.com/repos/prisma/prisma-client-js
closed
Transaction API is not writing the transaction
bug/2-confirmed kind/bug process/candidate team/engines
<!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description <!-- A clear and concise description of what the bug is. --> The transaction API is not writing data to database even though I am getting result back in the transaction object ![image](https://user-images.githubusercontent.com/22195362/85574883-c2354300-b654-11ea-875f-e9e86508cf11.png) ![image](https://user-images.githubusercontent.com/22195362/85574901-c6616080-b654-11ea-899f-ec4276305caa.png) ## How to reproduce See https://github.com/harshit-test-org/prisma-internal-transaction-issue ## Expected behavior It should write data to the database. ## Environment & setup <!-- In which environment does the problem occur --> - OS: MacOS 10.15.5 - Database: Postgres 11 - Prisma version: 2.2.0-dev.1 - Node.js version: 12.16.2
1.0
Transaction API is not writing the transaction - <!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description <!-- A clear and concise description of what the bug is. --> The transaction API is not writing data to database even though I am getting result back in the transaction object ![image](https://user-images.githubusercontent.com/22195362/85574883-c2354300-b654-11ea-875f-e9e86508cf11.png) ![image](https://user-images.githubusercontent.com/22195362/85574901-c6616080-b654-11ea-899f-ec4276305caa.png) ## How to reproduce See https://github.com/harshit-test-org/prisma-internal-transaction-issue ## Expected behavior It should write data to the database. ## Environment & setup <!-- In which environment does the problem occur --> - OS: MacOS 10.15.5 - Database: Postgres 11 - Prisma version: 2.2.0-dev.1 - Node.js version: 12.16.2
process
transaction api is not writing the transaction thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description the transaction api is not writing data to database even though i am getting result back in the transaction object how to reproduce see expected behavior it should write data to the database environment setup os macos database postgres prisma version dev node js version
1
314,621
23,530,718,561
IssuesEvent
2022-08-19 15:04:05
cryostatio/cryostat
https://api.github.com/repos/cryostatio/cryostat
reopened
[Story] Pluggable Discovery API
question high-priority feat needs-documentation
Parent: https://github.com/cryostatio/cryostat/issues/936 Internally we currently have the `MergingPlatformClient` that serves this goal, but only for the hardcoded and included implementations within Cryostat. We need to generify this and expose it as an authenticated API so that API clients can perform their own platform-specific discovery of environment and target nodes and then publish that information to Cryostat, where Cryostat will then store it in its internal database. This API should allow for Create, Read, Update, Delete semantics on the node resources. This will serve parent 1 and 5, and should address portions of 2 as well. - [x] Implement Pluggable Discovery API (draft: #1026) - [ ] Document Pluggable Discovery API endpoints
1.0
[Story] Pluggable Discovery API - Parent: https://github.com/cryostatio/cryostat/issues/936 Internally we currently have the `MergingPlatformClient` that serves this goal, but only for the hardcoded and included implementations within Cryostat. We need to generify this and expose it as an authenticated API so that API clients can perform their own platform-specific discovery of environment and target nodes and then publish that information to Cryostat, where Cryostat will then store it in its internal database. This API should allow for Create, Read, Update, Delete semantics on the node resources. This will serve parent 1 and 5, and should address portions of 2 as well. - [x] Implement Pluggable Discovery API (draft: #1026) - [ ] Document Pluggable Discovery API endpoints
non_process
pluggable discovery api parent internally we currently have the mergingplatformclient that serves this goal but only for the hardcoded and included implementations within cryostat we need to generify this and expose it as an authenticated api so that api clients can perform their own platform specific discovery of environment and target nodes and then publish that information to cryostat where cryostat will then store it in its internal database this api should allow for create read update delete semantics on the node resources this will serve parent and and should address portions of as well implement pluggable discovery api draft document pluggable discovery api endpoints
0
22,701
32,010,047,677
IssuesEvent
2023-09-21 17:20:25
winter-telescope/mirar
https://api.github.com/repos/winter-telescope/mirar
opened
[FEATURE] Distance to crossmatches
enhancement processors
**Is your feature request related to a problem? Please describe.** I'm always frustrated when I cannot see in alerts how far a detection is from a crossmatched source. **Describe the solution you'd like** Modify XMatch to also annotate this info.
1.0
[FEATURE] Distance to crossmatches - **Is your feature request related to a problem? Please describe.** I'm always frustrated when I cannot see in alerts how far a detection is from a crossmatched source. **Describe the solution you'd like** Modify XMatch to also annotate this info.
process
distance to crossmatches is your feature request related to a problem please describe i m always frustrated when i cannot see in alerts how far a detection is from a crossmatched source describe the solution you d like modify xmatch to also annotate this info
1
353,619
10,555,065,308
IssuesEvent
2019-10-03 20:56:04
Th3-Fr3d/pmdbs
https://api.github.com/repos/Th3-Fr3d/pmdbs
closed
Ensure AutomatedTaskFramework.Execute() Thread Safety
bug high priority
Confirmed race condition: AutomatedTaskFramework.Tasks.Execute() is called by the scheduling thread as well as the task management thread
1.0
Ensure AutomatedTaskFramework.Execute() Thread Safety - Confirmed race condition: AutomatedTaskFramework.Tasks.Execute() is called by the scheduling thread as well as the task management thread
non_process
ensure automatedtaskframework execute thread safety confirmed race condition automatedtaskframework tasks execute is called by the scheduling thread as well as the task management thread
0
672,272
22,798,412,641
IssuesEvent
2022-07-11 01:37:15
Elice-SW-2-Team14/Animal-Hospital
https://api.github.com/repos/Elice-SW-2-Team14/Animal-Hospital
closed
[FE] 병원 회원가입 디자인 구현
❗️high-priority 🖥 Frontend
## 🔨 기능 설명 병원 회원가입 디자인 구현 ## 📑 완료 조건 병원 회원가입 디자인 완성 ## 💭 관련 백로그 [[FE] 회원가입 페이지]-[디자인]-[기초 디자인 구성] ## 💭 예상 작업 시간 2h
1.0
[FE] 병원 회원가입 디자인 구현 - ## 🔨 기능 설명 병원 회원가입 디자인 구현 ## 📑 완료 조건 병원 회원가입 디자인 완성 ## 💭 관련 백로그 [[FE] 회원가입 페이지]-[디자인]-[기초 디자인 구성] ## 💭 예상 작업 시간 2h
non_process
병원 회원가입 디자인 구현 🔨 기능 설명 병원 회원가입 디자인 구현 📑 완료 조건 병원 회원가입 디자인 완성 💭 관련 백로그 회원가입 페이지 💭 예상 작업 시간
0
16,993
22,357,275,891
IssuesEvent
2022-06-15 16:46:15
bridgetownrb/bridgetown
https://api.github.com/repos/bridgetownrb/bridgetown
opened
Replace old-school `options = {}` arguments with `**options` where appropriate
process ruby3
Except for cases where we explicitly expect a standard hash to be supplied as an argument rather than kwarg-style calls, we should standardize around the more modern destructuring syntax.
1.0
Replace old-school `options = {}` arguments with `**options` where appropriate - Except for cases where we explicitly expect a standard hash to be supplied as an argument rather than kwarg-style calls, we should standardize around the more modern destructuring syntax.
process
replace old school options arguments with options where appropriate except for cases where we explicitly expect a standard hash to be supplied as an argument rather than kwarg style calls we should standardize around the more modern destructuring syntax
1
18,577
24,558,369,866
IssuesEvent
2022-10-12 17:52:15
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [QA] Alternative thumbnail image is not getting displayed in the participant manager
Bug P0 Participant manager Process: Fixed
Alternative thumbnail image is not getting displayed in the participant manager ![image](https://user-images.githubusercontent.com/86007179/195331559-68ecdf27-960e-40ce-953c-c2a4649706e9.png)
1.0
[PM] [QA] Alternative thumbnail image is not getting displayed in the participant manager - Alternative thumbnail image is not getting displayed in the participant manager ![image](https://user-images.githubusercontent.com/86007179/195331559-68ecdf27-960e-40ce-953c-c2a4649706e9.png)
process
alternative thumbnail image is not getting displayed in the participant manager alternative thumbnail image is not getting displayed in the participant manager
1
8,716
11,853,909,582
IssuesEvent
2020-03-24 23:11:38
MicrosoftDocs/vsts-docs
https://api.github.com/repos/MicrosoftDocs/vsts-docs
closed
Could you add a condition for running jobs in parallel?
Pri1 devops-cicd-process/tech devops/prod
I wanted to run two jobs in a stage in parallel, if this is something common, could you add an option for it? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3f151218-9a11-0078-e038-f96198a76143 * Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9 * Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/conditions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Could you add a condition for running jobs in parallel? - I wanted to run two jobs in a stage in parallel, if this is something common, could you add an option for it? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3f151218-9a11-0078-e038-f96198a76143 * Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9 * Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/conditions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
could you add a condition for running jobs in parallel i wanted to run two jobs in a stage in parallel if this is something common could you add an option for it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
57,518
7,064,738,463
IssuesEvent
2018-01-06 11:09:57
mapbox/mapbox-navigation-ios
https://api.github.com/repos/mapbox/mapbox-navigation-ios
opened
Add a gesture for repeating an instruction
- feature topic: accessibility topic: design topic: voice
Some navigation applications repeat the last spoken instruction when you tap on the turn banner. This way, you can slam your thumb on the top part of the screen without taking your eyes off the road. Nowadays, tapping our turn banner brings down the step table, so we’d have to choose a different gesture for repeating the instruction. Long-pressing, maybe? 🤔 /cc @mapbox/navigation-ios
1.0
Add a gesture for repeating an instruction - Some navigation applications repeat the last spoken instruction when you tap on the turn banner. This way, you can slam your thumb on the top part of the screen without taking your eyes off the road. Nowadays, tapping our turn banner brings down the step table, so we’d have to choose a different gesture for repeating the instruction. Long-pressing, maybe? 🤔 /cc @mapbox/navigation-ios
non_process
add a gesture for repeating an instruction some navigation applications repeat the last spoken instruction when you tap on the turn banner this way you can slam your thumb on the top part of the screen without taking your eyes off the road nowadays tapping our turn banner brings down the step table so we’d have to choose a different gesture for repeating the instruction long pressing maybe 🤔 cc mapbox navigation ios
0
5,837
8,666,590,049
IssuesEvent
2018-11-29 04:59:33
nodejs/node
https://api.github.com/repos/nodejs/node
closed
cat: write error: Resource temporarily unavailable
libuv process
* **Version**: `v6.11.5` * **Platform**: `Linux blade9 4.12.12-gentoo #2 SMP Tue Nov 21 19:16:19 CET 2017 x86_64 Intel(R) Xeon(R) CPU E5450 @ 3.00GHz GenuineIntel GNU/Linux` * **Subsystem**: async stdio? Problem is related to the following issue: https://github.com/nodejs/node-v0.x-archive/issues/3027 Steps to reproduce with `uglifyjs` (2.4.10) - an error occurs randomly: Gets some JS files: ```bash mkdir node-test; cd node-test; wget -q 'https://code.jquery.com/jquery-migrate-3.0.1.js' wget -q 'https://code.jquery.com/jquery-3.3.1.min.js' wre@blade9 ~/node-test $ ls -l total 112 -rw-r--r-- 1 wre wre 86927 Jan 20 18:26 jquery-3.3.1.min.js -rw-r--r-- 1 wre wre 17813 Sep 27 2017 jquery-migrate-3.0.1.js ``` Call `uglifyjs` and try to capture the output: ```bash wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) cat: write error: Resource temporarily unavailable wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) cat: write error: Resource temporarily unavailable wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) cat: write error: Resource temporarily unavailable ``` Workaround using `mktemp`: ```bash (cat jquery-migrate-3.0.1.js | (TMP="$(mktemp)"; uglifyjs 1>"$TMP" 2>/dev/null; cat "$TMP"; rm "$TMP"); cat jquery-3.3.1.min.js)|(cat - >/dev/null) ``` Do you know a better way to make those pipes work?
1.0
cat: write error: Resource temporarily unavailable - * **Version**: `v6.11.5` * **Platform**: `Linux blade9 4.12.12-gentoo #2 SMP Tue Nov 21 19:16:19 CET 2017 x86_64 Intel(R) Xeon(R) CPU E5450 @ 3.00GHz GenuineIntel GNU/Linux` * **Subsystem**: async stdio? Problem is related to the following issue: https://github.com/nodejs/node-v0.x-archive/issues/3027 Steps to reproduce with `uglifyjs` (2.4.10) - an error occurs randomly: Gets some JS files: ```bash mkdir node-test; cd node-test; wget -q 'https://code.jquery.com/jquery-migrate-3.0.1.js' wget -q 'https://code.jquery.com/jquery-3.3.1.min.js' wre@blade9 ~/node-test $ ls -l total 112 -rw-r--r-- 1 wre wre 86927 Jan 20 18:26 jquery-3.3.1.min.js -rw-r--r-- 1 wre wre 17813 Sep 27 2017 jquery-migrate-3.0.1.js ``` Call `uglifyjs` and try to capture the output: ```bash wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) cat: write error: Resource temporarily unavailable wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) cat: write error: Resource temporarily unavailable wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) wre@blade9 ~/node-test $ (cat jquery-migrate-3.0.1.js | uglifyjs 2>/dev/null; cat jquery-3.3.1.min.js)|(cat - >/dev/null) cat: write error: Resource temporarily unavailable ``` Workaround using `mktemp`: ```bash (cat jquery-migrate-3.0.1.js | (TMP="$(mktemp)"; uglifyjs 1>"$TMP" 2>/dev/null; cat "$TMP"; rm "$TMP"); cat jquery-3.3.1.min.js)|(cat - >/dev/null) ``` Do you know a better way to make those pipes work?
process
cat write error resource temporarily unavailable version platform linux gentoo smp tue nov cet intel r xeon r cpu genuineintel gnu linux subsystem async stdio problem is related to the following issue steps to reproduce with uglifyjs an error occurs randomly gets some js files bash mkdir node test cd node test wget q wget q wre node test ls l total rw r r wre wre jan jquery min js rw r r wre wre sep jquery migrate js call uglifyjs and try to capture the output bash wre node test cat jquery migrate js uglifyjs dev null cat jquery min js cat dev null cat write error resource temporarily unavailable wre node test cat jquery migrate js uglifyjs dev null cat jquery min js cat dev null wre node test cat jquery migrate js uglifyjs dev null cat jquery min js cat dev null wre node test cat jquery migrate js uglifyjs dev null cat jquery min js cat dev null cat write error resource temporarily unavailable wre node test cat jquery migrate js uglifyjs dev null cat jquery min js cat dev null wre node test cat jquery migrate js uglifyjs dev null cat jquery min js cat dev null cat write error resource temporarily unavailable workaround using mktemp bash cat jquery migrate js tmp mktemp uglifyjs tmp dev null cat tmp rm tmp cat jquery min js cat dev null do you know a better way to make those pipes work
1
258,615
22,332,449,341
IssuesEvent
2022-06-14 15:31:15
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Brave is crashing when doing a hard refresh or selecting a new tor connection for this site in the TOR window
bug feature/tor QA/Yes release-notes/exclude QA/Test-Plan-Specified regression OS/Desktop
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> The issue is reported by @MadhaviSeelam Brave is crashing when doing a hard refresh or selecting a new tor connection for this site in the TOR window ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Clean profile 1.41.59 2. Open a TOR window 3. Open `check.torproject.org` 4. Confirm the TOR network is connected and `check.torproject.org` is opened 5. Do a hard refresh by clicking on Ctrl+Shift+R or select the `New Tor connection for this site` in the hamburger menu 6. Brave crashes ## Actual result: <!--Please add screenshots if needed--> Brave crashes ``` Jun 14 12:43:30.000 [notice] Owning controller connection has closed -- exiting now. ``` ## Expected result: Should not crash ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> 100% ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.41.59 Chromium: 103.0.5060.42 (Official Build) nightly (64-bit) -- | -- Revision | de0d840bf9439c31bd86bf74f065c31fdf9b208d-refs/branch-heads/5060@{#667} OS | Windows 10 Version 21H2 (Build 19044.1706) ## Version/Channel Information: <!--Does this issue happen on any other channels? Or is it specific to a certain channel?--> - Can you reproduce this issue with the current release? No - Can you reproduce this issue with the beta channel? No - Can you reproduce this issue with the nightly channel? Yes ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? NA - Does the issue resolve itself when disabling Brave Rewards? NA - Is the issue reproducible on the latest version of Chrome? NA ## Miscellaneous Information: <!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue--> cc: @brave/qa-team @rebron @mkarolin @simonhong @emerick
1.0
Brave is crashing when doing a hard refresh or selecting a new tor connection for this site in the TOR window - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> The issue is reported by @MadhaviSeelam Brave is crashing when doing a hard refresh or selecting a new tor connection for this site in the TOR window ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Clean profile 1.41.59 2. Open a TOR window 3. Open `check.torproject.org` 4. Confirm the TOR network is connected and `check.torproject.org` is opened 5. Do a hard refresh by clicking on Ctrl+Shift+R or select the `New Tor connection for this site` in the hamburger menu 6. Brave crashes ## Actual result: <!--Please add screenshots if needed--> Brave crashes ``` Jun 14 12:43:30.000 [notice] Owning controller connection has closed -- exiting now. ``` ## Expected result: Should not crash ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> 100% ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.41.59 Chromium: 103.0.5060.42 (Official Build) nightly (64-bit) -- | -- Revision | de0d840bf9439c31bd86bf74f065c31fdf9b208d-refs/branch-heads/5060@{#667} OS | Windows 10 Version 21H2 (Build 19044.1706) ## Version/Channel Information: <!--Does this issue happen on any other channels? Or is it specific to a certain channel?--> - Can you reproduce this issue with the current release? No - Can you reproduce this issue with the beta channel? No - Can you reproduce this issue with the nightly channel? Yes ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? NA - Does the issue resolve itself when disabling Brave Rewards? NA - Is the issue reproducible on the latest version of Chrome? NA ## Miscellaneous Information: <!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue--> cc: @brave/qa-team @rebron @mkarolin @simonhong @emerick
non_process
brave is crashing when doing a hard refresh or selecting a new tor connection for this site in the tor window have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description the issue is reported by madhaviseelam brave is crashing when doing a hard refresh or selecting a new tor connection for this site in the tor window steps to reproduce clean profile open a tor window open check torproject org confirm the tor network is connected and check torproject org is opened do a hard refresh by clicking on ctrl shift r or select the new tor connection for this site in the hamburger menu brave crashes actual result brave crashes jun owning controller connection has closed exiting now expected result should not crash reproduces how often brave version brave version info brave chromium   official build  nightly  bit revision refs branch heads os windows  version build version channel information can you reproduce this issue with the current release no can you reproduce this issue with the beta channel no can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields na does the issue resolve itself when disabling brave rewards na is the issue reproducible on the latest version of chrome na miscellaneous information cc brave qa team rebron mkarolin simonhong emerick
0
71,660
30,914,124,965
IssuesEvent
2023-08-05 04:03:32
Zahlungsmittel/Zahlungsmittel
https://api.github.com/repos/Zahlungsmittel/Zahlungsmittel
opened
[CLOSED] 1137 publisher id as input field on register
service: wallet frontend feature imported
<a href="https://github.com/ogerly"><img src="https://avatars.githubusercontent.com/u/1324583?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ogerly](https://github.com/ogerly)** _Tuesday Nov 30, 2021 at 13:01 GMT_ _Originally opened as https://github.com/gradido/gradido/pull/1147_ ---- ## 🍰 Pullrequest Integration of the PublisherID ### Issues - fixes #1137 ![FireShot Capture 821 - Gradido Account - localhost](https://user-images.githubusercontent.com/1324583/144051917-b0865b92-58d3-4988-b469-abc7a4529678.png) ![FireShot Capture 822 - Gradido Account - localhost](https://user-images.githubusercontent.com/1324583/144051923-40c14deb-3691-421d-93bc-9597d26bc200.png) ---- _**[ogerly](https://github.com/ogerly)** included the following code: https://github.com/gradido/gradido/pull/1147/commits_
1.0
[CLOSED] 1137 publisher id as input field on register - <a href="https://github.com/ogerly"><img src="https://avatars.githubusercontent.com/u/1324583?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ogerly](https://github.com/ogerly)** _Tuesday Nov 30, 2021 at 13:01 GMT_ _Originally opened as https://github.com/gradido/gradido/pull/1147_ ---- ## 🍰 Pullrequest Integration of the PublisherID ### Issues - fixes #1137 ![FireShot Capture 821 - Gradido Account - localhost](https://user-images.githubusercontent.com/1324583/144051917-b0865b92-58d3-4988-b469-abc7a4529678.png) ![FireShot Capture 822 - Gradido Account - localhost](https://user-images.githubusercontent.com/1324583/144051923-40c14deb-3691-421d-93bc-9597d26bc200.png) ---- _**[ogerly](https://github.com/ogerly)** included the following code: https://github.com/gradido/gradido/pull/1147/commits_
non_process
publisher id as input field on register issue by tuesday nov at gmt originally opened as 🍰 pullrequest integration of the publisherid issues fixes included the following code
0
101,122
16,493,292,431
IssuesEvent
2021-05-25 07:34:02
Azure/iotedgehubdev
https://api.github.com/repos/Azure/iotedgehubdev
closed
iotedgedev.exe flagged as malware in Cylance
security_vulnerability
Our corporate anti-virus Cylance is flagging iotedgedev.exe as malware. VirusTotal and JoeSandbox are also reporting it as malware: https://www.virustotal.com/gui/file/def966125bd2aca5a41bcbb580d1bb999f73cc7326f9d3eb46dd6834bf79693e/detection Probably a false positive again, but we cannot just whitelist it until sure. Can the root cause be found and resolved? Thanks!
True
iotedgedev.exe flagged as malware in Cylance - Our corporate anti-virus Cylance is flagging iotedgedev.exe as malware. VirusTotal and JoeSandbox are also reporting it as malware: https://www.virustotal.com/gui/file/def966125bd2aca5a41bcbb580d1bb999f73cc7326f9d3eb46dd6834bf79693e/detection Probably a false positive again, but we cannot just whitelist it until sure. Can the root cause be found and resolved? Thanks!
non_process
iotedgedev exe flagged as malware in cylance our corporate anti virus cylance is flagging iotedgedev exe as malware virustotal and joesandbox are also reporting it as malware probably a false positive again but we cannot just whitelist it until sure can the root cause be found and resolved thanks
0
3,890
6,820,448,661
IssuesEvent
2017-11-07 13:58:02
yvesgoeleven/BasketLummen
https://api.github.com/repos/yvesgoeleven/BasketLummen
closed
Matches overview should have a date next to it
Size: S State: Next In Line Type: Feature Type: Process Improvement
Heard some complaints about the week overview (of matches) being confusing. A date next to the day should fix this issue - [x] Check whether this a realistic feature - [x] Implement the feature
1.0
Matches overview should have a date next to it - Heard some complaints about the week overview (of matches) being confusing. A date next to the day should fix this issue - [x] Check whether this a realistic feature - [x] Implement the feature
process
matches overview should have a date next to it heard some complaints about the week overview of matches being confusing a date next to the day should fix this issue check whether this a realistic feature implement the feature
1
7,985
11,172,973,531
IssuesEvent
2019-12-29 11:09:33
osquery/osquery
https://api.github.com/repos/osquery/osquery
closed
Audit fails when config is updated
Linux bug process auditing
**Description** The audit (event tables) fail when the configuration is updated at runtime. **How to reproduce** 1. Have an initial config file, e.g., similar to this: ``` // Define a schedule of queries: "schedule": { // "1": { // "query" : "SELECT pid,path,cmdline,cwd,uid,gid,time,parent FROM process_events", // "interval" : 10 // } "2": { "query" : "SELECT * FROM socket_events", "interval" : 10 } } ``` 2. Start osqueryd, enabling process and socket audit. Also use the refresh interval, e.g., `--config_refresh=15`. 3. When osquery started and executed the initial schedule once, define a different schedule in the configuration file, e.g., disable query 1 and activate query 2. **What happens** No audit events will be reported and the following log messages are printed: ``` I0221 19:16:45.931212 7757 auditdnetlink.cpp:601] Failed to set the netlink owner I0221 19:16:48.671166 7733 auditdnetlink.cpp:704] Malformed audit record received ```
1.0
Audit fails when config is updated - **Description** The audit (event tables) fail when the configuration is updated at runtime. **How to reproduce** 1. Have an initial config file, e.g., similar to this: ``` // Define a schedule of queries: "schedule": { // "1": { // "query" : "SELECT pid,path,cmdline,cwd,uid,gid,time,parent FROM process_events", // "interval" : 10 // } "2": { "query" : "SELECT * FROM socket_events", "interval" : 10 } } ``` 2. Start osqueryd, enabling process and socket audit. Also use the refresh interval, e.g., `--config_refresh=15`. 3. When osquery started and executed the initial schedule once, define a different schedule in the configuration file, e.g., disable query 1 and activate query 2. **What happens** No audit events will be reported and the following log messages are printed: ``` I0221 19:16:45.931212 7757 auditdnetlink.cpp:601] Failed to set the netlink owner I0221 19:16:48.671166 7733 auditdnetlink.cpp:704] Malformed audit record received ```
process
audit fails when config is updated description the audit event tables fail when the configuration is updated at runtime how to reproduce have an initial config file e g similar to this define a schedule of queries schedule query select pid path cmdline cwd uid gid time parent from process events interval query select from socket events interval start osqueryd enabling process and socket audit also use the refresh interval e g config refresh when osquery started and executed the initial schedule once define a different schedule in the configuration file e g disable query and activate query what happens no audit events will be reported and the following log messages are printed auditdnetlink cpp failed to set the netlink owner auditdnetlink cpp malformed audit record received
1
16,755
21,924,463,570
IssuesEvent
2022-05-23 01:29:32
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Suggestion: Implement `setresuid` in Node prcoess API
feature request process stale
**Is your feature request related to a problem? Please describe.** I am trying to drop privileges for **parts** of my program. Usually this is achieved using the `setresuid` syscall (at least on Linux). Many programming languages implement this call (such as Ruby, Python, C/C++, Rust or Go) and it basically always works like this: ```python #!/usr/bin/env python3 # run this as root (i.e. through sudo) import os # switch to userid 1000 os.setresuid(1000, 1000, -1) os.system("id") # switch back to root os.setresuid(0, 0, -1) os.system("id") ``` Hoever when using `setuid`/`seteuid` there is no way to get back root privileges (see https://man7.org/linux/man-pages/man2/setuid.2.html). **Describe the solution you'd like** I would love the Node process API to implement not only `setuid` and `seeteuid` but also the `setresuid` syscall to be able to change the EUID and the UID at the same time, while leaving the saved user set ID untouched (-1). **Describe alternatives you've considered** I played around with `process.setuid` and `process.seteuid` without success though. Once root privileges are dropped, there is no way back ... ```javascript #!/usr/bin/env node // Note: Run with root privileges. // Note: This does NOT work. const process = require('process') const { spawn } = require('child_process') const gid = 1000 const uid = 1000 if (process.getuid && process.setuid) { console.log(`Current uid: ${process.getuid()}`) try { process.seteuid(uid) process.setuid(uid) console.log(`New uid: ${process.getuid()}`) } catch (err) { console.log(`Failed to set uid: ${err}`) } try { process.seteuid(0) // order does not matter here, process.setuid(0) // won't work, no matter what's first. console.log(`New uid: ${process.getuid()}`) } catch (err) { console.log(`Failed to set uid: ${err}`) } } ```
1.0
Suggestion: Implement `setresuid` in Node prcoess API - **Is your feature request related to a problem? Please describe.** I am trying to drop privileges for **parts** of my program. Usually this is achieved using the `setresuid` syscall (at least on Linux). Many programming languages implement this call (such as Ruby, Python, C/C++, Rust or Go) and it basically always works like this: ```python #!/usr/bin/env python3 # run this as root (i.e. through sudo) import os # switch to userid 1000 os.setresuid(1000, 1000, -1) os.system("id") # switch back to root os.setresuid(0, 0, -1) os.system("id") ``` Hoever when using `setuid`/`seteuid` there is no way to get back root privileges (see https://man7.org/linux/man-pages/man2/setuid.2.html). **Describe the solution you'd like** I would love the Node process API to implement not only `setuid` and `seeteuid` but also the `setresuid` syscall to be able to change the EUID and the UID at the same time, while leaving the saved user set ID untouched (-1). **Describe alternatives you've considered** I played around with `process.setuid` and `process.seteuid` without success though. Once root privileges are dropped, there is no way back ... ```javascript #!/usr/bin/env node // Note: Run with root privileges. // Note: This does NOT work. const process = require('process') const { spawn } = require('child_process') const gid = 1000 const uid = 1000 if (process.getuid && process.setuid) { console.log(`Current uid: ${process.getuid()}`) try { process.seteuid(uid) process.setuid(uid) console.log(`New uid: ${process.getuid()}`) } catch (err) { console.log(`Failed to set uid: ${err}`) } try { process.seteuid(0) // order does not matter here, process.setuid(0) // won't work, no matter what's first. console.log(`New uid: ${process.getuid()}`) } catch (err) { console.log(`Failed to set uid: ${err}`) } } ```
process
suggestion implement setresuid in node prcoess api is your feature request related to a problem please describe i am trying to drop privileges for parts of my program usually this is achieved using the setresuid syscall at least on linux many programming languages implement this call such as ruby python c c rust or go and it basically always works like this python usr bin env run this as root i e through sudo import os switch to userid os setresuid os system id switch back to root os setresuid os system id hoever when using setuid seteuid there is no way to get back root privileges see describe the solution you d like i would love the node process api to implement not only setuid and seeteuid but also the setresuid syscall to be able to change the euid and the uid at the same time while leaving the saved user set id untouched describe alternatives you ve considered i played around with process setuid and process seteuid without success though once root privileges are dropped there is no way back javascript usr bin env node note run with root privileges note this does not work const process require process const spawn require child process const gid const uid if process getuid process setuid console log current uid process getuid try process seteuid uid process setuid uid console log new uid process getuid catch err console log failed to set uid err try process seteuid order does not matter here process setuid won t work no matter what s first console log new uid process getuid catch err console log failed to set uid err
1
29,697
13,160,508,623
IssuesEvent
2020-08-10 17:43:58
microsoft/BotFramework-WebChat
https://api.github.com/repos/microsoft/BotFramework-WebChat
opened
Investigate nonce strategy for updated Content Security Policy
Bot Services Enhancement customer-reported
https://portal.microsofticm.com/imp/v3/incidents/details/199836319/home As a library, Web Chat cannot implement this CSP for users who need it. However, we should provide a mechanism to accept a nonce for Web Chat's inline styles.
1.0
Investigate nonce strategy for updated Content Security Policy - https://portal.microsofticm.com/imp/v3/incidents/details/199836319/home As a library, Web Chat cannot implement this CSP for users who need it. However, we should provide a mechanism to accept a nonce for Web Chat's inline styles.
non_process
investigate nonce strategy for updated content security policy as a library web chat cannot implement this csp for users who need it however we should provide a mechanism to accept a nonce for web chat s inline styles
0
141,757
18,989,586,829
IssuesEvent
2021-11-22 04:38:10
ChoeMinji/jquery-3.3.0
https://api.github.com/repos/ChoeMinji/jquery-3.3.0
opened
CVE-2020-15366 (Medium) detected in ajv-5.5.2.tgz
security vulnerability
## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-5.5.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz">https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz</a></p> <p>Path to dependency file: jquery-3.3.0/package.json</p> <p>Path to vulnerable library: jquery-3.3.0/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - grunt-eslint-20.0.0.tgz (Root Library) - eslint-4.19.1.tgz - :x: **ajv-5.5.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/jquery-3.3.0/commit/ceecb0808664406acced37f73d2f7bd52b686a97">ceecb0808664406acced37f73d2f7bd52b686a97</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15366 (Medium) detected in ajv-5.5.2.tgz - ## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-5.5.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz">https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz</a></p> <p>Path to dependency file: jquery-3.3.0/package.json</p> <p>Path to vulnerable library: jquery-3.3.0/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - grunt-eslint-20.0.0.tgz (Root Library) - eslint-4.19.1.tgz - :x: **ajv-5.5.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/jquery-3.3.0/commit/ceecb0808664406acced37f73d2f7bd52b686a97">ceecb0808664406acced37f73d2f7bd52b686a97</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) 
<p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in ajv tgz cve medium severity vulnerability vulnerable library ajv tgz another json schema validator library home page a href path to dependency file jquery package json path to vulnerable library jquery node modules ajv package json dependency hierarchy grunt eslint tgz root library eslint tgz x ajv tgz vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv step up your open source security game with whitesource
0
991
3,456,550,232
IssuesEvent
2015-12-18 02:11:44
onyx-platform/onyx
https://api.github.com/repos/onyx-platform/onyx
closed
Get onyx-elasticsearch in release pipeline
release-process
It doesn't run under CI or have release scripts yet.
1.0
Get onyx-elasticsearch in release pipeline - It doesn't run under CI or have release scripts yet.
process
get onyx elasticsearch in release pipeline it doesn t run under ci or have release scripts yet
1
21,601
30,004,457,350
IssuesEvent
2023-06-26 11:30:34
kdgregory/log4j-aws-appenders
https://api.github.com/repos/kdgregory/log4j-aws-appenders
closed
Batch logging doesn't work with Log4J2 CloudWatch appender
bug in-process
The method `generateWriterConfig()` doesn't copy this setting from the Log4J2 configuration object to the internal configuration object.
1.0
Batch logging doesn't work with Log4J2 CloudWatch appender - The method `generateWriterConfig()` doesn't copy this setting from the Log4J2 configuration object to the internal configuration object.
process
batch logging doesn t work with cloudwatch appender the method generatewriterconfig doesn t copy this setting from the configuration object to the internal configuration object
1
61,269
8,506,717,654
IssuesEvent
2018-10-30 17:13:40
sys-bio/tellurium
https://api.github.com/repos/sys-bio/tellurium
opened
Message from steadystate needs more info
documentation enhancement
If a user gets the following messages from the steady state solver: RuntimeError: Initial approximation routine failed. Try turning off allow_presimulation flag to False; Failed to converge while running approximation routine. Try increasing the time or maximum number of iteration. Model might not have a steady state. where it say try setting allow_presimulation flag to False, they have no idea how to set this flag to false. Include with the message something related to r.steadyStateSolver, eg "To change the allow_presimulation flag or any other other property of the steady state solver, use the command: "r.steadyStateSolver.allow_presimulation = False", where "r" is a roadrunner model."
1.0
Message from steadystate needs more info - If a user gets the following messages from the steady state solver: RuntimeError: Initial approximation routine failed. Try turning off allow_presimulation flag to False; Failed to converge while running approximation routine. Try increasing the time or maximum number of iteration. Model might not have a steady state. where it say try setting allow_presimulation flag to False, they have no idea how to set this flag to false. Include with the message something related to r.steadyStateSolver, eg "To change the allow_presimulation flag or any other other property of the steady state solver, use the command: "r.steadyStateSolver.allow_presimulation = False", where "r" is a roadrunner model."
non_process
message from steadystate needs more info if a user gets the following messages from the steady state solver runtimeerror initial approximation routine failed try turning off allow presimulation flag to false failed to converge while running approximation routine try increasing the time or maximum number of iteration model might not have a steady state where it say try setting allow presimulation flag to false they have no idea how to set this flag to false include with the message something related to r steadystatesolver eg to change the allow presimulation flag or any other other property of the steady state solver use the command r steadystatesolver allow presimulation false where r is a roadrunner model
0