column        dtype          min      max
Unnamed: 0    int64          0        832k
id            float64        2.49B    32.1B
type          stringclasses  1 value
created_at    stringlengths  19       19
repo          stringlengths  4        112
repo_url      stringlengths  33       141
action        stringclasses  3 values
title         stringlengths  1        1.02k
labels        stringlengths  4        1.54k
body          stringlengths  1        262k
index         stringclasses  17 values
text_combine  stringlengths  95       262k
label         stringclasses  2 values
text          stringlengths  96       252k
binary_label  int64          0        1
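From the rows below, `binary_label` appears to encode `label` as "test" -> 1 and "non_test" -> 0. A minimal sketch of checking that correspondence on an in-memory sample (the sample rows and field subset are copied from this dump; the full dataset file and its storage format are not named here, so nothing is loaded from disk):

```python
from collections import Counter

# Tiny in-memory sample mirroring a few of the columns above;
# values are copied from rows shown in this dump.
rows = [
    {"repo": "curl/curl", "action": "closed", "label": "non_test", "binary_label": 0},
    {"repo": "perl6/whateverable", "action": "reopened", "label": "test", "binary_label": 1},
    {"repo": "dotnet/roslyn", "action": "closed", "label": "test", "binary_label": 1},
]

# Assumed encoding, consistent with every complete row in this dump:
# "test" -> 1, "non_test" -> 0.
assert all(r["binary_label"] == (1 if r["label"] == "test" else 0) for r in rows)

print(Counter(r["label"] for r in rows))  # Counter({'test': 2, 'non_test': 1})
```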
Unnamed: 0: 203,242
id: 15,875,519,143
type: IssuesEvent
created_at: 2021-04-09 07:06:32
repo: curl/curl
repo_url: https://api.github.com/repos/curl/curl
action: closed
title: -F for emails lacks documentation
labels: SMTP cmdline tool documentation
body: I have searched the entire internet for this. I post it here because it literally doesn't work. Manpage reference: [-F, --form <name=content>](https://curl.se/docs/manpage.html#-F) Example: the following command sends an SMTP mime e-mail consisting in an inline part in two alternative formats: plain text and HTML. It attaches a text file: ``` curl -F '=(;type=multipart/alternative' \ -F '=plain text message' \ -F '= <body>HTML message</body>;type=text/html' \ -F '=)' -F '=@textfile.txt' ... smtp://example.com ``` How in heavens name do you do this? No matter what I try, it says file not found. Honestly, at this point it can even be a text/plain single part email. I seriously doubt ANYONE can get this work.
index: 1.0
text_combine: -F for emails lacks documentation - I have searched the entire internet for this. I post it here because it literally doesn't work. Manpage reference: [-F, --form <name=content>](https://curl.se/docs/manpage.html#-F) Example: the following command sends an SMTP mime e-mail consisting in an inline part in two alternative formats: plain text and HTML. It attaches a text file: ``` curl -F '=(;type=multipart/alternative' \ -F '=plain text message' \ -F '= <body>HTML message</body>;type=text/html' \ -F '=)' -F '=@textfile.txt' ... smtp://example.com ``` How in heavens name do you do this? No matter what I try, it says file not found. Honestly, at this point it can even be a text/plain single part email. I seriously doubt ANYONE can get this work.
label: non_test
text: f for emails lacks documentation i have searched the entire internet for this i post it here because it literally doesn t work manpage reference example the following command sends an smtp mime e mail consisting in an inline part in two alternative formats plain text and html it attaches a text file curl f type multipart alternative f plain text message f html message type text html f f textfile txt smtp example com how in heavens name do you do this no matter what i try it says file not found honestly at this point it can even be a text plain single part email i seriously doubt anyone can get this work
binary_label: 0
Unnamed: 0: 42,846
id: 5,478,031,190
type: IssuesEvent
created_at: 2017-03-12 14:33:00
repo: perl6/whateverable
repo_url: https://api.github.com/repos/perl6/whateverable
action: reopened
title: Bisectable and builds with no perl6 executable
labels: bisectable testneeded
body: When Bisectable stumbles upon a build that has no ``perl6`` executable (there are some builds like this), it just drops the whole thing. Here is an example: https://irclog.perlgeek.de/perl6-dev/2017-03-01#i_14185671
index: 1.0
text_combine: Bisectable and builds with no perl6 executable - When Bisectable stumbles upon a build that has no ``perl6`` executable (there are some builds like this), it just drops the whole thing. Here is an example: https://irclog.perlgeek.de/perl6-dev/2017-03-01#i_14185671
label: test
text: bisectable and builds with no executable when bisectable stumbles upon a build that has no executable there are some builds like this it just drops the whole thing here is an example
binary_label: 1
Unnamed: 0: 53,656
id: 6,340,475,071
type: IssuesEvent
created_at: 2017-07-27 10:59:50
repo: FlightControl-Master/MOOSE
repo_url: https://api.github.com/repos/FlightControl-Master/MOOSE
action: closed
title: DESIGNATE: Spam Message "Can't Mark.."
labels: enhancement implemented ready for testing
body: When I ordered to start to lase a group of enemy ground units, spam message "Can't Mark.." appear for every units wich the Recce can't detect.
index: 1.0
text_combine: DESIGNATE: Spam Message "Can't Mark.." - When I ordered to start to lase a group of enemy ground units, spam message "Can't Mark.." appear for every units wich the Recce can't detect.
label: test
text: designate spam message can t mark when i ordered to start to lase a group of enemy ground units spam message can t mark appear for every units wich the recce can t detect
binary_label: 1
Unnamed: 0: 12,998
id: 21,628,939,762
type: IssuesEvent
created_at: 2022-05-05 07:37:41
repo: renovatebot/renovate
repo_url: https://api.github.com/repos/renovatebot/renovate
action: opened
title: Support Nexus repository manager for RubyGems
labels: type:feature status:requirements priority-5-triage
body: ### What would you like Renovate to be able to do? Nexus repository manager doesn't support the '/api/v1/' pattern for RubyGem discovery, however, `bundler` does support installing Gems when the Nexus URL is used as the `source` in the `Gemfile`. Following discussion here: https://github.com/renovatebot/renovate/discussions/15451 ### If you have any ideas on how this should be implemented, please tell us here. The `source` used for `bundler` is just a URL link to the Gem group rather than an API endpoint. Should maybe take that into account. ### Is this a feature you are interested in implementing yourself? No
index: 1.0
text_combine: Support Nexus repository manager for RubyGems - ### What would you like Renovate to be able to do? Nexus repository manager doesn't support the '/api/v1/' pattern for RubyGem discovery, however, `bundler` does support installing Gems when the Nexus URL is used as the `source` in the `Gemfile`. Following discussion here: https://github.com/renovatebot/renovate/discussions/15451 ### If you have any ideas on how this should be implemented, please tell us here. The `source` used for `bundler` is just a URL link to the Gem group rather than an API endpoint. Should maybe take that into account. ### Is this a feature you are interested in implementing yourself? No
label: non_test
text: support nexus repository manager for rubygems what would you like renovate to be able to do nexus repository manager doesn t support the api pattern for rubygem discovery however bundler does support installing gems when the nexus url is used as the source in the gemfile following discussion here if you have any ideas on how this should be implemented please tell us here the source used for bundler is just a url link to the gem group rather than an api endpoint should maybe take that into account is this a feature you are interested in implementing yourself no
binary_label: 0
Unnamed: 0: 233,795
id: 19,061,010,105
type: IssuesEvent
created_at: 2021-11-26 07:44:29
repo: thelfer/tfel
repo_url: https://api.github.com/repos/thelfer/tfel
action: closed
title: [mtest] Imposed inner radius evolution in pipe modelling
labels: enhancement mtest
body: Dear Thomas, I would like to impose via the module `PTest` the evolution of the inner radius of a pipe. Is it possible to do this with the module `PTest` in `MTest` ? Best regards, Fabien
index: 1.0
text_combine: [mtest] Imposed inner radius evolution in pipe modelling - Dear Thomas, I would like to impose via the module `PTest` the evolution of the inner radius of a pipe. Is it possible to do this with the module `PTest` in `MTest` ? Best regards, Fabien
label: test
text: imposed inner radius evolution in pipe modelling dear thomas i would like to impose via the module ptest the evolution of the inner radius of a pipe is it possible to do this with the module ptest in mtest best regards fabien
binary_label: 1
Unnamed: 0: 158,146
id: 12,404,304,693
type: IssuesEvent
created_at: 2020-05-21 15:19:08
repo: dotnet/roslyn
repo_url: https://api.github.com/repos/dotnet/roslyn
action: closed
title: ArgumentOutOfRangeException thrown in InteractiveWindow_InProc.GetLastReplOutput
labels: Area-IDE Flaky Integration-Test
body: ``` System.ArgumentOutOfRangeException : StartIndex cannot be less than zero. Parameter name: startIndex Server stack trace: at System.String.Substring(Int32 startIndex, Int32 length) at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.GetLastReplOutput() at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.WaitForPredicate(Func`1 getValue, Func`2 isExpectedValue) at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.WaitForLastReplOutputContains(String outputText) at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.WaitForLastReplOutputContains(String outputText) at Microsoft.VisualStudio.IntegrationTest.Utilities.OutOfProcess.InteractiveWindow_OutOfProc.WaitForLastReplOutputContains(String outputText) in /_/src/VisualStudio/IntegrationTest/TestUtilities/OutOfProcess/InteractiveWindow_OutOfProc.cs:line 76 at Roslyn.VisualStudio.IntegrationTests.CSharp.CSharpInteractive.ForStatement() in /_/src/VisualStudio/IntegrationTest/IntegrationTests/CSharp/CSharpInteractive.cs:line 40 ``` There are a couple of other open issues for this test, but the failures there do not appear to be related to this stack. Failed in https://github.com/dotnet/roslyn/pull/43962. https://dev.azure.com/dnceng/public/_build/results?buildId=630588&view=ms.vss-test-web.build-test-results-tab&runId=19673644&resultId=100468&paneView=debug
index: 1.0
text_combine: ArgumentOutOfRangeException thrown in InteractiveWindow_InProc.GetLastReplOutput - ``` System.ArgumentOutOfRangeException : StartIndex cannot be less than zero. Parameter name: startIndex Server stack trace: at System.String.Substring(Int32 startIndex, Int32 length) at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.GetLastReplOutput() at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.WaitForPredicate(Func`1 getValue, Func`2 isExpectedValue) at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.WaitForLastReplOutputContains(String outputText) at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) at Microsoft.VisualStudio.IntegrationTest.Utilities.InProcess.InteractiveWindow_InProc.WaitForLastReplOutputContains(String outputText) at Microsoft.VisualStudio.IntegrationTest.Utilities.OutOfProcess.InteractiveWindow_OutOfProc.WaitForLastReplOutputContains(String outputText) in /_/src/VisualStudio/IntegrationTest/TestUtilities/OutOfProcess/InteractiveWindow_OutOfProc.cs:line 76 at Roslyn.VisualStudio.IntegrationTests.CSharp.CSharpInteractive.ForStatement() in /_/src/VisualStudio/IntegrationTest/IntegrationTests/CSharp/CSharpInteractive.cs:line 40 ``` There are a couple of other open issues for this test, but the failures there do not appear to be related to this stack. Failed in https://github.com/dotnet/roslyn/pull/43962. https://dev.azure.com/dnceng/public/_build/results?buildId=630588&view=ms.vss-test-web.build-test-results-tab&runId=19673644&resultId=100468&paneView=debug
label: test
text: argumentoutofrangeexception thrown in interactivewindow inproc getlastreploutput system argumentoutofrangeexception startindex cannot be less than zero parameter name startindex server stack trace at system string substring startindex length at microsoft visualstudio integrationtest utilities inprocess interactivewindow inproc getlastreploutput at microsoft visualstudio integrationtest utilities inprocess interactivewindow inproc waitforpredicate func getvalue func isexpectedvalue at microsoft visualstudio integrationtest utilities inprocess interactivewindow inproc waitforlastreploutputcontains string outputtext at system runtime remoting messaging stackbuildersink privateprocessmessage intptr md object args object server object outargs at system runtime remoting messaging stackbuildersink syncprocessmessage imessage msg exception rethrown at at system runtime remoting proxies realproxy handlereturnmessage imessage reqmsg imessage retmsg at system runtime remoting proxies realproxy privateinvoke messagedata msgdata type at microsoft visualstudio integrationtest utilities inprocess interactivewindow inproc waitforlastreploutputcontains string outputtext at microsoft visualstudio integrationtest utilities outofprocess interactivewindow outofproc waitforlastreploutputcontains string outputtext in src visualstudio integrationtest testutilities outofprocess interactivewindow outofproc cs line at roslyn visualstudio integrationtests csharp csharpinteractive forstatement in src visualstudio integrationtest integrationtests csharp csharpinteractive cs line there are a couple of other open issues for this test but the failures there do not appear to be related to this stack failed in
binary_label: 1
Unnamed: 0: 150,399
id: 11,959,092,794
type: IssuesEvent
created_at: 2020-04-04 20:33:20
repo: forseti-security/forseti-security
repo_url: https://api.github.com/repos/forseti-security/forseti-security
action: opened
title: Migrate model use test to pytest
labels: module: testing priority: p3 triaged: yes
body: As part of migration from inspec to pytest, replace the model use test with a pytest equivalent. Setup the test so that other client tests can be gradually added.
index: 1.0
text_combine: Migrate model use test to pytest - As part of migration from inspec to pytest, replace the model use test with a pytest equivalent. Setup the test so that other client tests can be gradually added.
label: test
text: migrate model use test to pytest as part of migration from inspec to pytest replace the model use test with a pytest equivalent setup the test so that other client tests can be gradually added
binary_label: 1
Unnamed: 0: 328,093
id: 28,101,067,190
type: IssuesEvent
created_at: 2023-03-30 19:32:44
repo: DevelopingSpace/starchart
repo_url: https://api.github.com/repos/DevelopingSpace/starchart
action: closed
title: Test for CRUD operations in models
labels: category: testing
body: As part of #97, we need to have some tests for CRUD functions in PR #228
index: 1.0
text_combine: Test for CRUD operations in models - As part of #97, we need to have some tests for CRUD functions in PR #228
label: test
text: test for crud operations in models as part of we need to have some tests for crud functions in pr
binary_label: 1
Unnamed: 0: 42,331
id: 22,499,890,105
type: IssuesEvent
created_at: 2022-06-23 10:49:57
repo: tarantool/tarantool
repo_url: https://api.github.com/repos/tarantool/tarantool
action: closed
title: sql: Analyze works incorrectly
labels: bug sql performance
body: https://github.com/tarantool/tarantool/commit/e3ec3d47a0ec520206fbb33d3f16ce448e3ad279?diff=unified#diff-8dc039e198a8fe0a2d70c89a7aca8a18R1105 This is a strange change. It seems that rows in `_sql_stat4` should be unique and `analyze` should not replace rows which were created by itself just before. Here are some useful snippets: Src to repeat key duplicate in `_sql_stat4` ``` CREATE TABLE t1(a primary key ,b); CREATE INDEX t1b ON t1(b); insert into t1 values(1,2); insert into t1 values(2,2); analyze; ``` Part of analyze vdbe: ``` 101> 108 String8 0 50 0 T1B 00 r[50]='T1B'; Analysis for T1.T1B [480/1903] 101> 109 OpenRead 6 526337 0 k(1,) 00 root=526337; T1B 101> 110 Count 6 48 0 00 r[48]=count() 101> 111 Integer 1 46 0 00 r[46]=1 101> 112 Integer 1 47 0 00 r[47]=1 101> 113 Function0 0 46 45 stat_init(3) 03 r[45]=func(r[46..48]) 101> 114 Rewind 6 146 0 00 101> 115 Integer 0 46 0 00 r[46]=0 101> 116 Goto 0 122 0 00 101> 117 Integer 0 46 0 00 r[46]=0 101> 118 Column 6 1 48 00 r[48]= 101> 119 Ne 48 122 52 (BINARY) 80 if r[52]!=r[48] goto 122 101> 120 Integer 1 46 0 00 r[46]=1 101> 121 Goto 0 123 0 00 101> 122 Column 6 1 52 00 r[52]= 101> 123 Column 6 0 54 00 r[54]=A 101> 124 MakeRecord 54 1 47 00 r[47]=mkrec(r[54]) 101> 125 Function0 1 45 48 stat_push(3) 03 r[48]=func(r[45..47]) 101> 126 Next 6 117 0 00 ``` Draw extra attention to lines 118, 119 and 122
index: True
text_combine: sql: Analyze works incorrectly - https://github.com/tarantool/tarantool/commit/e3ec3d47a0ec520206fbb33d3f16ce448e3ad279?diff=unified#diff-8dc039e198a8fe0a2d70c89a7aca8a18R1105 This is a strange change. It seems that rows in `_sql_stat4` should be unique and `analyze` should not replace rows which were created by itself just before. Here are some useful snippets: Src to repeat key duplicate in `_sql_stat4` ``` CREATE TABLE t1(a primary key ,b); CREATE INDEX t1b ON t1(b); insert into t1 values(1,2); insert into t1 values(2,2); analyze; ``` Part of analyze vdbe: ``` 101> 108 String8 0 50 0 T1B 00 r[50]='T1B'; Analysis for T1.T1B [480/1903] 101> 109 OpenRead 6 526337 0 k(1,) 00 root=526337; T1B 101> 110 Count 6 48 0 00 r[48]=count() 101> 111 Integer 1 46 0 00 r[46]=1 101> 112 Integer 1 47 0 00 r[47]=1 101> 113 Function0 0 46 45 stat_init(3) 03 r[45]=func(r[46..48]) 101> 114 Rewind 6 146 0 00 101> 115 Integer 0 46 0 00 r[46]=0 101> 116 Goto 0 122 0 00 101> 117 Integer 0 46 0 00 r[46]=0 101> 118 Column 6 1 48 00 r[48]= 101> 119 Ne 48 122 52 (BINARY) 80 if r[52]!=r[48] goto 122 101> 120 Integer 1 46 0 00 r[46]=1 101> 121 Goto 0 123 0 00 101> 122 Column 6 1 52 00 r[52]= 101> 123 Column 6 0 54 00 r[54]=A 101> 124 MakeRecord 54 1 47 00 r[47]=mkrec(r[54]) 101> 125 Function0 1 45 48 stat_push(3) 03 r[48]=func(r[45..47]) 101> 126 Next 6 117 0 00 ``` Draw extra attention to lines 118, 119 and 122
label: non_test
text: sql analyze works incorrectly this is a strange change it seems that rows in sql should be unique and analyze should not replace rows which were created by itself just before here are some useful snippets src to repeat key duplicate in sql create table a primary key b create index on b insert into values insert into values analyze part of analyze vdbe r analysis for openread k root count r count integer r integer r stat init r func r rewind integer r goto integer r column r ne binary if r r goto integer r goto column r column r a makerecord r mkrec r stat push r func r next draw extra attention to lines and
binary_label: 0
Unnamed: 0: 53,312
id: 13,153,224,763
type: IssuesEvent
created_at: 2020-08-10 02:27:45
repo: linagora/james-project
repo_url: https://api.github.com/repos/linagora/james-project
action: closed
title: Fasten CI
labels: build-time
body: Here are the maven modules that takes the most time to test. We need to find a way to make things faster... ``` [INFO] Apache James :: Mailbox :: Tools :: Indexer ........ SUCCESS [02:02 min] [INFO] Apache James :: Server :: MailRepository :: Cassandra SUCCESS [02:13 min] [INFO] Apache James :: Server :: Web Admin :: mailbox ..... SUCCESS [02:41 min] [INFO] Apache James :: Server :: Cassandra/Ldap with RabbitMQ - guice injection SUCCESS [03:05 min] [INFO] apache-james-backends-es ........................... SUCCESS [03:06 min] [INFO] Apache James :: Server :: Web Admin server integration tests :: Memory SUCCESS [03:19 min] [INFO] Apache James :: Server :: Mailets Integration Testing SUCCESS [03:30 min] [INFO] Apache James :: Server :: Data :: Cassandra Persistence SUCCESS [04:18 min] [INFO] Apache James RabbitMQ backend ...................... SUCCESS [05:09 min] [INFO] Apache James :: Server :: JMAP (draft) :: Cassandra Integration testing SUCCESS [05:11 min] [INFO] Apache James :: Server :: Cassandra - guice injection SUCCESS [05:48 min] [INFO] Apache James :: Server :: Blob :: Cassandra ........ SUCCESS [06:12 min] [INFO] Apache James MPT Imap Mailbox - Cassandra .......... SUCCESS [07:02 min] [INFO] Apache James :: Server :: JMAP (draft) :: Memory Integration testing SUCCESS [07:54 min] [INFO] Apache James :: Server :: JMAP (draft) :: RabbitMQ + Object Store + Cassandra Integration testing SUCCESS [08:37 min] [INFO] Apache James :: Server :: Task :: Distributed ...... SUCCESS [09:22 min] [INFO] Apache James :: Mailbox :: Cassandra ............... SUCCESS [10:17 min] [INFO] Apache James :: Server :: Cassandra with RabbitMQ - guice injection SUCCESS [10:19 min] [INFO] Apache James :: Server :: Web Admin server integration tests :: Distributed SUCCESS [11:51 min] [INFO] Apache James :: Server :: Blob :: Object storage ... SUCCESS [12:31 min] [INFO] Apache James :: Server :: Mail Queue :: RabbitMQ ... SUCCESS [15:17 min] [INFO] Apache James :: Mailbox :: Event :: RabbitMQ implementation SUCCESS [28:36 min] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 03:45 h [INFO] Finished at: 2020-07-24T15:41:30Z [INFO] ------------------------------------------------------------------------ ```
index: 1.0
text_combine: Fasten CI - Here are the maven modules that takes the most time to test. We need to find a way to make things faster... ``` [INFO] Apache James :: Mailbox :: Tools :: Indexer ........ SUCCESS [02:02 min] [INFO] Apache James :: Server :: MailRepository :: Cassandra SUCCESS [02:13 min] [INFO] Apache James :: Server :: Web Admin :: mailbox ..... SUCCESS [02:41 min] [INFO] Apache James :: Server :: Cassandra/Ldap with RabbitMQ - guice injection SUCCESS [03:05 min] [INFO] apache-james-backends-es ........................... SUCCESS [03:06 min] [INFO] Apache James :: Server :: Web Admin server integration tests :: Memory SUCCESS [03:19 min] [INFO] Apache James :: Server :: Mailets Integration Testing SUCCESS [03:30 min] [INFO] Apache James :: Server :: Data :: Cassandra Persistence SUCCESS [04:18 min] [INFO] Apache James RabbitMQ backend ...................... SUCCESS [05:09 min] [INFO] Apache James :: Server :: JMAP (draft) :: Cassandra Integration testing SUCCESS [05:11 min] [INFO] Apache James :: Server :: Cassandra - guice injection SUCCESS [05:48 min] [INFO] Apache James :: Server :: Blob :: Cassandra ........ SUCCESS [06:12 min] [INFO] Apache James MPT Imap Mailbox - Cassandra .......... SUCCESS [07:02 min] [INFO] Apache James :: Server :: JMAP (draft) :: Memory Integration testing SUCCESS [07:54 min] [INFO] Apache James :: Server :: JMAP (draft) :: RabbitMQ + Object Store + Cassandra Integration testing SUCCESS [08:37 min] [INFO] Apache James :: Server :: Task :: Distributed ...... SUCCESS [09:22 min] [INFO] Apache James :: Mailbox :: Cassandra ............... SUCCESS [10:17 min] [INFO] Apache James :: Server :: Cassandra with RabbitMQ - guice injection SUCCESS [10:19 min] [INFO] Apache James :: Server :: Web Admin server integration tests :: Distributed SUCCESS [11:51 min] [INFO] Apache James :: Server :: Blob :: Object storage ... SUCCESS [12:31 min] [INFO] Apache James :: Server :: Mail Queue :: RabbitMQ ... SUCCESS [15:17 min] [INFO] Apache James :: Mailbox :: Event :: RabbitMQ implementation SUCCESS [28:36 min] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 03:45 h [INFO] Finished at: 2020-07-24T15:41:30Z [INFO] ------------------------------------------------------------------------ ```
label: non_test
text: fasten ci here are the maven modules that takes the most time to test we need to find a way to make things faster apache james mailbox tools indexer success apache james server mailrepository cassandra success apache james server web admin mailbox success apache james server cassandra ldap with rabbitmq guice injection success apache james backends es success apache james server web admin server integration tests memory success apache james server mailets integration testing success apache james server data cassandra persistence success apache james rabbitmq backend success apache james server jmap draft cassandra integration testing success apache james server cassandra guice injection success apache james server blob cassandra success apache james mpt imap mailbox cassandra success apache james server jmap draft memory integration testing success apache james server jmap draft rabbitmq object store cassandra integration testing success apache james server task distributed success apache james mailbox cassandra success apache james server cassandra with rabbitmq guice injection success apache james server web admin server integration tests distributed success apache james server blob object storage success apache james server mail queue rabbitmq success apache james mailbox event rabbitmq implementation success build success total time h finished at
binary_label: 0
Unnamed: 0: 122,551
id: 12,155,304,813
type: IssuesEvent
created_at: 2020-04-25 12:32:46
repo: mykeels/crypto-dip-alert
repo_url: https://api.github.com/repos/mykeels/crypto-dip-alert
action: closed
title: There should be a reference to Coincap.io for their API
labels: bug documentation good first issue
body: # Bug Description The software in this repo depends on Coincap.io's free and open API. There is no mention of this, except in the code. ## Expected Behaviour It should be mentioned in the root README.
index: 1.0
text_combine: There should be a reference to Coincap.io for their API - # Bug Description The software in this repo depends on Coincap.io's free and open API. There is no mention of this, except in the code. ## Expected Behaviour It should be mentioned in the root README.
label: non_test
text: there should be a reference to coincap io for their api bug description the software in this repo depends on coincap io s free and open api there is no mention of this except in the code expected behaviour it should be mentioned in the root readme
binary_label: 0
Unnamed: 0: 383,087
id: 26,531,923,993
type: IssuesEvent
created_at: 2023-01-19 13:09:11
repo: pik-piam/mrdrivers
repo_url: https://api.github.com/repos/pik-piam/mrdrivers
action: opened
title: No citable source for GDP data.
labels: bug documentation
body: The documentation of [`calcGDP()`](https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R) does not contain any citable source for GDP data. Like one would need for _doing science_. - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L49 And what happens if `GDPCalib` is `NULL`? - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L65 And what happens if `GDPPast` is `NULL`? - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L82 And what happens if `GDPFuture` is `NULL`? - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L85 Besides being completely insufficient as a source, the link is also dead. I would really like to know how https://github.com/pik-piam/mrdrivers/commit/ec02c71c7599a5bd2025ef8866d68390214399be was supposed to provide anything citable. ----- https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L25-L28 At least somebody had fun.
index: 1.0
text_combine: No citable source for GDP data. - The documentation of [`calcGDP()`](https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R) does not contain any citable source for GDP data. Like one would need for _doing science_. - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L49 And what happens if `GDPCalib` is `NULL`? - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L65 And what happens if `GDPPast` is `NULL`? - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L82 And what happens if `GDPFuture` is `NULL`? - https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L85 Besides being completely insufficient as a source, the link is also dead. I would really like to know how https://github.com/pik-piam/mrdrivers/commit/ec02c71c7599a5bd2025ef8866d68390214399be was supposed to provide anything citable. ----- https://github.com/pik-piam/mrdrivers/blob/eb1da620d641ff8df83dbe8861a0e320400f3527/R/calcGDP.R#L25-L28 At least somebody had fun.
label: non_test
text: no citable source for gdp data the documentation of does not contain any citable source for gdp data like one would need for doing science and what happens if gdpcalib is null and what happens if gdppast is null and what happens if gdpfuture is null besides being completely insufficient as a source the link is also dead i would really like to know how was supposed to provide anything citable at least somebody had fun
binary_label: 0
Unnamed: 0: 223,495
id: 17,603,223,677
type: IssuesEvent
created_at: 2021-08-17 14:12:07
repo: cseelhoff/RimThreaded
repo_url: https://api.github.com/repos/cseelhoff/RimThreaded
action: closed
title: "Replace Stuff" pretty frequent errors
labels: Bug Reproducible Accepted For Testing Mod Incompatibility Confirmed 1.3.X.X 2.0.X.X 2.2.X.X 2.3.X.X
body: **Describe the bug** IMPORTANT: Please first search existing bugs to ensure you are not creating a duplicate bug report. errors with some frequency **To Reproduce (VERY IMPORTANT)** Steps to reproduce the behavior: 1. load any save 2. hit play 3. see errors **Error Log** ``` Exception ticking SteamGeyser255415 (at (258, 0, 198)): System.InvalidOperationException: Collection was modified; enumeration operation may not execute. at System.ThrowHelper.ThrowInvalidOperationException (System.ExceptionResource resource) [0x0000b] in <567df3e0919241ba98db88bec4c6696f>:0 at System.Collections.Generic.List`1+Enumerator[T].MoveNextRare () [0x00013] in <567df3e0919241ba98db88bec4c6696f>:0 at System.Collections.Generic.List`1+Enumerator[T].MoveNext () [0x0004a] in <567df3e0919241ba98db88bec4c6696f>:0 at Replace_Stuff.NewThing.NewThingReplacement.IsNewThingReplacement (Verse.ThingDef newDef, Verse.IntVec3 pos, Verse.Rot4 rotation, Verse.Map map, Verse.Thing& oldThing) [0x0004c] in <dd5f6b372d7741358d6491f16fb78d1f>:0 at Replace_Stuff.NewThing.TransferSettings.Prefix (Verse.Thing newThing, Verse.IntVec3 loc, Verse.Map map, Verse.Rot4 rot, System.Boolean respawningAfterLoad, Verse.Thing& __state) [0x00009] in <dd5f6b372d7741358d6491f16fb78d1f>:0 at (wrapper dynamic-method) Verse.GenSpawn.Verse.GenSpawn.Spawn_Patch2(Verse.Thing,Verse.IntVec3,Verse.Map,Verse.Rot4,Verse.WipeMode,bool) at Verse.GenSpawn.Spawn (Verse.Thing newThing, Verse.IntVec3 loc, Verse.Map map, Verse.WipeMode wipeMode) [0x00008] in <d72310b4d8f64d25aee502792b58549f>:0 at RimWorld.MoteMaker.ThrowAirPuffUp (UnityEngine.Vector3 loc, Verse.Map map) [0x00086] in <d72310b4d8f64d25aee502792b58549f>:0 at RimWorld.IntermittentSteamSprayer.SteamSprayerTick () [0x0003c] in <d72310b4d8f64d25aee502792b58549f>:0 at RimWorld.Building_SteamGeyser.Tick () [0x00008] in <d72310b4d8f64d25aee502792b58549f>:0 at RimThreaded.RimThreaded.ExecuteTicks () [0x00105] in <08216f895b4b4c66b9b6902b448df752>:0 Verse.Log:Error(String, Boolean) RimThreaded.RimThreaded:ExecuteTicks() RimThreaded.RimThreaded:ProcessTicks(ThreadInfo) RimThreaded.RimThreaded:InitializeThread(ThreadInfo) RimThreaded.<>c__DisplayClass80_0:<CreateWorkerThread>b__0() System.Threading.ThreadHelper:ThreadStart_Context(Object) System.Threading.ExecutionContext:RunInternal(ExecutionContext, ContextCallback, Object, Boolean) System.Threading.ExecutionContext:Run(ExecutionContext, ContextCallback, Object, Boolean) System.Threading.ExecutionContext:Run(ExecutionContext, ContextCallback, Object) System.Threading.ThreadHelper:ThreadStart() ``` https://gist.github.com/42ef0429766400500ba3a1d859319fce **Mod List** see log https://steamcommunity.com/sharedfiles/filedetails/?id=1372003680&searchtext=replace **Screenshots** NA
index: 1.0
text_combine: "Replace Stuff" pretty frequent errors - **Describe the bug** IMPORTANT: Please first search existing bugs to ensure you are not creating a duplicate bug report. errors with some frequency **To Reproduce (VERY IMPORTANT)** Steps to reproduce the behavior: 1. load any save 2. hit play 3. see errors **Error Log** ``` Exception ticking SteamGeyser255415 (at (258, 0, 198)): System.InvalidOperationException: Collection was modified; enumeration operation may not execute. at System.ThrowHelper.ThrowInvalidOperationException (System.ExceptionResource resource) [0x0000b] in <567df3e0919241ba98db88bec4c6696f>:0 at System.Collections.Generic.List`1+Enumerator[T].MoveNextRare () [0x00013] in <567df3e0919241ba98db88bec4c6696f>:0 at System.Collections.Generic.List`1+Enumerator[T].MoveNext () [0x0004a] in <567df3e0919241ba98db88bec4c6696f>:0 at Replace_Stuff.NewThing.NewThingReplacement.IsNewThingReplacement (Verse.ThingDef newDef, Verse.IntVec3 pos, Verse.Rot4 rotation, Verse.Map map, Verse.Thing& oldThing) [0x0004c] in <dd5f6b372d7741358d6491f16fb78d1f>:0 at Replace_Stuff.NewThing.TransferSettings.Prefix (Verse.Thing newThing, Verse.IntVec3 loc, Verse.Map map, Verse.Rot4 rot, System.Boolean respawningAfterLoad, Verse.Thing& __state) [0x00009] in <dd5f6b372d7741358d6491f16fb78d1f>:0 at (wrapper dynamic-method) Verse.GenSpawn.Verse.GenSpawn.Spawn_Patch2(Verse.Thing,Verse.IntVec3,Verse.Map,Verse.Rot4,Verse.WipeMode,bool) at Verse.GenSpawn.Spawn (Verse.Thing newThing, Verse.IntVec3 loc, Verse.Map map, Verse.WipeMode wipeMode) [0x00008] in <d72310b4d8f64d25aee502792b58549f>:0 at RimWorld.MoteMaker.ThrowAirPuffUp (UnityEngine.Vector3 loc, Verse.Map map) [0x00086] in <d72310b4d8f64d25aee502792b58549f>:0 at RimWorld.IntermittentSteamSprayer.SteamSprayerTick () [0x0003c] in <d72310b4d8f64d25aee502792b58549f>:0 at RimWorld.Building_SteamGeyser.Tick () [0x00008] in <d72310b4d8f64d25aee502792b58549f>:0 at RimThreaded.RimThreaded.ExecuteTicks () [0x00105] in <08216f895b4b4c66b9b6902b448df752>:0 Verse.Log:Error(String, Boolean) RimThreaded.RimThreaded:ExecuteTicks() RimThreaded.RimThreaded:ProcessTicks(ThreadInfo) RimThreaded.RimThreaded:InitializeThread(ThreadInfo) RimThreaded.<>c__DisplayClass80_0:<CreateWorkerThread>b__0() System.Threading.ThreadHelper:ThreadStart_Context(Object) System.Threading.ExecutionContext:RunInternal(ExecutionContext, ContextCallback, Object, Boolean) System.Threading.ExecutionContext:Run(ExecutionContext, ContextCallback, Object, Boolean) System.Threading.ExecutionContext:Run(ExecutionContext, ContextCallback, Object) System.Threading.ThreadHelper:ThreadStart() ``` https://gist.github.com/42ef0429766400500ba3a1d859319fce **Mod List** see log https://steamcommunity.com/sharedfiles/filedetails/?id=1372003680&searchtext=replace **Screenshots** NA
label: test
text: replace stuff pretty frequent errors describe the bug important please first search existing bugs to ensure you are not creating a duplicate bug report errors with some frequency to reproduce very important steps to reproduce the behavior load any save hit play see errors error log exception ticking at system invalidoperationexception collection was modified enumeration operation may not execute at system throwhelper throwinvalidoperationexception system exceptionresource resource in at system collections generic list enumerator movenextrare in at system collections generic list enumerator movenext in at replace stuff newthing newthingreplacement isnewthingreplacement verse thingdef newdef verse pos verse rotation verse map map verse thing oldthing in at replace stuff newthing transfersettings prefix verse thing newthing verse loc verse map map verse rot system boolean respawningafterload verse thing state in at wrapper dynamic method verse genspawn verse genspawn spawn verse thing verse verse map verse verse wipemode bool at verse genspawn spawn verse thing newthing verse loc verse map map verse wipemode wipemode in at rimworld motemaker throwairpuffup unityengine loc verse map map in at rimworld intermittentsteamsprayer steamsprayertick in at rimworld building steamgeyser tick in at rimthreaded rimthreaded executeticks in verse log error string boolean rimthreaded rimthreaded executeticks rimthreaded rimthreaded processticks threadinfo rimthreaded rimthreaded initializethread threadinfo rimthreaded c b system threading threadhelper threadstart context object system threading executioncontext runinternal executioncontext contextcallback object boolean system threading executioncontext run executioncontext contextcallback object boolean system threading executioncontext run executioncontext contextcallback object system threading threadhelper threadstart mod list see log screenshots na
1
309,219
26,657,138,345
IssuesEvent
2023-01-25 17:44:12
PalisadoesFoundation/talawa
https://api.github.com/repos/PalisadoesFoundation/talawa
closed
Complete Code Coverage for View Models
unapproved test parent
The Talawa code base needs to be 100% reliable. This means we need to have We 100% unittest code coverage. This is a parent issue for all View Model unittest issues. We will be creating one issue per related file with the expectation that widgets referenced in these files must also have unittests. - We'll only assign the child issues. - Please comment on one of the child issues for assignment. - Only one child issue will be assigned to a person at any one time. Parent Issue: - https://github.com/PalisadoesFoundation/talawa/issues/1124 Child Issues: - https://github.com/PalisadoesFoundation/talawa/issues/1002 - https://github.com/PalisadoesFoundation/talawa/issues/1003 - https://github.com/PalisadoesFoundation/talawa/issues/1023 - https://github.com/PalisadoesFoundation/talawa/issues/1018 - https://github.com/PalisadoesFoundation/talawa/issues/1119
1.0
Complete Code Coverage for View Models - The Talawa code base needs to be 100% reliable. This means we need to have We 100% unittest code coverage. This is a parent issue for all View Model unittest issues. We will be creating one issue per related file with the expectation that widgets referenced in these files must also have unittests. - We'll only assign the child issues. - Please comment on one of the child issues for assignment. - Only one child issue will be assigned to a person at any one time. Parent Issue: - https://github.com/PalisadoesFoundation/talawa/issues/1124 Child Issues: - https://github.com/PalisadoesFoundation/talawa/issues/1002 - https://github.com/PalisadoesFoundation/talawa/issues/1003 - https://github.com/PalisadoesFoundation/talawa/issues/1023 - https://github.com/PalisadoesFoundation/talawa/issues/1018 - https://github.com/PalisadoesFoundation/talawa/issues/1119
test
complete code coverage for view models the talawa code base needs to be reliable this means we need to have we unittest code coverage this is a parent issue for all view model unittest issues we will be creating one issue per related file with the expectation that widgets referenced in these files must also have unittests we ll only assign the child issues please comment on one of the child issues for assignment only one child issue will be assigned to a person at any one time parent issue child issues
1
298,171
25,794,562,013
IssuesEvent
2022-12-10 12:11:56
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: sequelize failed
C-test-failure O-robot O-roachtest release-blocker branch-release-21.2
roachtest.sequelize [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7909414&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7909414&tab=artifacts#/sequelize) on release-21.2 @ [14dff4d7f9832f50b4844ac1a79c96e4b06f521c](https://github.com/cockroachdb/cockroach/commits/14dff4d7f9832f50b4844ac1a79c96e4b06f521c): ``` The test failed on branch=release-21.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sequelize/run_1 sequelize.go:138,sequelize.go:158,test_runner.go:777: all attempts failed for install dependencies due to error: output in run_121131.636286404_n1_cd: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-7909414-1670655128-72-n1cpu4:1 -- cd /mnt/data1/sequelize && sudo npm i returned: exit status 20 ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #91663 roachtest: sequelize failed [C-test-failure O-roachtest O-robot T-sql-experience deprecated-branch-release-22.2.0] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sequelize.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: sequelize failed - roachtest.sequelize [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7909414&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7909414&tab=artifacts#/sequelize) on release-21.2 @ [14dff4d7f9832f50b4844ac1a79c96e4b06f521c](https://github.com/cockroachdb/cockroach/commits/14dff4d7f9832f50b4844ac1a79c96e4b06f521c): ``` The test failed on branch=release-21.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sequelize/run_1 sequelize.go:138,sequelize.go:158,test_runner.go:777: all attempts failed for install dependencies due to error: output in run_121131.636286404_n1_cd: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-7909414-1670655128-72-n1cpu4:1 -- cd /mnt/data1/sequelize && sudo npm i returned: exit status 20 ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #91663 roachtest: sequelize failed [C-test-failure O-roachtest O-robot T-sql-experience deprecated-branch-release-22.2.0] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sequelize.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
roachtest sequelize failed roachtest sequelize with on release the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sequelize run sequelize go sequelize go test runner go all attempts failed for install dependencies due to error output in run cd home agent work go src github com cockroachdb cockroach bin roachprod run teamcity cd mnt sequelize sudo npm i returned exit status reproduce see same failure on other branches roachtest sequelize failed cc cockroachdb sql experience
1
20,693
6,916,523,547
IssuesEvent
2017-11-29 03:04:15
opencv/opencv
https://api.github.com/repos/opencv/opencv
opened
MacOSX: LAPACK support detection doesn't work
bug category: build/install category: ios/osx
Since this [build](http://pullrequest.opencv.org/buildbot/builders/master_noOCL-mac/builds/10266): ``` -- A library with BLAS API found. -- Looking for cheev_ -- Looking for cheev_ - found -- A library with LAPACK API found. -- LAPACK(LAPACK/Apple): LAPACK_LIBRARIES: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/System/Library/Frameworks/Accelerate.framework;/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/System/Library/Frameworks/Accelerate.framework -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- LAPACK(LAPACK/Apple): Can't build LAPACK check code. This LAPACK version is not supported. -- LAPACK(Apple): LAPACK_LIBRARIES: -framework Accelerate -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- LAPACK(Apple): Can't build LAPACK check code. This LAPACK version is not supported. ```
1.0
MacOSX: LAPACK support detection doesn't work - Since this [build](http://pullrequest.opencv.org/buildbot/builders/master_noOCL-mac/builds/10266): ``` -- A library with BLAS API found. -- Looking for cheev_ -- Looking for cheev_ - found -- A library with LAPACK API found. -- LAPACK(LAPACK/Apple): LAPACK_LIBRARIES: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/System/Library/Frameworks/Accelerate.framework;/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/System/Library/Frameworks/Accelerate.framework -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- LAPACK(LAPACK/Apple): Can't build LAPACK check code. This LAPACK version is not supported. -- LAPACK(Apple): LAPACK_LIBRARIES: -framework Accelerate -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- Looking for Accelerate/Accelerate.h -- Looking for Accelerate/Accelerate.h - found -- LAPACK(Apple): Can't build LAPACK check code. This LAPACK version is not supported. ```
non_test
macosx lapack support detection doesn t work since this a library with blas api found looking for cheev looking for cheev found a library with lapack api found lapack lapack apple lapack libraries applications xcode app contents developer platforms macosx platform developer sdks sdk system library frameworks accelerate framework applications xcode app contents developer platforms macosx platform developer sdks sdk system library frameworks accelerate framework looking for accelerate accelerate h looking for accelerate accelerate h found looking for accelerate accelerate h looking for accelerate accelerate h found lapack lapack apple can t build lapack check code this lapack version is not supported lapack apple lapack libraries framework accelerate looking for accelerate accelerate h looking for accelerate accelerate h found looking for accelerate accelerate h looking for accelerate accelerate h found lapack apple can t build lapack check code this lapack version is not supported
0
346,182
30,871,843,626
IssuesEvent
2023-08-03 11:55:15
PerfectFit-project/virtual-coach-issues
https://api.github.com/repos/PerfectFit-project/virtual-coach-issues
closed
Check activity benefits in db
bug testing ticket
From here: https://github.com/PerfectFit-project/testing-tickets/issues/52. It is important that the benefits of the activities do not start with "om", because the start of the sentence about benefits already contains "om": https://github.com/PerfectFit-project/virtual-coach-rasa/blob/main/Rasa_Bot/actions/definitions.py.
1.0
Check activity benefits in db - From here: https://github.com/PerfectFit-project/testing-tickets/issues/52. It is important that the benefits of the activities do not start with "om", because the start of the sentence about benefits already contains "om": https://github.com/PerfectFit-project/virtual-coach-rasa/blob/main/Rasa_Bot/actions/definitions.py.
test
check activity benefits in db from here it is important that the benefits of the activities do not start with om because the start of the sentence about benefits already contains om
1
324,873
27,826,467,729
IssuesEvent
2023-03-19 20:26:44
SuperCowPowers/sageworks
https://api.github.com/repos/SuperCowPowers/sageworks
closed
Pandas To Feature Set Test Code
transform feature_set testing
Right now the test code for `pandas_to_features.py` is mostly disabled. After the class is finished, go back and enable the rest of the tests. ``` # FIXME: This rest if this test is disabled for now ```
1.0
Pandas To Feature Set Test Code - Right now the test code for `pandas_to_features.py` is mostly disabled. After the class is finished, go back and enable the rest of the tests. ``` # FIXME: This rest if this test is disabled for now ```
test
pandas to feature set test code right now the test code for pandas to features py is mostly disabled after the class is finished go back and enable the rest of the tests fixme this rest if this test is disabled for now
1
79,500
10,130,285,009
IssuesEvent
2019-08-01 16:34:31
bd1887/wine-reviews
https://api.github.com/repos/bd1887/wine-reviews
closed
Django Documentation
documentation
# Create a README explaining the following: * How to clone the repository * How to create a virtual environment * How to install requirements.txt * How to migrate the database
1.0
Django Documentation - # Create a README explaining the following: * How to clone the repository * How to create a virtual environment * How to install requirements.txt * How to migrate the database
non_test
django documentation create a readme explaining the following how to clone the repository how to create a virtual environment how to install requirements txt how to migrate the database
0
159,415
12,475,070,369
IssuesEvent
2020-05-29 10:50:33
aliasrobotics/RVD
https://api.github.com/repos/aliasrobotics/RVD
closed
RVD#2035: Use of insecure MD2, MD4, MD5, or SHA1 hash function., /opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81
bandit bug static analysis testing triage
```yaml { "id": 2035, "title": "RVD#2035: Use of insecure MD2, MD4, MD5, or SHA1 hash function., /opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81", "type": "bug", "description": "HIGH confidence of MEDIUM severity bug. Use of insecure MD2, MD4, MD5, or SHA1 hash function. at /opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81 See links for more info on the bug.", "cwe": "None", "cve": "None", "keywords": [ "bandit", "bug", "static analysis", "testing", "triage", "bug" ], "system": "", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": [ "https://github.com/aliasrobotics/RVD/issues/2035", "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b303-md5" ], "flaw": { "phase": "testing", "specificity": "subject-specific", "architectural-location": "application-specific", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-29 (09:20)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-29 (09:20)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "https://github.com/aliasrobotics/RVD/issues/2035", "reproducibility": "always", "trace": "/opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
1.0
RVD#2035: Use of insecure MD2, MD4, MD5, or SHA1 hash function., /opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81 - ```yaml { "id": 2035, "title": "RVD#2035: Use of insecure MD2, MD4, MD5, or SHA1 hash function., /opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81", "type": "bug", "description": "HIGH confidence of MEDIUM severity bug. Use of insecure MD2, MD4, MD5, or SHA1 hash function. at /opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81 See links for more info on the bug.", "cwe": "None", "cve": "None", "keywords": [ "bandit", "bug", "static analysis", "testing", "triage", "bug" ], "system": "", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": [ "https://github.com/aliasrobotics/RVD/issues/2035", "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b303-md5" ], "flaw": { "phase": "testing", "specificity": "subject-specific", "architectural-location": "application-specific", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-29 (09:20)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-29 (09:20)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "https://github.com/aliasrobotics/RVD/issues/2035", "reproducibility": "always", "trace": "/opt/ros_noetic_ws/src/ros/rosbuild/core/rosbuild/bin/download_checkmd5.py:81", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
test
rvd use of insecure or hash function opt ros noetic ws src ros rosbuild core rosbuild bin download py yaml id title rvd use of insecure or hash function opt ros noetic ws src ros rosbuild core rosbuild bin download py type bug description high confidence of medium severity bug use of insecure or hash function at opt ros noetic ws src ros rosbuild core rosbuild bin download py see links for more info on the bug cwe none cve none keywords bandit bug static analysis testing triage bug system vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace opt ros noetic ws src ros rosbuild core rosbuild bin download py reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
1
225,905
17,929,496,553
IssuesEvent
2021-09-10 07:16:08
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed
C-test-failure O-robot O-roachtest branch-release-21.2
roachtest.scbench/randomload/nodes=3/ops=10000/conc=20 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3421825&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3421825&tab=artifacts#/scbench/randomload/nodes=3/ops=10000/conc=20) on release-21.2 @ [99a4816fc272228a63df20dae3cc41d235e705f3](https://github.com/cockroachdb/cockroach/commits/99a4816fc272228a63df20dae3cc41d235e705f3): ``` | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.030643", | "ops": [ | "BEGIN", | "DROP SEQUENCE public.seq573" | ], | "expectedExecErrors": "42P01", | "expectedCommitErrors": "", | "message": "ROLLBACK; Successfully got expected execution error: ERROR: relation \"public.seq573\" does not exist (SQLSTATE 42P01)" | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.135861", | "ops": [ | "BEGIN", | "DROP TABLE IF EXISTS public.table550 CASCADE", | "ALTER DATABASE schemachange ADD REGION \"us-east1\"" | ], | "expectedExecErrors": "42710", | "expectedCommitErrors": "", | "message": "ROLLBACK; Successfully got expected execution error: ERROR: region \"us-east1\" already added to database (SQLSTATE 42710)" | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.068299", | "ops": [ | "BEGIN", | "CREATE TABLE IF NOT EXISTS public.table603 (col603_604 NAME NULL, col603_605 GEOMETRY, col603_606 UUID, col603_607 FLOAT4 NULL, col603_608 TIMESTAMPTZ, col603_609 JSONB NULL, col603_610 FLOAT4, col603_611 \"char\" NULL, col603_612 INTERVAL NULL, col603_613 NAME NOT NULL, col603_614 REGCLASS NULL, col603_615 REGROLE NOT NULL, col603_616 REGPROCEDURE NOT NULL, col603_617 FLOAT4 NULL AS (col603_607 + col603_610) VIRTUAL, col603_618 STRING NULL AS (lower(col603_604)) STORED, col603_619 FLOAT4 NULL AS (col603_607 + col603_610) VIRTUAL, col603_620 FLOAT4 AS (col603_610 + col603_607) VIRTUAL, col603_621 STRING AS (lower(CAST(col603_605 AS STRING))) STORED, col603_622 STRING NOT NULL AS (lower(CAST(col603_616 AS STRING))) STORED, UNIQUE 
(lower(CAST(col603_614 AS STRING)), col603_621 ASC, col603_604, col603_619 ASC, col603_607 ASC, col603_622 ASC, col603_618 DESC), UNIQUE (col603_611), FAMILY (col603_612, col603_606), FAMILY (col603_607, col603_621), FAMILY (col603_610), FAMILY (col603_605, col603_618, col603_616), FAMILY (col603_615, col603_609), FAMILY (col603_608), FAMILY (col603_622), FAMILY (col603_611), FAMILY (col603_614, col603_613), FAMILY (col603_604))", | "DROP SCHEMA \"schema625\" CASCADE" | ], | "expectedExecErrors": "3F000", | "expectedCommitErrors": "", | "message": "ROLLBACK; Successfully got expected execution error: ERROR: unknown schema \"schema625\" (SQLSTATE 3F000)" | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.126758", | "ops": [ | "BEGIN", | "DROP VIEW IF EXISTS public.view593", | "DROP TABLE IF EXISTS public.table550 CASCADE" | ], | "expectedExecErrors": "", | "expectedCommitErrors": "", | "message": "***UNEXPECTED ERROR; Failed to generate a random operation: ERROR: relation \"public.table550\" does not exist (SQLSTATE 42P01)" | } Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *exec.ExitError ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #63496 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-63484] - #63373 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-release-20.2] - #61688 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-release-21.1] - #56230 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-master] </p> </details> /cc 
@cockroachdb/sql-schema <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*scbench/randomload/nodes=3/ops=10000/conc=20.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed - roachtest.scbench/randomload/nodes=3/ops=10000/conc=20 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3421825&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3421825&tab=artifacts#/scbench/randomload/nodes=3/ops=10000/conc=20) on release-21.2 @ [99a4816fc272228a63df20dae3cc41d235e705f3](https://github.com/cockroachdb/cockroach/commits/99a4816fc272228a63df20dae3cc41d235e705f3): ``` | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.030643", | "ops": [ | "BEGIN", | "DROP SEQUENCE public.seq573" | ], | "expectedExecErrors": "42P01", | "expectedCommitErrors": "", | "message": "ROLLBACK; Successfully got expected execution error: ERROR: relation \"public.seq573\" does not exist (SQLSTATE 42P01)" | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.135861", | "ops": [ | "BEGIN", | "DROP TABLE IF EXISTS public.table550 CASCADE", | "ALTER DATABASE schemachange ADD REGION \"us-east1\"" | ], | "expectedExecErrors": "42710", | "expectedCommitErrors": "", | "message": "ROLLBACK; Successfully got expected execution error: ERROR: region \"us-east1\" already added to database (SQLSTATE 42710)" | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.068299", | "ops": [ | "BEGIN", | "CREATE TABLE IF NOT EXISTS public.table603 (col603_604 NAME NULL, col603_605 GEOMETRY, col603_606 UUID, col603_607 FLOAT4 NULL, col603_608 TIMESTAMPTZ, col603_609 JSONB NULL, col603_610 FLOAT4, col603_611 \"char\" NULL, col603_612 INTERVAL NULL, col603_613 NAME NOT NULL, col603_614 REGCLASS NULL, col603_615 REGROLE NOT NULL, col603_616 REGPROCEDURE NOT NULL, col603_617 FLOAT4 NULL AS (col603_607 + col603_610) VIRTUAL, col603_618 STRING NULL AS (lower(col603_604)) STORED, col603_619 FLOAT4 NULL AS (col603_607 + col603_610) VIRTUAL, col603_620 FLOAT4 AS (col603_610 + col603_607) VIRTUAL, col603_621 STRING AS (lower(CAST(col603_605 AS STRING))) STORED, col603_622 STRING NOT NULL AS 
(lower(CAST(col603_616 AS STRING))) STORED, UNIQUE (lower(CAST(col603_614 AS STRING)), col603_621 ASC, col603_604, col603_619 ASC, col603_607 ASC, col603_622 ASC, col603_618 DESC), UNIQUE (col603_611), FAMILY (col603_612, col603_606), FAMILY (col603_607, col603_621), FAMILY (col603_610), FAMILY (col603_605, col603_618, col603_616), FAMILY (col603_615, col603_609), FAMILY (col603_608), FAMILY (col603_622), FAMILY (col603_611), FAMILY (col603_614, col603_613), FAMILY (col603_604))", | "DROP SCHEMA \"schema625\" CASCADE" | ], | "expectedExecErrors": "3F000", | "expectedCommitErrors": "", | "message": "ROLLBACK; Successfully got expected execution error: ERROR: unknown schema \"schema625\" (SQLSTATE 3F000)" | } | { | "workerId": 0, | "clientTimestamp": "07:15:46.126758", | "ops": [ | "BEGIN", | "DROP VIEW IF EXISTS public.view593", | "DROP TABLE IF EXISTS public.table550 CASCADE" | ], | "expectedExecErrors": "", | "expectedCommitErrors": "", | "message": "***UNEXPECTED ERROR; Failed to generate a random operation: ERROR: relation \"public.table550\" does not exist (SQLSTATE 42P01)" | } Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *exec.ExitError ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #63496 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-63484] - #63373 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-release-20.2] - #61688 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot T-sql-schema branch-release-21.1] - #56230 roachtest: scbench/randomload/nodes=3/ops=10000/conc=20 failed [C-test-failure O-roachtest O-robot 
T-sql-schema branch-master] </p> </details> /cc @cockroachdb/sql-schema <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*scbench/randomload/nodes=3/ops=10000/conc=20.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
roachtest scbench randomload nodes ops conc failed roachtest scbench randomload nodes ops conc with on release workerid clienttimestamp ops begin drop sequence public expectedexecerrors expectedcommiterrors message rollback successfully got expected execution error error relation public does not exist sqlstate workerid clienttimestamp ops begin drop table if exists public cascade alter database schemachange add region us expectedexecerrors expectedcommiterrors message rollback successfully got expected execution error error region us already added to database sqlstate workerid clienttimestamp ops begin create table if not exists public name null geometry uuid null timestamptz jsonb null char null interval null name not null regclass null regrole not null regprocedure not null null as virtual string null as lower stored null as virtual as virtual string as lower cast as string stored string not null as lower cast as string stored unique lower cast as string asc asc asc asc desc unique family family family family family family family family family family drop schema cascade expectedexecerrors expectedcommiterrors message rollback successfully got expected execution error error unknown schema sqlstate workerid clienttimestamp ops begin drop view if exists public drop table if exists public cascade expectedexecerrors expectedcommiterrors message unexpected error failed to generate a random operation error relation public does not exist sqlstate wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails exec exiterror reproduce see same failure on other branches roachtest scbench randomload nodes ops conc failed roachtest scbench randomload nodes ops conc failed roachtest scbench randomload nodes ops conc failed roachtest scbench randomload nodes ops conc failed cc cockroachdb sql schema
1
283,115
21,316,042,022
IssuesEvent
2022-04-16 09:40:06
allyfern72/pe
https://api.github.com/repos/allyfern72/pe
opened
[DG] Typo in `printStaff()` sequence diagram
severity.VeryLow type.DocumentationBug
It looks like it should have been "for each staff" instead of "for each dish" in the loop frame. ![image.png](https://raw.githubusercontent.com/allyfern72/pe/main/files/d0db64c6-a921-40bd-ba67-ab6ea3579fe4.png) <!--session: 1650096024879-64fa4b51-68b4-4157-8052-f142c2406b7b--> <!--Version: Web v3.4.2-->
1.0
[DG] Typo in `printStaff()` sequence diagram - It looks like it should have been "for each staff" instead of "for each dish" in the loop frame. ![image.png](https://raw.githubusercontent.com/allyfern72/pe/main/files/d0db64c6-a921-40bd-ba67-ab6ea3579fe4.png) <!--session: 1650096024879-64fa4b51-68b4-4157-8052-f142c2406b7b--> <!--Version: Web v3.4.2-->
non_test
typo in printstaff sequence diagram it looks like it should have been for each staff instead of for each dish in the loop frame
0
23,046
11,837,205,410
IssuesEvent
2020-03-23 13:50:29
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Built into a bot?
Pri1 cognitive-services/svc cxp product-question qna-maker/subsvc triaged
Is this something that's built into the default bot for Q&A? How do I enable rating suggestions in the bot? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: fcd7d96a-7def-6b8b-2753-2904737ac743 * Version Independent ID: e71a1b3f-da7c-e79e-8bdf-54a5ecf8d8ca * Content: [Active learning suggestions - QnA Maker - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/active-learning-suggestions) * Content Source: [articles/cognitive-services/QnAMaker/Concepts/active-learning-suggestions.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/QnAMaker/Concepts/active-learning-suggestions.md) * Service: **cognitive-services** * Sub-service: **qna-maker** * GitHub Login: @diberry * Microsoft Alias: **diberry**
1.0
Built into a bot? - Is this something that's built into the default bot for Q&A? How do I enable rating suggestions in the bot? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: fcd7d96a-7def-6b8b-2753-2904737ac743 * Version Independent ID: e71a1b3f-da7c-e79e-8bdf-54a5ecf8d8ca * Content: [Active learning suggestions - QnA Maker - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/active-learning-suggestions) * Content Source: [articles/cognitive-services/QnAMaker/Concepts/active-learning-suggestions.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/QnAMaker/Concepts/active-learning-suggestions.md) * Service: **cognitive-services** * Sub-service: **qna-maker** * GitHub Login: @diberry * Microsoft Alias: **diberry**
non_test
built into a bot is this something that s built into the default bot for q a how do i enable rating suggestions in the bot document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service qna maker github login diberry microsoft alias diberry
0
5,111
3,510,141,865
IssuesEvent
2016-01-09 07:31:06
SouthAfricaDigitalScience/python-deploy
https://api.github.com/repos/SouthAfricaDigitalScience/python-deploy
closed
python 3 check build fails on u1404
build failures
The python 3 build fails on u1404 with gcc-4 : It seems that python installation is not found : ``` Our python is /usr/bin/python ``` There is indeed `python3`... not sure if this needs to be fixed.
1.0
python 3 check build fails on u1404 - The python 3 build fails on u1404 with gcc-4 : It seems that python installation is not found : ``` Our python is /usr/bin/python ``` There is indeed `python3`... not sure if this needs to be fixed.
non_test
python check build fails on the python build fails on with gcc it seems that python installation is not found our python is usr bin python there is indeed not sure if this needs to be fixed
0
154,650
19,751,327,195
IssuesEvent
2022-01-15 04:56:24
turkdevops/atom
https://api.github.com/repos/turkdevops/atom
closed
CVE-2021-3807 (High) detected in ansi-regex-4.1.0.tgz, ansi-regex-3.0.0.tgz - autoclosed
security vulnerability
## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-3.0.0.tgz</b></p></summary> <p> <details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p> <p>Path to dependency file: /script/package.json</p> <p>Path to vulnerable library: /script/node_modules/inquirer/node_modules/strip-ansi/node_modules/ansi-regex/package.json,/script/node_modules/table/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - eslint-5.16.0.tgz (Root Library) - inquirer-6.3.1.tgz - strip-ansi-5.2.0.tgz - :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p> <p>Path to dependency file: /script/package.json</p> <p>Path to vulnerable library: 
/script/node_modules/npm/node_modules/string-width/node_modules/ansi-regex/package.json,/packages/update-package-dependencies/node_modules/table/node_modules/ansi-regex/package.json,/script/node_modules/@wdio/logger/node_modules/ansi-regex/package.json,/script/node_modules/eslint/node_modules/ansi-regex/package.json,/script/node_modules/inquirer/node_modules/ansi-regex/package.json,/script/node_modules/stylelint/node_modules/ansi-regex/package.json,/script/node_modules/npm/node_modules/cliui/node_modules/ansi-regex/package.json,/packages/one-light-ui/node_modules/ansi-regex/package.json,/packages/one-dark-ui/node_modules/ansi-regex/package.json,/packages/grammar-selector/node_modules/table/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - stylelint-9.3.0.tgz (Root Library) - string-width-2.1.1.tgz - strip-ansi-4.0.0.tgz - :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/turkdevops/atom/commit/1eff9f4173420e33aa6739fdce981e7651e8f212">1eff9f4173420e33aa6739fdce981e7651e8f212</a></p> <p>Found in base branch: <b>electron-upgrade</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on 
CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3807 (High) detected in ansi-regex-4.1.0.tgz, ansi-regex-3.0.0.tgz - autoclosed - ## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-3.0.0.tgz</b></p></summary> <p> <details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p> <p>Path to dependency file: /script/package.json</p> <p>Path to vulnerable library: /script/node_modules/inquirer/node_modules/strip-ansi/node_modules/ansi-regex/package.json,/script/node_modules/table/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - eslint-5.16.0.tgz (Root Library) - inquirer-6.3.1.tgz - strip-ansi-5.2.0.tgz - :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p> <p>Path to dependency file: /script/package.json</p> <p>Path to vulnerable library: 
/script/node_modules/npm/node_modules/string-width/node_modules/ansi-regex/package.json,/packages/update-package-dependencies/node_modules/table/node_modules/ansi-regex/package.json,/script/node_modules/@wdio/logger/node_modules/ansi-regex/package.json,/script/node_modules/eslint/node_modules/ansi-regex/package.json,/script/node_modules/inquirer/node_modules/ansi-regex/package.json,/script/node_modules/stylelint/node_modules/ansi-regex/package.json,/script/node_modules/npm/node_modules/cliui/node_modules/ansi-regex/package.json,/packages/one-light-ui/node_modules/ansi-regex/package.json,/packages/one-dark-ui/node_modules/ansi-regex/package.json,/packages/grammar-selector/node_modules/table/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - stylelint-9.3.0.tgz (Root Library) - string-width-2.1.1.tgz - strip-ansi-4.0.0.tgz - :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/turkdevops/atom/commit/1eff9f4173420e33aa6739fdce981e7651e8f212">1eff9f4173420e33aa6739fdce981e7651e8f212</a></p> <p>Found in base branch: <b>electron-upgrade</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on 
CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in ansi regex tgz ansi regex tgz autoclosed cve high severity vulnerability vulnerable libraries ansi regex tgz ansi regex tgz ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file script package json path to vulnerable library script node modules inquirer node modules strip ansi node modules ansi regex package json script node modules table node modules ansi regex package json dependency hierarchy eslint tgz root library inquirer tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file script package json path to vulnerable library script node modules npm node modules string width node modules ansi regex package json packages update package dependencies node modules table node modules ansi regex package json script node modules wdio logger node modules ansi regex package json script node modules eslint node modules ansi regex package json script node modules inquirer node modules ansi regex package json script node modules stylelint node modules ansi regex package json script node modules npm node modules cliui node modules ansi regex package json packages one light ui node modules ansi regex package json packages one dark ui node modules ansi regex package json packages grammar selector node modules table node modules ansi regex package json dependency hierarchy stylelint tgz root library string width tgz strip ansi tgz x ansi regex tgz vulnerable library found in head commit a href found in base branch electron upgrade vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability 
impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex step up your open source security game with whitesource
0
47,655
5,906,833,958
IssuesEvent
2017-05-19 16:04:21
openshift/origin
https://api.github.com/repos/openshift/origin
closed
jUnit generation for integration tests aliases return code
area/tests component/internal-tools priority/P0
This leads to failed tests in the jUnit report but successful test results and `UNSTABLE` job like [here](https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_check/1970/consoleFull#-54338817556bf4006e4b05b79524e5923). /cc @jhadvig
1.0
jUnit generation for integration tests aliases return code - This leads to failed tests in the jUnit report but successful test results and `UNSTABLE` job like [here](https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_check/1970/consoleFull#-54338817556bf4006e4b05b79524e5923). /cc @jhadvig
test
junit generation for integration tests aliases return code this leads to failed tests in the junit report but successful test results and unstable job like cc jhadvig
1
295,813
25,507,234,682
IssuesEvent
2022-11-28 10:21:32
gardener/gardener
https://api.github.com/repos/gardener/gardener
closed
[Flaky Test] Extensions Controller BackupBucket Integration Test Suite
area/testing kind/flake
<!-- Please only use this template for submitting reports about flaky tests or jobs (pass or fail with no underlying change in code) in Gardener CI --> **How to categorize this issue?** <!-- Please select area, kind, and priority for this issue. This helps the community categorizing it. Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion. If multiple identifiers make sense you can also state the commands multiple times, e.g. /area control-plane /area auto-scaling ... "/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management "/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test --> /area testing /kind flake **Which test(s)/suite(s) are flaking**: `BackupBucket should successfully create and delete a BackupBucket (respecting operation annotation)` **CI link**: https://prow.gardener.cloud/view/gs/gardener-prow/pr-logs/pull/gardener_gardener/6902/pull-gardener-integration/1585137129640431616#1:build-log.txt%3A131-219 **Reason for failure**: The test never observes the BackupBucket object to transition to `status.lastOperation.state=Error` but instead sees `Processing`. This could be caused by the exponential backoff (the requested error causes the controller to retry). **Anything else we need to know**:
1.0
[Flaky Test] Extensions Controller BackupBucket Integration Test Suite - <!-- Please only use this template for submitting reports about flaky tests or jobs (pass or fail with no underlying change in code) in Gardener CI --> **How to categorize this issue?** <!-- Please select area, kind, and priority for this issue. This helps the community categorizing it. Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion. If multiple identifiers make sense you can also state the commands multiple times, e.g. /area control-plane /area auto-scaling ... "/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management "/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test --> /area testing /kind flake **Which test(s)/suite(s) are flaking**: `BackupBucket should successfully create and delete a BackupBucket (respecting operation annotation)` **CI link**: https://prow.gardener.cloud/view/gs/gardener-prow/pr-logs/pull/gardener_gardener/6902/pull-gardener-integration/1585137129640431616#1:build-log.txt%3A131-219 **Reason for failure**: The test never observes the BackupBucket object to transition to `status.lastOperation.state=Error` but instead sees `Processing`. This could be caused by the exponential backoff (the requested error causes the controller to retry). **Anything else we need to know**:
test
extensions controller backupbucket integration test suite how to categorize this issue please select area kind and priority for this issue this helps the community categorizing it replace below todos or exchange the existing identifiers with those that fit best in your opinion if multiple identifiers make sense you can also state the commands multiple times e g area control plane area auto scaling area identifiers audit logging auto scaling backup certification control plane migration control plane cost delivery dev productivity disaster recovery documentation high availability logging metering monitoring networking open source ops productivity os performance quality robustness scalability security storage testing usability user management kind identifiers api change bug cleanup discussion enhancement epic impediment poc post mortem question regression task technical debt test area testing kind flake which test s suite s are flaking backupbucket should successfully create and delete a backupbucket respecting operation annotation ci link reason for failure the test never observes the backupbucket object to transition to status lastoperation state error but instead sees processing this could be caused by the exponential backoff the requested error causes the controller to retry anything else we need to know
1
177,874
13,750,987,239
IssuesEvent
2020-10-06 12:48:25
netsec-ethz/scionlab
https://api.github.com/repos/netsec-ethz/scionlab
closed
Profiling of UserAS pages and config generation
missing tests
Profiling of the DB models and queries. Can be based on the existing tests. Preferrably also using PostgreSQL. The goal is to detect big performance problems early and make the necessary adjustments to the data model.
1.0
Profiling of UserAS pages and config generation - Profiling of the DB models and queries. Can be based on the existing tests. Preferrably also using PostgreSQL. The goal is to detect big performance problems early and make the necessary adjustments to the data model.
test
profiling of useras pages and config generation profiling of the db models and queries can be based on the existing tests preferrably also using postgresql the goal is to detect big performance problems early and make the necessary adjustments to the data model
1
256,716
22,093,889,587
IssuesEvent
2022-06-01 08:25:46
wordpress-mobile/gutenberg-mobile
https://api.github.com/repos/wordpress-mobile/gutenberg-mobile
opened
[E2E Tests] - Tests failing with "[setOrientation("PORTRAIT")] Unexpected data in simpleCallback."
[Type] Bug Testing [Pri] Low
**Describe the bug** Not a bug in the app but to track/address flakiness in the E2E tests. Currently, some E2E tests are failing with this error "[setOrientation("PORTRAIT")] Unexpected data in simpleCallback.". This happens intermittently making the tests flaky. Not sure why this is happening, yet. Tracking it here with some examples for now. Examples: [1](https://app.circleci.com/pipelines/github/wordpress-mobile/gutenberg-mobile/17628/workflows/129d11fc-bf77-47ec-887f-d7b5b236de75/jobs/96447), [2](https://app.circleci.com/pipelines/github/wordpress-mobile/gutenberg-mobile/17614/workflows/644fc091-638d-483f-9cae-ab4ee92fa3a3/jobs/96410), [3](https://app.circleci.com/pipelines/github/wordpress-mobile/gutenberg-mobile/17598/workflows/74de424d-0e29-4c76-bcbe-04f457a4f4bd/jobs/96242)
1.0
[E2E Tests] - Tests failing with "[setOrientation("PORTRAIT")] Unexpected data in simpleCallback." - **Describe the bug** Not a bug in the app but to track/address flakiness in the E2E tests. Currently, some E2E tests are failing with this error "[setOrientation("PORTRAIT")] Unexpected data in simpleCallback.". This happens intermittently making the tests flaky. Not sure why this is happening, yet. Tracking it here with some examples for now. Examples: [1](https://app.circleci.com/pipelines/github/wordpress-mobile/gutenberg-mobile/17628/workflows/129d11fc-bf77-47ec-887f-d7b5b236de75/jobs/96447), [2](https://app.circleci.com/pipelines/github/wordpress-mobile/gutenberg-mobile/17614/workflows/644fc091-638d-483f-9cae-ab4ee92fa3a3/jobs/96410), [3](https://app.circleci.com/pipelines/github/wordpress-mobile/gutenberg-mobile/17598/workflows/74de424d-0e29-4c76-bcbe-04f457a4f4bd/jobs/96242)
test
tests failing with unexpected data in simplecallback describe the bug not a bug in the app but to track address flakiness in the tests currently some tests are failing with this error unexpected data in simplecallback this happens intermittently making the tests flaky not sure why this is happening yet tracking it here with some examples for now examples
1
50,262
6,076,706,923
IssuesEvent
2017-06-16 00:20:59
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}
kind/flake needs-sig priority/failing-test sig/node
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/814/ Failed: [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite} ``` /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:149 Timed out after 120.000s. Expected <*errors.errorString | 0xc420d48550>: { s: "failed to get oom_score_adj for 29213: strconv.ParseInt: parsing \"sudo: unable to resolve host ubuntu-trusty\\n1000\": invalid syntax", } to be nil /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:148 ```
1.0
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/814/ Failed: [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite} ``` /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:149 Timed out after 120.000s. Expected <*errors.errorString | 0xc420d48550>: { s: "failed to get oom_score_adj for 29213: strconv.ParseInt: parsing \"sudo: unable to resolve host ubuntu-trusty\\n1000\": invalid syntax", } to be nil /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:148 ```
test
container manager misc validate oom score adjustments once the node is setup pod infra containers oom score adj should be and best effort container s should be suite failed container manager misc validate oom score adjustments once the node is setup pod infra containers oom score adj should be and best effort container s should be suite go src io kubernetes output local go src io kubernetes test node container manager test go timed out after expected s failed to get oom score adj for strconv parseint parsing sudo unable to resolve host ubuntu trusty invalid syntax to be nil go src io kubernetes output local go src io kubernetes test node container manager test go
1
125,253
10,339,639,872
IssuesEvent
2019-09-03 19:51:08
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Browser Unit Tests.ML - time buckets - ML - time buckets "before each" hook for "returns correct interval for default target with hour bounds"
failed-test
A test failed on a tracked branch ``` [object Object] ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+6.7/JOB=x-pack-intake,node=immutable/2/) <!-- kibanaCiData = {"failed-test":{"test.class":"Browser Unit Tests.ML - time buckets","test.name":"ML - time buckets \"before each\" hook for \"returns correct interval for default target with hour bounds\"","test.failCount":12}} -->
1.0
Failing test: Browser Unit Tests.ML - time buckets - ML - time buckets "before each" hook for "returns correct interval for default target with hour bounds" - A test failed on a tracked branch ``` [object Object] ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+6.7/JOB=x-pack-intake,node=immutable/2/) <!-- kibanaCiData = {"failed-test":{"test.class":"Browser Unit Tests.ML - time buckets","test.name":"ML - time buckets \"before each\" hook for \"returns correct interval for default target with hour bounds\"","test.failCount":12}} -->
test
failing test browser unit tests ml time buckets ml time buckets before each hook for returns correct interval for default target with hour bounds a test failed on a tracked branch first failure
1
283,896
24,569,808,647
IssuesEvent
2022-10-13 07:44:45
agri-gaia/seerep
https://api.github.com/repos/agri-gaia/seerep
closed
Test protobuf write and load of images
test protobuf
Basically same as #71, but using the [Protocol Buffers](https://developers.google.com/protocol-buffers) specific functions: https://github.com/agri-gaia/seerep/blob/8295338243fd2b661d348e8e74b305bd557ce0b9/seerep-hdf5/seerep-hdf5-pb/include/seerep-hdf5-pb/hdf5-pb-image.h#L26 https://github.com/agri-gaia/seerep/blob/8295338243fd2b661d348e8e74b305bd557ce0b9/seerep-hdf5/seerep-hdf5-pb/include/seerep-hdf5-pb/hdf5-pb-image.h#L28
1.0
Test protobuf write and load of images - Basically same as #71, but using the [Protocol Buffers](https://developers.google.com/protocol-buffers) specific functions: https://github.com/agri-gaia/seerep/blob/8295338243fd2b661d348e8e74b305bd557ce0b9/seerep-hdf5/seerep-hdf5-pb/include/seerep-hdf5-pb/hdf5-pb-image.h#L26 https://github.com/agri-gaia/seerep/blob/8295338243fd2b661d348e8e74b305bd557ce0b9/seerep-hdf5/seerep-hdf5-pb/include/seerep-hdf5-pb/hdf5-pb-image.h#L28
test
test protobuf write and load of images basically same as but using the specific functions
1
26,903
6,812,710,107
IssuesEvent
2017-11-06 05:13:07
BTDF/DeploymentFramework
https://api.github.com/repos/BTDF/DeploymentFramework
closed
Create an MSBuild warning if create environment variable from environment settings task can't find a key in PropsFromEnvSettings
bug CodePlexMigrationInitiated General Impact: Medium Release 5.0
Create an MSBuild warning if the create environment variable from environment settings custom task can't find a key in PropsFromEnvSettings when it loads the settings file.  It currently throws a non-descriptive exception. #### This work item was migrated from CodePlex CodePlex work item ID: '3691' Assigned to: 'tfabraham' Vote count: '0'
1.0
Create an MSBuild warning if create environment variable from environment settings task can't find a key in PropsFromEnvSettings - Create an MSBuild warning if the create environment variable from environment settings custom task can't find a key in PropsFromEnvSettings when it loads the settings file.  It currently throws a non-descriptive exception. #### This work item was migrated from CodePlex CodePlex work item ID: '3691' Assigned to: 'tfabraham' Vote count: '0'
non_test
create an msbuild warning if create environment variable from environment settings task can t find a key in propsfromenvsettings create an msbuild warning if the create environment variable from environment settings custom task can t find a key in propsfromenvsettings when it loads the settings file   it currently throws a non descriptive exception this work item was migrated from codeplex codeplex work item id assigned to tfabraham vote count
0
282,716
24,492,953,211
IssuesEvent
2022-10-10 05:26:30
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: import/nodeShutdown/coordinator failed
C-test-failure O-robot O-roachtest release-blocker branch-release-22.2.0
roachtest.import/nodeShutdown/coordinator [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6859865?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6859865?buildTab=artifacts#/import/nodeShutdown/coordinator) on release-22.2.0 @ [776e9f889dec6fec9b4011e82c523c648b72254e](https://github.com/cockroachdb/cockroach/commits/776e9f889dec6fec9b4011e82c523c648b72254e): ``` test artifacts and logs in: /artifacts/import/nodeShutdown/coordinator/run_1 monitor.go:127,jobs.go:154,import.go:109,test_runner.go:930: monitor failure: monitor task failed: unexpectedly found job 803828990863015938 in state failed (1) attached stack trace -- stack trace: | main.(*monitorImpl).WaitE | main/pkg/cmd/roachtest/monitor.go:115 | main.(*monitorImpl).Wait | main/pkg/cmd/roachtest/monitor.go:123 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.jobSurvivesNodeShutdown | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jobs.go:154 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerImportNodeShutdown.func3 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/import.go:109 | main.(*testRunner).runTest.func2 | main/pkg/cmd/roachtest/test_runner.go:930 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitorImpl).wait.func2 | main/pkg/cmd/roachtest/monitor.go:171 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.jobSurvivesNodeShutdown.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jobs.go:95 | main.(*monitorImpl).Go.func1 | main/pkg/cmd/roachtest/monitor.go:105 | golang.org/x/sync/errgroup.(*Group).Go.func1 | golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:74 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1594 Wraps: (6) unexpectedly found job 
803828990863015938 in state failed Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/disaster-recovery <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*import/nodeShutdown/coordinator.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: import/nodeShutdown/coordinator failed - roachtest.import/nodeShutdown/coordinator [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6859865?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6859865?buildTab=artifacts#/import/nodeShutdown/coordinator) on release-22.2.0 @ [776e9f889dec6fec9b4011e82c523c648b72254e](https://github.com/cockroachdb/cockroach/commits/776e9f889dec6fec9b4011e82c523c648b72254e): ``` test artifacts and logs in: /artifacts/import/nodeShutdown/coordinator/run_1 monitor.go:127,jobs.go:154,import.go:109,test_runner.go:930: monitor failure: monitor task failed: unexpectedly found job 803828990863015938 in state failed (1) attached stack trace -- stack trace: | main.(*monitorImpl).WaitE | main/pkg/cmd/roachtest/monitor.go:115 | main.(*monitorImpl).Wait | main/pkg/cmd/roachtest/monitor.go:123 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.jobSurvivesNodeShutdown | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jobs.go:154 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerImportNodeShutdown.func3 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/import.go:109 | main.(*testRunner).runTest.func2 | main/pkg/cmd/roachtest/test_runner.go:930 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitorImpl).wait.func2 | main/pkg/cmd/roachtest/monitor.go:171 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.jobSurvivesNodeShutdown.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jobs.go:95 | main.(*monitorImpl).Go.func1 | main/pkg/cmd/roachtest/monitor.go:105 | golang.org/x/sync/errgroup.(*Group).Go.func1 | golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:74 | runtime.goexit | 
GOROOT/src/runtime/asm_amd64.s:1594 Wraps: (6) unexpectedly found job 803828990863015938 in state failed Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/disaster-recovery <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*import/nodeShutdown/coordinator.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
roachtest import nodeshutdown coordinator failed roachtest import nodeshutdown coordinator with on release test artifacts and logs in artifacts import nodeshutdown coordinator run monitor go jobs go import go test runner go monitor failure monitor task failed unexpectedly found job in state failed attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests jobsurvivesnodeshutdown github com cockroachdb cockroach pkg cmd roachtest tests jobs go github com cockroachdb cockroach pkg cmd roachtest tests registerimportnodeshutdown github com cockroachdb cockroach pkg cmd roachtest tests import go main testrunner runtest main pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go wraps monitor task failed wraps attached stack trace stack trace github com cockroachdb cockroach pkg cmd roachtest tests jobsurvivesnodeshutdown github com cockroachdb cockroach pkg cmd roachtest tests jobs go main monitorimpl go main pkg cmd roachtest monitor go golang org x sync errgroup group go golang org x sync errgroup external org golang x sync errgroup errgroup go runtime goexit goroot src runtime asm s wraps unexpectedly found job in state failed error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb disaster recovery
1
119,888
10,076,954,457
IssuesEvent
2019-07-24 17:31:33
SAP/cloud-commerce-spartacus-storefront
https://api.github.com/repos/SAP/cloud-commerce-spartacus-storefront
closed
Refactor checkout-flow e2e tests
e2e-tests
checkout-flow.e2e-spec.ts recently failed on Jenkins. I will be refactoring checkout-flow.ts as it is the helper class for all checkout tests. This test has always been inconsistent and I believe it's due to the size of the test. I'm going to attempt to make it consistent by applying cypress waits for every page redirect.
1.0
Refactor checkout-flow e2e tests - checkout-flow.e2e-spec.ts recently failed on Jenkins. I will be refactoring checkout-flow.ts as it is the helper class for all checkout tests. This test has always been inconsistent and I believe it's due to the size of the test. I'm going to attempt to make it consistent by applying cypress waits for every page redirect.
test
refactor checkout flow tests checkout flow spec ts recently failed on jenkins i will be refactoring checkout flow ts as it is the helper class for all checkout tests this test has always been inconsistent and i believe it s due to the size of the test i m going to attempt to make it consistent by applying cypress waits for every page redirect
1
68,691
7,107,628,030
IssuesEvent
2018-01-16 20:41:16
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed tests on v2.0-alpha.20180116: test/TestRestoreDatabaseVersusTable, test/TestRestoreDatabaseVersusTable/into_db, test/TestRestoreDatabaseVersusTable/incomplete-db, test/TestRestoreDatabaseVersusTable/tables, test/TestRestoreDatabaseVersusTable/db-exists, test/TestRestoreDatabaseVersusTable/db, test/TestRestoreDatabaseVersusTable/tables-needs-db
Robot test-failure
The following tests appear to have failed: [#480343](https://teamcity.cockroachdb.com/viewLog.html?buildId=480343): ``` --- FAIL: test/TestRestoreDatabaseVersusTable (0.000s) Test ended in panic. ------- Stdout: ------- W180116 16:28:44.374977 148702 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:44.377172 148702 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:44.377195 148702 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:44.377201 148702 server/config.go:519 [n?] store 0: in-memory, size 0 B I180116 16:28:44.380489 148702 server/node.go:361 [n?] **** cluster 2ffe2780-b05d-49fe-a57a-d95539f9cf40 has been created I180116 16:28:44.380533 148702 server/server.go:934 [n?] **** add additional nodes by specifying --join=127.0.0.1:45707 I180116 16:28:44.381352 148702 storage/store.go:1209 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I180116 16:28:44.381408 148702 server/node.go:486 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=0, writes=0.00, bytesPerReplica={p10=3322.00 p25=3322.00 p50=3322.00 p75=3322.00 p90=3322.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00} I180116 16:28:44.381440 148702 server/node.go:339 [n1] node ID 1 initialized I180116 16:28:44.381493 148702 gossip/gossip.go:332 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45707" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:44.381664 148702 storage/stores.go:331 [n1] read 0 node addresses from persistent storage I180116 16:28:44.381776 148702 server/node.go:627 [n1] connecting to gossip network to verify cluster ID... 
I180116 16:28:44.381802 148702 server/node.go:652 [n1] node connected via gossip and verified as part of cluster "2ffe2780-b05d-49fe-a57a-d95539f9cf40" I180116 16:28:44.381836 148702 server/node.go:428 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes [] I180116 16:28:44.381891 148702 sql/distsql_physical_planner.go:122 [n1] creating DistSQLPlanner with address {tcp 127.0.0.1:45707} I180116 16:28:44.383402 148967 storage/replica_command.go:819 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2] I180116 16:28:44.390026 148702 server/server.go:1161 [n1] starting https server at 127.0.0.1:39153 I180116 16:28:44.390056 148702 server/server.go:1162 [n1] starting grpc/postgres server at 127.0.0.1:45707 I180116 16:28:44.390066 148702 server/server.go:1163 [n1] advertising CockroachDB node at 127.0.0.1:45707 E180116 16:28:44.391004 148968 storage/queue.go:668 [replicate,n1,s1,r1/1:/{Min-System/}] range requires a replication change, but lacks a quorum of live replicas (0/1) I180116 16:28:44.392057 148967 storage/replica_command.go:819 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3] I180116 16:28:44.398783 148967 storage/replica_command.go:819 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4] W180116 16:28:44.399561 149047 storage/intent_resolver.go:324 [n1,s1] failed to push during intent resolution: failed to push "sql txn implicit" id=1f3102da key=/Table/SystemConfigSpan/Start rw=true pri=0.01228340 iso=SERIALIZABLE stat=PENDING epo=0 ts=1516120124.393517247,0 orig=1516120124.393517247,0 max=1516120124.393517247,0 wto=false rop=false seq=7 I180116 16:28:44.400055 148702 sql/event_log.go:115 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I180116 
16:28:44.403524 148967 storage/replica_command.go:819 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5] I180116 16:28:44.404276 148702 sql/lease.go:348 [n1] publish: descID=12 (eventlog) version=2 mtime=2018-01-16 16:28:44.404137559 +0000 UTC I180116 16:28:44.411017 148967 storage/replica_command.go:819 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6] I180116 16:28:44.417412 148967 storage/replica_command.go:819 [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7] I180116 16:28:44.420306 148702 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node} I180116 16:28:44.421670 148967 storage/replica_command.go:819 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8] I180116 16:28:44.425632 148967 storage/replica_command.go:819 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9] I180116 16:28:44.429319 148967 storage/replica_command.go:819 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10] I180116 16:28:44.430672 148702 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:node} I180116 16:28:44.433935 148967 storage/replica_command.go:819 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11] I180116 16:28:44.435240 148702 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node} I180116 16:28:44.440327 148967 storage/replica_command.go:819 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12] I180116 16:28:44.444271 148967 storage/replica_command.go:819 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at 
key /Table/16 [r13] I180116 16:28:44.447890 148967 storage/replica_command.go:819 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14] I180116 16:28:44.449926 148702 sql/event_log.go:115 [n1] Event: "alter_table", target: 4, info: {TableName:users Statement:ALTER TABLE system.users ADD COLUMN IF NOT EXISTS "isRole" BOOL NOT NULL DEFAULT false User:node MutationID:1 CascadeDroppedViews:[]} I180116 16:28:44.451621 148967 storage/replica_command.go:819 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15] I180116 16:28:44.452066 148702 sql/lease.go:348 [n1] publish: descID=4 (users) version=2 mtime=2018-01-16 16:28:44.451968724 +0000 UTC I180116 16:28:44.455304 148967 storage/replica_command.go:819 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16] I180116 16:28:44.456410 148702 sql/lease.go:348 [n1] publish: descID=4 (users) version=3 mtime=2018-01-16 16:28:44.456312308 +0000 UTC I180116 16:28:44.458521 148967 storage/replica_command.go:819 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17] I180116 16:28:44.459681 148702 sql/backfill.go:132 [n1] Running backfill for "users", v=3, m=1 I180116 16:28:44.461195 148967 storage/replica_command.go:819 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18] I180116 16:28:44.467606 148967 storage/replica_command.go:819 [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19] I180116 16:28:44.468667 148702 sql/lease.go:348 [n1] publish: descID=4 (users) version=4 mtime=2018-01-16 16:28:44.468575266 +0000 UTC I180116 16:28:44.470853 148967 storage/replica_command.go:819 [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20] I180116 16:28:44.472250 148702 sql/event_log.go:115 [n1] Event: "finish_schema_change", target: 4, info: {MutationID:1} I180116 16:28:44.473551 
148702 sql/lease.go:274 publish (count leases): descID=4 name=users version=3 count=1 I180116 16:28:44.499654 148702 server/server.go:1232 [n1] done ensuring all necessary migrations have run I180116 16:28:44.499690 148702 server/server.go:1235 [n1] serving sql connections I180116 16:28:44.501981 149389 sql/event_log.go:115 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:45707} Attrs: Locality: ServerVersion:1.1-8} ClusterID:2ffe2780-b05d-49fe-a57a-d95539f9cf40 StartedAt:1516120124381809901 LastUp:1516120124381809901} I180116 16:28:44.512797 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:data Statement:CREATE DATABASE data User:root} I180116 16:28:44.513124 148967 storage/replica_command.go:819 [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21] I180116 16:28:44.516594 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 51, info: {TableName:bank Statement:CREATE TABLE bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root} I180116 16:28:44.516963 148967 storage/replica_command.go:819 [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22] I180116 16:28:44.523820 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 52, info: {DatabaseName:d2 Statement:CREATE DATABASE d2 User:root} I180116 16:28:44.524163 148967 storage/replica_command.go:819 [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23] I180116 16:28:44.526841 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 53, info: {DatabaseName:d3 Statement:CREATE DATABASE d3 User:root} I180116 16:28:44.527814 148967 storage/replica_command.go:819 [split,n1,s1,r23/1:/{Table/52-Max}] 
initiating a split of this range at key /Table/53 [r24] I180116 16:28:44.530589 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 54, info: {TableName:d3.foo Statement:CREATE TABLE d3.foo (a INT) User:root} I180116 16:28:44.531535 148967 storage/replica_command.go:819 [split,n1,s1,r24/1:/{Table/53-Max}] initiating a split of this range at key /Table/54 [r25] I180116 16:28:44.533922 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 55, info: {DatabaseName:d4 Statement:CREATE DATABASE d4 User:root} I180116 16:28:44.535456 148967 storage/replica_command.go:819 [split,n1,s1,r25/1:/{Table/54-Max}] initiating a split of this range at key /Table/55 [r26] I180116 16:28:44.538408 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 56, info: {TableName:d4.foo Statement:CREATE TABLE d4.foo (a INT) User:root} I180116 16:28:44.539645 148967 storage/replica_command.go:819 [split,n1,s1,r26/1:/{Table/55-Max}] initiating a split of this range at key /Table/56 [r27] I180116 16:28:44.542084 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 57, info: {TableName:d4.bar Statement:CREATE TABLE d4.bar (a INT) User:root} I180116 16:28:44.543349 148967 storage/replica_command.go:819 [split,n1,s1,r27/1:/{Table/56-Max}] initiating a split of this range at key /Table/57 [r28] I180116 16:28:44.552426 149493 ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) I180116 16:28:44.552565 149492 ccl/storageccl/export.go:124 [n1,s1,r25/1:/Table/5{4-5}] export [/Table/54/1,/Table/54/2) I180116 16:28:44.552588 149491 ccl/storageccl/export.go:124 [n1,s1,r22/1:/Table/5{1-2}] export [/Table/51/1,/Table/51/2) I180116 16:28:44.552428 149494 ccl/storageccl/export.go:124 [n1,s1,r28/1:/{Table/57-Max}] export [/Table/57/1,/Table/57/2) I180116 16:28:44.578698 149513 
ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) I180116 16:28:44.594770 149524 ccl/storageccl/export.go:124 [n1,s1,r28/1:/{Table/57-Max}] export [/Table/57/1,/Table/57/2) I180116 16:28:44.594837 149523 ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) I180116 16:28:44.613115 149501 ccl/storageccl/export.go:124 [n1,s1,r28/1:/{Table/57-Max}] export [/Table/57/1,/Table/57/2) I180116 16:28:44.613208 149500 ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) --- FAIL: test/TestRestoreDatabaseVersusTable/into_db (0.000s) Test ended in panic. ------- Stdout: ------- W180116 16:28:48.138581 157820 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:48.145014 157820 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:48.145042 157820 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:48.145048 157820 server/config.go:519 [n?] store 0: in-memory, size 0 B I180116 16:28:48.148107 157820 server/node.go:361 [n?] **** cluster f8371979-8bbb-42c2-9861-24b11bebe89a has been created I180116 16:28:48.148140 157820 server/server.go:934 [n?] 
**** add additional nodes by specifying --join=127.0.0.1:44949 I180116 16:28:48.149103 157820 storage/store.go:1209 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I180116 16:28:48.149154 157820 server/node.go:486 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=0, writes=0.00, bytesPerReplica={p10=3322.00 p25=3322.00 p50=3322.00 p75=3322.00 p90=3322.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00} I180116 16:28:48.149179 157820 server/node.go:339 [n1] node ID 1 initialized I180116 16:28:48.149239 157820 gossip/gossip.go:332 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44949" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:48.149337 157820 storage/stores.go:331 [n1] read 0 node addresses from persistent storage I180116 16:28:48.149445 157820 server/node.go:627 [n1] connecting to gossip network to verify cluster ID... 
I180116 16:28:48.149476 157820 server/node.go:652 [n1] node connected via gossip and verified as part of cluster "f8371979-8bbb-42c2-9861-24b11bebe89a" I180116 16:28:48.149502 157820 server/node.go:428 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes [] I180116 16:28:48.149561 157820 sql/distsql_physical_planner.go:122 [n1] creating DistSQLPlanner with address {tcp 127.0.0.1:44949} I180116 16:28:48.151288 158031 storage/replica_command.go:819 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2] I180116 16:28:48.152525 157820 server/server.go:1161 [n1] starting https server at 127.0.0.1:39351 I180116 16:28:48.152551 157820 server/server.go:1162 [n1] starting grpc/postgres server at 127.0.0.1:44949 I180116 16:28:48.152561 157820 server/server.go:1163 [n1] advertising CockroachDB node at 127.0.0.1:44949 I180116 16:28:48.159122 158031 storage/replica_command.go:819 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3] I180116 16:28:48.161934 157820 sql/event_log.go:115 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I180116 16:28:48.163240 158031 storage/replica_command.go:819 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4] I180116 16:28:48.166759 157820 sql/lease.go:348 [n1] publish: descID=12 (eventlog) version=2 mtime=2018-01-16 16:28:48.166589069 +0000 UTC I180116 16:28:48.167948 158031 storage/replica_command.go:819 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5] I180116 16:28:48.172582 158031 storage/replica_command.go:819 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6] I180116 16:28:48.181804 158031 storage/replica_command.go:819 
[split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7] I180116 16:28:48.183812 157820 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node} I180116 16:28:48.185801 158031 storage/replica_command.go:819 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8] I180116 16:28:48.190312 158031 storage/replica_command.go:819 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9] I180116 16:28:48.194182 158031 storage/replica_command.go:819 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10] I180116 16:28:48.195422 157820 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:node} I180116 16:28:48.198615 158031 storage/replica_command.go:819 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11] I180116 16:28:48.198916 157820 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node} I180116 16:28:48.202975 158031 storage/replica_command.go:819 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12] I180116 16:28:48.206413 158031 storage/replica_command.go:819 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13] I180116 16:28:48.211016 158031 storage/replica_command.go:819 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14] I180116 16:28:48.211425 157820 sql/event_log.go:115 [n1] Event: "alter_table", target: 4, info: {TableName:users Statement:ALTER TABLE system.users ADD COLUMN IF NOT EXISTS "isRole" BOOL NOT NULL DEFAULT false User:node MutationID:1 CascadeDroppedViews:[]} I180116 16:28:48.214201 157820 sql/lease.go:348 [n1] publish: descID=4 (users) 
version=2 mtime=2018-01-16 16:28:48.213891792 +0000 UTC I180116 16:28:48.215557 158031 storage/replica_command.go:819 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15] I180116 16:28:48.218811 158031 storage/replica_command.go:819 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16] I180116 16:28:48.223647 157820 sql/lease.go:348 [n1] publish: descID=4 (users) version=3 mtime=2018-01-16 16:28:48.223224643 +0000 UTC I180116 16:28:48.227994 158031 storage/replica_command.go:819 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17] I180116 16:28:48.228590 157820 sql/backfill.go:132 [n1] Running backfill for "users", v=3, m=1 I180116 16:28:48.231527 158031 storage/replica_command.go:819 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18] I180116 16:28:48.234972 158031 storage/replica_command.go:819 [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19] W180116 16:28:48.237337 158299 storage/intent_resolver.go:324 [n1,s1] failed to push during intent resolution: failed to push "split" id=710285b3 key=/Local/Range/Table/21/RangeDescriptor rw=true pri=0.03966754 iso=SERIALIZABLE stat=PENDING epo=0 ts=1516120128.234995043,0 orig=1516120128.234995043,0 max=1516120128.234995043,0 wto=false rop=false seq=3 I180116 16:28:48.238261 157820 sql/lease.go:348 [n1] publish: descID=4 (users) version=4 mtime=2018-01-16 16:28:48.238167096 +0000 UTC I180116 16:28:48.238595 158031 storage/replica_command.go:819 [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20] I180116 16:28:48.241703 157820 sql/event_log.go:115 [n1] Event: "finish_schema_change", target: 4, info: {MutationID:1} I180116 16:28:48.242876 157820 sql/lease.go:274 publish (count leases): descID=4 name=users version=3 count=1 I180116 16:28:48.273211 157820 server/server.go:1232 [n1] done 
ensuring all necessary migrations have run I180116 16:28:48.273253 157820 server/server.go:1235 [n1] serving sql connections I180116 16:28:48.275247 158355 sql/event_log.go:115 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:44949} Attrs: Locality: ServerVersion:1.1-8} ClusterID:f8371979-8bbb-42c2-9861-24b11bebe89a StartedAt:1516120128149480129 LastUp:1516120128149480129} W180116 16:28:48.279050 157820 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:48.281448 157820 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:48.281471 157820 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:48.281477 157820 server/config.go:519 [n?] store 0: in-memory, size 0 B W180116 16:28:48.281540 157820 gossip/gossip.go:1292 [n?] no incoming or outgoing connections I180116 16:28:48.281575 157820 server/server.go:936 [n?] no stores bootstrapped and --join flag specified, awaiting init command. I180116 16:28:48.291341 158204 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:44949 I180116 16:28:48.293688 158328 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:45903} I180116 16:28:48.297171 157820 storage/stores.go:331 [n?] read 0 node addresses from persistent storage I180116 16:28:48.297224 157820 storage/stores.go:350 [n?] wrote 1 node addresses to persistent storage I180116 16:28:48.297234 157820 server/node.go:627 [n?] connecting to gossip network to verify cluster ID... I180116 16:28:48.297258 157820 server/node.go:652 [n?] node connected via gossip and verified as part of cluster "f8371979-8bbb-42c2-9861-24b11bebe89a" I180116 16:28:48.297482 158351 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.298614 158350 kv/dist_sender.go:354 [n?] 
unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.299487 157820 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.300547 157820 server/node.go:332 [n?] new node allocated ID 2 I180116 16:28:48.300616 157820 gossip/gossip.go:332 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45903" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:48.300675 157820 server/node.go:414 [n2] node=2: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>] I180116 16:28:48.300697 157820 server/node.go:428 [n2] node=2: started with [] engine(s) and attributes [] I180116 16:28:48.300806 157820 sql/distsql_physical_planner.go:122 [n2] creating DistSQLPlanner with address {tcp 127.0.0.1:45903} I180116 16:28:48.301387 158376 storage/stores.go:350 [n1] wrote 1 node addresses to persistent storage I180116 16:28:48.303309 157820 server/server.go:1161 [n2] starting https server at 127.0.0.1:45763 I180116 16:28:48.303336 157820 server/server.go:1162 [n2] starting grpc/postgres server at 127.0.0.1:45903 I180116 16:28:48.303345 157820 server/server.go:1163 [n2] advertising CockroachDB node at 127.0.0.1:45903 I180116 16:28:48.303574 158392 server/node.go:608 [n2] bootstrapped store [n2,s2] I180116 16:28:48.304539 157820 server/server.go:1232 [n2] done ensuring all necessary migrations have run I180116 16:28:48.304569 157820 server/server.go:1235 [n2] serving sql connections I180116 16:28:48.309154 158336 sql/event_log.go:115 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:45903} Attrs: Locality: ServerVersion:1.1-8} ClusterID:f8371979-8bbb-42c2-9861-24b11bebe89a StartedAt:1516120128300687650 LastUp:1516120128300687650} W180116 16:28:48.311086 157820 server/status/runtime.go:109 
Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:48.315364 157820 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:48.315389 157820 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:48.315396 157820 server/config.go:519 [n?] store 0: in-memory, size 0 B W180116 16:28:48.315467 157820 gossip/gossip.go:1292 [n?] no incoming or outgoing connections I180116 16:28:48.315526 157820 server/server.go:936 [n?] no stores bootstrapped and --join flag specified, awaiting init command. I180116 16:28:48.326818 158411 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:44949 I180116 16:28:48.327185 158596 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:41455} I180116 16:28:48.328204 157820 storage/stores.go:331 [n?] read 0 node addresses from persistent storage I180116 16:28:48.328249 157820 storage/stores.go:350 [n?] wrote 2 node addresses to persistent storage I180116 16:28:48.328261 157820 server/node.go:627 [n?] connecting to gossip network to verify cluster ID... I180116 16:28:48.328284 157820 server/node.go:652 [n?] node connected via gossip and verified as part of cluster "f8371979-8bbb-42c2-9861-24b11bebe89a" I180116 16:28:48.328524 158618 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.329371 158617 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.330119 157820 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.330943 157820 server/node.go:332 [n?] 
new node allocated ID 3 I180116 16:28:48.331011 157820 gossip/gossip.go:332 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:41455" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:48.331112 157820 server/node.go:414 [n3] node=3: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>] I180116 16:28:48.331133 157820 server/node.go:428 [n3] node=3: started with [] engine(s) and attributes [] I180116 16:28:48.331187 157820 sql/distsql_physical_planner.go:122 [n3] creating DistSQLPlanner with address {tcp 127.0.0.1:41455} I180116 16:28:48.332045 158569 storage/stores.go:350 [n1] wrote 2 node addresses to persistent storage I180116 16:28:48.332494 158378 storage/stores.go:350 [n2] wrote 2 node addresses to persistent storage I180116 16:28:48.333695 158621 server/node.go:608 [n3] bootstrapped store [n3,s3] I180116 16:28:48.334336 157820 server/server.go:1161 [n3] starting https server at 127.0.0.1:38685 I180116 16:28:48.334361 157820 server/server.go:1162 [n3] starting grpc/postgres server at 127.0.0.1:41455 I180116 16:28:48.334371 157820 server/server.go:1163 [n3] advertising CockroachDB node at 127.0.0.1:41455 I180116 16:28:48.336042 157820 server/server.go:1232 [n3] done ensuring all necessary migrations have run I180116 16:28:48.336068 157820 server/server.go:1235 [n3] serving sql connections I180116 16:28:48.336926 157837 storage/replica_raftstorage.go:540 [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot f64aec7c at index 16 I180116 16:28:48.338143 157820 testutils/testcluster/testcluster.go:528 [n1,s1] has 20 underreplicated ranges I180116 16:28:48.341425 158772 sql/event_log.go:115 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:41455} Attrs: Locality: ServerVersion:1.1-8} ClusterID:f8371979-8bbb-42c2-9861-24b11bebe89a StartedAt:1516120128331124840 LastUp:1516120128331124840} I180116 
16:28:48.347632 157837 storage/store.go:3567 [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms I180116 16:28:48.347635 157820 testutils/testcluster/testcluster.go:528 [n1,s1] has 20 underreplicated ranges I180116 16:28:48.347987 158803 storage/replica_raftstorage.go:746 [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=f64aec7c, encoded size=2476, 1 rocksdb batches, 6 log entries) I180116 16:28:48.348263 158803 storage/replica_raftstorage.go:752 [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms] I180116 16:28:48.348887 157837 storage/replica_command.go:1741 [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2] I180116 16:28:48.350793 157837 storage/replica.go:3185 [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3 I180116 16:28:48.351402 157837 storage/replica_raftstorage.go:540 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot e1f75307 at index 71 I180116 16:28:48.352589 158823 storage/raft_transport.go:459 [n2] raft transport stream to node 1 established I180116 16:28:48.361942 157837 storage/store.go:3567 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 54, log entries: 48, rate-limit: 8.0 MiB/sec, 1ms I180116 16:28:48.362387 158559 storage/replica_raftstorage.go:746 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 71 (id=e1f75307, encoded size=13369, 1 rocksdb batches, 48 log entries) I180116 16:28:48.363289 158559 storage/replica_raftstorage.go:752 [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms] I180116 16:28:48.363942 157837 storage/replica_command.go:1741 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor 
r1:/{Min-System/} [(n1,s1):1, next=2]
I180116 16:28:48.366319 157837 storage/replica.go:3185 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180116 16:28:48.366959 158032 storage/replica_raftstorage.go:540 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 00ee5272 at index 74
I180116 16:28:48.367888 158032 storage/store.go:3567 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 57, log entries: 51, rate-limit: 8.0 MiB/sec, 1ms
I180116 16:28:48.367926 158765 storage/raft_transport.go:459 [n3] raft transport stream to node 1 established
I180116 16:28:48.368247 158760 storage/replica_raftstorage.go:746 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 74 (id=00ee5272, encoded size=14974, 1 rocksdb batches, 51 log entries)
I180116 16:28:48.368561 158760 storage/replica_raftstorage.go:752 [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180116 16:28:48.369087 158032 storage/replica_command.go:1741 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3]
I180116 16:28:48.369837 157820 testutils/testcluster/testcluster.go:528 [n1,s1] has 20 underreplicated ranges
I180116 16:28:48.371442 158776 storage/replica.go:3185 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180116 16:28:48.372556 157837 storage/replica_raftstorage.go:540 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 3f2e384e at index 19
panic: test timed out after 4m0s
goroutine 158843 [running]:
testing.startAlarm.func1()
	/usr/local/go/src/testing/testing.go:1145 +0xf9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:170 +0x44
goroutine 1 [chan receive]:
testing.(*T).Run(0xc4200a0780, 0x230e293, 0x1e, 0x23c7f38, 0x8d4f01)
/usr/local/go/src/testing/testing.go:790 +0x2fc testing.runTests.func1(0xc4200a0780) /usr/local/go/src/testing/testing.go:1004 +0x64 testing.tRunner(0xc4200a0780, 0xc4206a5db8) /usr/local/go/src/testing/testing.go:746 +0xd0 testing.runTests(0xc4202a2080, 0x33b3600, 0x38, 0x38, 0x0) /usr/local/go/src/testing/testing.go:1002 +0x2d8 testing.(*M).Run(0xc42092bf18, 0x23c81c8) /usr/local/go/src/testing/testing.go:921 +0x111 github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.TestMain(0xc4206a5f18) /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/main_test.go:31 +0xda main.main() github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/_test/_testmain.go:172 +0xdb goroutine 6 [chan receive]: github.com/cockroachdb/cockroach/pkg/util/log.(*loggingT).flushDaemon(0x3600940) /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1043 +0x81 created by github.com/cockroachdb/cockroach/pkg/util/log.init.0 /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:581 +0xbf goroutine 23 [syscall, 4 minutes]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:131 +0xa6 os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:22 +0x22 created by os/signal.init.0 /usr/local/go/src/os/signal/signal_unix.go:28 +0x41 goroutine 148702 [chan receive]: testing.(*T).Run(0xc422d000f0, 0x22e6b00, 0x7, 0xc42041aec0, 0x1) /usr/local/go/src/testing/testing.go:790 +0x2fc github.com/cockroachdb/cockroach/pkg/ccl/sqlccl_test.TestRestoreDatabaseVersusTable(0xc422d000f0) /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup_test.go:2217 +0x808 testing.tRunner(0xc422d000f0, 0x23c7f38) /usr/local/go/src/testing/testing.go:746 +0xd0 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:789 +0x2de goroutine 52 [select, locked to thread]: runtime.gopark(0x23cbcc0, 0x0, 0x22e54ff, 0x6, 0x18, 0x1) /usr/local/go/src/runtime/proc.go:277 +0x12c runtime.selectgo(0xc42003ff50, 0xc42044ac00) /usr/local/go/src/runtime/select.go:395 +0x1138 runtime.ensureSigM.func1() 
/usr/local/go/src/runtime/signal_unix.go:511 +0x220 runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 goroutine 158708 [semacquire]: sync.runtime_notifyListWait(0xc422edce50, 0xc40000003b) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422edce40) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4224a1290, 0x33de160, 0xc421c0cd50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc421c0cd50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202db080, 0xc4225a1950, 0xc4202db070) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158505 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000036) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc4226530b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4226530b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec9a0, 0xc421bc1050, 0xc4208ec990) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158333 [select]: 
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*http2Server).keepalive(0xc421dfe160) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_server.go:935 +0x264 created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.newHTTP2Server /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_server.go:230 +0x8fa goroutine 158015 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc40000016b) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc4213c6a50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4213c6a50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6c90, 0xc4228a1a70, 0xc4203c6c80) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158070 [select]: github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1(0x33de160, 0xc422d02a20) /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1023 +0x9b6 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4214b6660, 0xc4228a1a70, 0xc4202be240) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158396 [select]: 
github.com/cockroachdb/cockroach/pkg/server.(*Server).startSampleEnvironment.func1(0x33de160, 0xc420846ff0) /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:1416 +0x16c github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420e0a540, 0xc421bc1050, 0xc423a7ef80) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158468 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000033) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc423b6c7b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc423b6c7b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec500, 0xc421bc1050, 0xc4208ec4f0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 149011 [semacquire]: sync.runtime_notifyListWait(0xc421b20590, 0xc400000236) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc421b20580) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4221ce240, 0x33de160, 0xc4225f87e0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4225f87e0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 
+0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420400560, 0xc422db45a0, 0xc420400550) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 148982 [semacquire]: sync.runtime_notifyListWait(0xc421b20590, 0xc400000249) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc421b20580) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4221ce240, 0x33de160, 0xc42446a120) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc42446a120) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420400090, 0xc422db45a0, 0xc420400080) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158005 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc400000170) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc4229586f0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4229586f0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6ab0, 0xc4228a1a70, 0xc4203c6aa0) 
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158624 [select]: github.com/cockroachdb/cockroach/pkg/server.(*Server).refreshSettings.func2(0x33de160, 0xc42518a510) /go/src/github.com/cockroachdb/cockroach/pkg/server/settingsworker.go:112 +0x2db github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc423ab7b60, 0xc4225a1950, 0xc4203507e0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158440 [chan receive]: github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.muxListener.Accept(...) /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux/cmux.go:184 github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.(*muxListener).Accept(0xc421a56540, 0xc425609530, 0xc42475be80, 0x96d4e5, 0xc425609558) <autogenerated>:1 +0x65 crypto/tls.(*listener).Accept(0xc421a565a0, 0xc420014108, 0x2068180, 0x3397560, 0x22ae720) /usr/local/go/src/crypto/tls/tls.go:52 +0x37 net/http.(*Server).Serve(0xc42139be10, 0x33cf8a0, 0xc421a565a0, 0x0, 0x0) /usr/local/go/src/net/http/server.go:2695 +0x1b2 github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func5(0x33de160, 0xc4256094d0) /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:779 +0x42 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6880, 0xc4225a1950, 0xc421a565c0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158623 [select]: 
github.com/cockroachdb/cockroach/pkg/server.(*Node).startGossip.func1(0x33de160, 0xc42518a4e0) /go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:680 +0x2dd github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc423ab7b30, 0xc4225a1950, 0xc4203507c0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158370 [chan receive]: github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.muxListener.Accept(...) /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux/cmux.go:184 github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.(*muxListener).Accept(0xc422adae20, 0x23cb0a0, 0xc4209328c0, 0x33e9f60, 0xc42324eaa0) <autogenerated>:1 +0x65 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).Serve(0xc4209328c0, 0x33dd2e0, 0xc422adae20, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:463 +0x196 github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func7(0x33de160, 0xc4214a0fc0) /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:795 +0x43 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420c96c20, 0xc421bc1050, 0xc422adaf40) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158496 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000024) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc422652ed0) 
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc422652ed0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec880, 0xc421bc1050, 0xc4208ec870) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158271 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000006) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc423b6c360) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc423b6c360) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec260, 0xc421bc1050, 0xc4208ec250) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158098 [select]: github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*http2Client).keepalive(0xc420c48d80) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_client.go:1266 +0x14b created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.newHTTP2Client /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_client.go:302 +0xe51 goroutine 158796 [select]: 
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).read(0xc4213d3270, 0xc4231bafd0, 0x5, 0x5, 0xf, 0xc4238fbb48, 0x8719e8) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:133 +0x28b github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc4213d3270, 0xc4231bafd0, 0x5, 0x5, 0x3, 0xc420022000, 0xc420022070) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:122 +0x67 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*transportReader).Read(0xc422026f30, 0xc4231bafd0, 0x5, 0x5, 0x8bcfce, 0xc421e010c8, 0xc421af6100) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:403 +0x55 io.ReadAtLeast(0x33c43a0, 0xc422026f30, 0xc4231bafd0, 0x5, 0x5, 0x5, 0xc4221e6580, 0xc421e01000, 0xc400000005) /usr/local/go/src/io/io.go:309 +0x86 io.ReadFull(0x33c43a0, 0xc422026f30, 0xc4231bafd0, 0x5, 0x5, 0x87495b, 0xc4238fbc40, 0x89ff20) /usr/local/go/src/io/io.go:327 +0x58 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*Stream).Read(0xc421e01000, 0xc4231bafd0, 0x5, 0x5, 0xc4238fbcc0, 0x88770a, 0xc4222663c0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:387 +0xbf github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*parser).recvMsg(0xc4231bafc0, 0x7fffffff, 0x60, 0x58, 0x220a4a0, 0xc4238fbdb8, 0xe0000000006, 0x58) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:270 +0x65 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.recv(0xc4231bafc0, 0x33dd960, 0x364bc88, 0xc421e01000, 0x33cdf60, 0x364bc88, 0x21c71a0, 0xc42040a4a0, 0x7fffffff, 0xc42378c7e0, ...) 
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:356 +0x4d github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*serverStream).RecvMsg(0xc4237e2680, 0x21c71a0, 0xc42040a4a0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:644 +0x10e github.com/cockroachdb/cockroach/pkg/storage.(*multiRaftRaftMessageBatchServer).Recv(0xc4208ecdc0, 0x33de160, 0xc422026fc0, 0xc424d00900) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft.pb.go:394 +0x62 github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch.func1.1.1(0x33e9e40, 0xc4208ecdc0, 0xc4221dc5a0, 0x33de160, 0xc422026fc0, 0x18, 0xfe67b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:330 +0x83 github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch.func1.1(0x33de160, 0xc422026fc0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:353 +0x5d github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ecde0, 0xc4228a1a70, 0xc422026f90) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158720 [select]: github.com/cockroachdb/cockroach/pkg/storage.(*Store).startGossip.func4(0x33de160, 0xc421c0d020) /go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1313 +0x3e9 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202db1d0, 0xc4225a1950, 0xc420f92640) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158821 [select]: github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch(0xc422d01950, 0x33e9e40, 
0xc420e0be60, 0x33cb720, 0xc422d01950) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:359 +0x215 github.com/cockroachdb/cockroach/pkg/storage._MultiRaft_RaftMessageBatch_Handler(0x22a9c60, 0xc422d01950, 0x33e7200, 0xc420a36980, 0x8580c8, 0x20) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft.pb.go:375 +0xb2 github.com/cockroachdb/cockroach/pkg/rpc.NewServerWithInterceptor.func2(0x22a9c60, 0xc422d01950, 0x33e7200, 0xc420a36980, 0xc421a57b80, 0x23c9b20, 0x40, 0x3f) /go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:187 +0x103 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc4209328c0, 0x33e92a0, 0xc4238e0160, 0xc423370900, 0xc422822390, 0x33a50c0, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:940 +0x2e8 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4209328c0, 0x33e92a0, 0xc4238e0160, 0xc423370900, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1027 +0x14c1 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4202cc500, 0xc4209328c0, 0x33e92a0, 0xc4238e0160, 0xc423370900) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:572 +0x9f created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1 /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:570 +0xa1 goroutine 158364 [chan receive]: github.com/cockroachdb/cockroach/pkg/util/netutil.MakeServer.func2(0x33de160, 0xc4208df9b0) /go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:100 +0x64 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420c96b40, 0xc421bc1050, 0xc422adad40) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by 
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158668 [semacquire]: sync.runtime_notifyListWait(0xc422edce50, 0xc400000013) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422edce40) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4224a1290, 0x33de160, 0xc421c0c450) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc421c0c450) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202da9f0, 0xc4225a1950, 0xc4202da9d0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 157984 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc400000169) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc4213c6990) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4213c6990) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6780, 0xc4228a1a70, 0xc4203c6770) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158319 [select]: 
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1(0x33de160, 0xc421c0d140) /go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:514 +0x1bb github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec080, 0xc4225a1950, 0xc4203de180) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158019 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc40000017c) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc422d02690) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc422d02690) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6d10, 0xc4228a1a70, 0xc4203c6d00) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158596 [select]: github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).read(0xc421a615e0, 0xc422adb050, 0x5, 0x5, 0xc42042f301, 0xc421dfd830, 0x8719e8) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:133 +0x28b github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc421a615e0, 0xc422adb050, 0x5, 0x5, 0x3, 0xc420024600, 0xc420024670) 
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:122 +0x67 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*transportReader).Read(0xc4245636e0, 0xc422adb050, 0x5, 0x5, 0x8bcfce, 0xc4201ce4c8, 0xc422a99580) /go/src/github.com/cockroachdb/cockroach/vendor/google.gola ``` Please assign, take a look and update the issue accordingly.
1.0
teamcity: failed tests on v2.0-alpha.20180116: test/TestRestoreDatabaseVersusTable, test/TestRestoreDatabaseVersusTable/into_db, test/TestRestoreDatabaseVersusTable/incomplete-db, test/TestRestoreDatabaseVersusTable/tables, test/TestRestoreDatabaseVersusTable/db-exists, test/TestRestoreDatabaseVersusTable/db, test/TestRestoreDatabaseVersusTable/tables-needs-db - The following tests appear to have failed: [#480343](https://teamcity.cockroachdb.com/viewLog.html?buildId=480343): ``` --- FAIL: test/TestRestoreDatabaseVersusTable (0.000s) Test ended in panic. ------- Stdout: ------- W180116 16:28:44.374977 148702 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:44.377172 148702 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:44.377195 148702 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:44.377201 148702 server/config.go:519 [n?] store 0: in-memory, size 0 B I180116 16:28:44.380489 148702 server/node.go:361 [n?] **** cluster 2ffe2780-b05d-49fe-a57a-d95539f9cf40 has been created I180116 16:28:44.380533 148702 server/server.go:934 [n?] 
**** add additional nodes by specifying --join=127.0.0.1:45707 I180116 16:28:44.381352 148702 storage/store.go:1209 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I180116 16:28:44.381408 148702 server/node.go:486 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=0, writes=0.00, bytesPerReplica={p10=3322.00 p25=3322.00 p50=3322.00 p75=3322.00 p90=3322.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00} I180116 16:28:44.381440 148702 server/node.go:339 [n1] node ID 1 initialized I180116 16:28:44.381493 148702 gossip/gossip.go:332 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45707" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:44.381664 148702 storage/stores.go:331 [n1] read 0 node addresses from persistent storage I180116 16:28:44.381776 148702 server/node.go:627 [n1] connecting to gossip network to verify cluster ID... 
I180116 16:28:44.381802 148702 server/node.go:652 [n1] node connected via gossip and verified as part of cluster "2ffe2780-b05d-49fe-a57a-d95539f9cf40" I180116 16:28:44.381836 148702 server/node.go:428 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes [] I180116 16:28:44.381891 148702 sql/distsql_physical_planner.go:122 [n1] creating DistSQLPlanner with address {tcp 127.0.0.1:45707} I180116 16:28:44.383402 148967 storage/replica_command.go:819 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2] I180116 16:28:44.390026 148702 server/server.go:1161 [n1] starting https server at 127.0.0.1:39153 I180116 16:28:44.390056 148702 server/server.go:1162 [n1] starting grpc/postgres server at 127.0.0.1:45707 I180116 16:28:44.390066 148702 server/server.go:1163 [n1] advertising CockroachDB node at 127.0.0.1:45707 E180116 16:28:44.391004 148968 storage/queue.go:668 [replicate,n1,s1,r1/1:/{Min-System/}] range requires a replication change, but lacks a quorum of live replicas (0/1) I180116 16:28:44.392057 148967 storage/replica_command.go:819 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3] I180116 16:28:44.398783 148967 storage/replica_command.go:819 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4] W180116 16:28:44.399561 149047 storage/intent_resolver.go:324 [n1,s1] failed to push during intent resolution: failed to push "sql txn implicit" id=1f3102da key=/Table/SystemConfigSpan/Start rw=true pri=0.01228340 iso=SERIALIZABLE stat=PENDING epo=0 ts=1516120124.393517247,0 orig=1516120124.393517247,0 max=1516120124.393517247,0 wto=false rop=false seq=7 I180116 16:28:44.400055 148702 sql/event_log.go:115 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I180116 
16:28:44.403524 148967 storage/replica_command.go:819 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5] I180116 16:28:44.404276 148702 sql/lease.go:348 [n1] publish: descID=12 (eventlog) version=2 mtime=2018-01-16 16:28:44.404137559 +0000 UTC I180116 16:28:44.411017 148967 storage/replica_command.go:819 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6] I180116 16:28:44.417412 148967 storage/replica_command.go:819 [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7] I180116 16:28:44.420306 148702 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node} I180116 16:28:44.421670 148967 storage/replica_command.go:819 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8] I180116 16:28:44.425632 148967 storage/replica_command.go:819 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9] I180116 16:28:44.429319 148967 storage/replica_command.go:819 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10] I180116 16:28:44.430672 148702 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:node} I180116 16:28:44.433935 148967 storage/replica_command.go:819 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11] I180116 16:28:44.435240 148702 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node} I180116 16:28:44.440327 148967 storage/replica_command.go:819 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12] I180116 16:28:44.444271 148967 storage/replica_command.go:819 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at 
key /Table/16 [r13] I180116 16:28:44.447890 148967 storage/replica_command.go:819 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14] I180116 16:28:44.449926 148702 sql/event_log.go:115 [n1] Event: "alter_table", target: 4, info: {TableName:users Statement:ALTER TABLE system.users ADD COLUMN IF NOT EXISTS "isRole" BOOL NOT NULL DEFAULT false User:node MutationID:1 CascadeDroppedViews:[]} I180116 16:28:44.451621 148967 storage/replica_command.go:819 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15] I180116 16:28:44.452066 148702 sql/lease.go:348 [n1] publish: descID=4 (users) version=2 mtime=2018-01-16 16:28:44.451968724 +0000 UTC I180116 16:28:44.455304 148967 storage/replica_command.go:819 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16] I180116 16:28:44.456410 148702 sql/lease.go:348 [n1] publish: descID=4 (users) version=3 mtime=2018-01-16 16:28:44.456312308 +0000 UTC I180116 16:28:44.458521 148967 storage/replica_command.go:819 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17] I180116 16:28:44.459681 148702 sql/backfill.go:132 [n1] Running backfill for "users", v=3, m=1 I180116 16:28:44.461195 148967 storage/replica_command.go:819 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18] I180116 16:28:44.467606 148967 storage/replica_command.go:819 [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19] I180116 16:28:44.468667 148702 sql/lease.go:348 [n1] publish: descID=4 (users) version=4 mtime=2018-01-16 16:28:44.468575266 +0000 UTC I180116 16:28:44.470853 148967 storage/replica_command.go:819 [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20] I180116 16:28:44.472250 148702 sql/event_log.go:115 [n1] Event: "finish_schema_change", target: 4, info: {MutationID:1} I180116 16:28:44.473551 
148702 sql/lease.go:274 publish (count leases): descID=4 name=users version=3 count=1 I180116 16:28:44.499654 148702 server/server.go:1232 [n1] done ensuring all necessary migrations have run I180116 16:28:44.499690 148702 server/server.go:1235 [n1] serving sql connections I180116 16:28:44.501981 149389 sql/event_log.go:115 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:45707} Attrs: Locality: ServerVersion:1.1-8} ClusterID:2ffe2780-b05d-49fe-a57a-d95539f9cf40 StartedAt:1516120124381809901 LastUp:1516120124381809901} I180116 16:28:44.512797 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:data Statement:CREATE DATABASE data User:root} I180116 16:28:44.513124 148967 storage/replica_command.go:819 [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21] I180116 16:28:44.516594 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 51, info: {TableName:bank Statement:CREATE TABLE bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root} I180116 16:28:44.516963 148967 storage/replica_command.go:819 [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22] I180116 16:28:44.523820 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 52, info: {DatabaseName:d2 Statement:CREATE DATABASE d2 User:root} I180116 16:28:44.524163 148967 storage/replica_command.go:819 [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23] I180116 16:28:44.526841 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 53, info: {DatabaseName:d3 Statement:CREATE DATABASE d3 User:root} I180116 16:28:44.527814 148967 storage/replica_command.go:819 [split,n1,s1,r23/1:/{Table/52-Max}] 
initiating a split of this range at key /Table/53 [r24] I180116 16:28:44.530589 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 54, info: {TableName:d3.foo Statement:CREATE TABLE d3.foo (a INT) User:root} I180116 16:28:44.531535 148967 storage/replica_command.go:819 [split,n1,s1,r24/1:/{Table/53-Max}] initiating a split of this range at key /Table/54 [r25] I180116 16:28:44.533922 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_database", target: 55, info: {DatabaseName:d4 Statement:CREATE DATABASE d4 User:root} I180116 16:28:44.535456 148967 storage/replica_command.go:819 [split,n1,s1,r25/1:/{Table/54-Max}] initiating a split of this range at key /Table/55 [r26] I180116 16:28:44.538408 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 56, info: {TableName:d4.foo Statement:CREATE TABLE d4.foo (a INT) User:root} I180116 16:28:44.539645 148967 storage/replica_command.go:819 [split,n1,s1,r26/1:/{Table/55-Max}] initiating a split of this range at key /Table/56 [r27] I180116 16:28:44.542084 149363 sql/event_log.go:115 [client=127.0.0.1:53982,user=root,n1] Event: "create_table", target: 57, info: {TableName:d4.bar Statement:CREATE TABLE d4.bar (a INT) User:root} I180116 16:28:44.543349 148967 storage/replica_command.go:819 [split,n1,s1,r27/1:/{Table/56-Max}] initiating a split of this range at key /Table/57 [r28] I180116 16:28:44.552426 149493 ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) I180116 16:28:44.552565 149492 ccl/storageccl/export.go:124 [n1,s1,r25/1:/Table/5{4-5}] export [/Table/54/1,/Table/54/2) I180116 16:28:44.552588 149491 ccl/storageccl/export.go:124 [n1,s1,r22/1:/Table/5{1-2}] export [/Table/51/1,/Table/51/2) I180116 16:28:44.552428 149494 ccl/storageccl/export.go:124 [n1,s1,r28/1:/{Table/57-Max}] export [/Table/57/1,/Table/57/2) I180116 16:28:44.578698 149513 
ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) I180116 16:28:44.594770 149524 ccl/storageccl/export.go:124 [n1,s1,r28/1:/{Table/57-Max}] export [/Table/57/1,/Table/57/2) I180116 16:28:44.594837 149523 ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) I180116 16:28:44.613115 149501 ccl/storageccl/export.go:124 [n1,s1,r28/1:/{Table/57-Max}] export [/Table/57/1,/Table/57/2) I180116 16:28:44.613208 149500 ccl/storageccl/export.go:124 [n1,s1,r27/1:/Table/5{6-7}] export [/Table/56/1,/Table/56/2) --- FAIL: test/TestRestoreDatabaseVersusTable/into_db (0.000s) Test ended in panic. ------- Stdout: ------- W180116 16:28:48.138581 157820 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:48.145014 157820 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:48.145042 157820 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:48.145048 157820 server/config.go:519 [n?] store 0: in-memory, size 0 B I180116 16:28:48.148107 157820 server/node.go:361 [n?] **** cluster f8371979-8bbb-42c2-9861-24b11bebe89a has been created I180116 16:28:48.148140 157820 server/server.go:934 [n?] 
**** add additional nodes by specifying --join=127.0.0.1:44949 I180116 16:28:48.149103 157820 storage/store.go:1209 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I180116 16:28:48.149154 157820 server/node.go:486 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=0, writes=0.00, bytesPerReplica={p10=3322.00 p25=3322.00 p50=3322.00 p75=3322.00 p90=3322.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00} I180116 16:28:48.149179 157820 server/node.go:339 [n1] node ID 1 initialized I180116 16:28:48.149239 157820 gossip/gossip.go:332 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44949" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:48.149337 157820 storage/stores.go:331 [n1] read 0 node addresses from persistent storage I180116 16:28:48.149445 157820 server/node.go:627 [n1] connecting to gossip network to verify cluster ID... 
I180116 16:28:48.149476 157820 server/node.go:652 [n1] node connected via gossip and verified as part of cluster "f8371979-8bbb-42c2-9861-24b11bebe89a" I180116 16:28:48.149502 157820 server/node.go:428 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes [] I180116 16:28:48.149561 157820 sql/distsql_physical_planner.go:122 [n1] creating DistSQLPlanner with address {tcp 127.0.0.1:44949} I180116 16:28:48.151288 158031 storage/replica_command.go:819 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2] I180116 16:28:48.152525 157820 server/server.go:1161 [n1] starting https server at 127.0.0.1:39351 I180116 16:28:48.152551 157820 server/server.go:1162 [n1] starting grpc/postgres server at 127.0.0.1:44949 I180116 16:28:48.152561 157820 server/server.go:1163 [n1] advertising CockroachDB node at 127.0.0.1:44949 I180116 16:28:48.159122 158031 storage/replica_command.go:819 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3] I180116 16:28:48.161934 157820 sql/event_log.go:115 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I180116 16:28:48.163240 158031 storage/replica_command.go:819 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4] I180116 16:28:48.166759 157820 sql/lease.go:348 [n1] publish: descID=12 (eventlog) version=2 mtime=2018-01-16 16:28:48.166589069 +0000 UTC I180116 16:28:48.167948 158031 storage/replica_command.go:819 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5] I180116 16:28:48.172582 158031 storage/replica_command.go:819 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6] I180116 16:28:48.181804 158031 storage/replica_command.go:819 
[split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7] I180116 16:28:48.183812 157820 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node} I180116 16:28:48.185801 158031 storage/replica_command.go:819 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8] I180116 16:28:48.190312 158031 storage/replica_command.go:819 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9] I180116 16:28:48.194182 158031 storage/replica_command.go:819 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10] I180116 16:28:48.195422 157820 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:node} I180116 16:28:48.198615 158031 storage/replica_command.go:819 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11] I180116 16:28:48.198916 157820 sql/event_log.go:115 [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node} I180116 16:28:48.202975 158031 storage/replica_command.go:819 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12] I180116 16:28:48.206413 158031 storage/replica_command.go:819 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13] I180116 16:28:48.211016 158031 storage/replica_command.go:819 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14] I180116 16:28:48.211425 157820 sql/event_log.go:115 [n1] Event: "alter_table", target: 4, info: {TableName:users Statement:ALTER TABLE system.users ADD COLUMN IF NOT EXISTS "isRole" BOOL NOT NULL DEFAULT false User:node MutationID:1 CascadeDroppedViews:[]} I180116 16:28:48.214201 157820 sql/lease.go:348 [n1] publish: descID=4 (users) 
version=2 mtime=2018-01-16 16:28:48.213891792 +0000 UTC I180116 16:28:48.215557 158031 storage/replica_command.go:819 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15] I180116 16:28:48.218811 158031 storage/replica_command.go:819 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16] I180116 16:28:48.223647 157820 sql/lease.go:348 [n1] publish: descID=4 (users) version=3 mtime=2018-01-16 16:28:48.223224643 +0000 UTC I180116 16:28:48.227994 158031 storage/replica_command.go:819 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17] I180116 16:28:48.228590 157820 sql/backfill.go:132 [n1] Running backfill for "users", v=3, m=1 I180116 16:28:48.231527 158031 storage/replica_command.go:819 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18] I180116 16:28:48.234972 158031 storage/replica_command.go:819 [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19] W180116 16:28:48.237337 158299 storage/intent_resolver.go:324 [n1,s1] failed to push during intent resolution: failed to push "split" id=710285b3 key=/Local/Range/Table/21/RangeDescriptor rw=true pri=0.03966754 iso=SERIALIZABLE stat=PENDING epo=0 ts=1516120128.234995043,0 orig=1516120128.234995043,0 max=1516120128.234995043,0 wto=false rop=false seq=3 I180116 16:28:48.238261 157820 sql/lease.go:348 [n1] publish: descID=4 (users) version=4 mtime=2018-01-16 16:28:48.238167096 +0000 UTC I180116 16:28:48.238595 158031 storage/replica_command.go:819 [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20] I180116 16:28:48.241703 157820 sql/event_log.go:115 [n1] Event: "finish_schema_change", target: 4, info: {MutationID:1} I180116 16:28:48.242876 157820 sql/lease.go:274 publish (count leases): descID=4 name=users version=3 count=1 I180116 16:28:48.273211 157820 server/server.go:1232 [n1] done 
ensuring all necessary migrations have run I180116 16:28:48.273253 157820 server/server.go:1235 [n1] serving sql connections I180116 16:28:48.275247 158355 sql/event_log.go:115 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:44949} Attrs: Locality: ServerVersion:1.1-8} ClusterID:f8371979-8bbb-42c2-9861-24b11bebe89a StartedAt:1516120128149480129 LastUp:1516120128149480129} W180116 16:28:48.279050 157820 server/status/runtime.go:109 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:48.281448 157820 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:48.281471 157820 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:48.281477 157820 server/config.go:519 [n?] store 0: in-memory, size 0 B W180116 16:28:48.281540 157820 gossip/gossip.go:1292 [n?] no incoming or outgoing connections I180116 16:28:48.281575 157820 server/server.go:936 [n?] no stores bootstrapped and --join flag specified, awaiting init command. I180116 16:28:48.291341 158204 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:44949 I180116 16:28:48.293688 158328 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:45903} I180116 16:28:48.297171 157820 storage/stores.go:331 [n?] read 0 node addresses from persistent storage I180116 16:28:48.297224 157820 storage/stores.go:350 [n?] wrote 1 node addresses to persistent storage I180116 16:28:48.297234 157820 server/node.go:627 [n?] connecting to gossip network to verify cluster ID... I180116 16:28:48.297258 157820 server/node.go:652 [n?] node connected via gossip and verified as part of cluster "f8371979-8bbb-42c2-9861-24b11bebe89a" I180116 16:28:48.297482 158351 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.298614 158350 kv/dist_sender.go:354 [n?] 
unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.299487 157820 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.300547 157820 server/node.go:332 [n?] new node allocated ID 2 I180116 16:28:48.300616 157820 gossip/gossip.go:332 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45903" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:48.300675 157820 server/node.go:414 [n2] node=2: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>] I180116 16:28:48.300697 157820 server/node.go:428 [n2] node=2: started with [] engine(s) and attributes [] I180116 16:28:48.300806 157820 sql/distsql_physical_planner.go:122 [n2] creating DistSQLPlanner with address {tcp 127.0.0.1:45903} I180116 16:28:48.301387 158376 storage/stores.go:350 [n1] wrote 1 node addresses to persistent storage I180116 16:28:48.303309 157820 server/server.go:1161 [n2] starting https server at 127.0.0.1:45763 I180116 16:28:48.303336 157820 server/server.go:1162 [n2] starting grpc/postgres server at 127.0.0.1:45903 I180116 16:28:48.303345 157820 server/server.go:1163 [n2] advertising CockroachDB node at 127.0.0.1:45903 I180116 16:28:48.303574 158392 server/node.go:608 [n2] bootstrapped store [n2,s2] I180116 16:28:48.304539 157820 server/server.go:1232 [n2] done ensuring all necessary migrations have run I180116 16:28:48.304569 157820 server/server.go:1235 [n2] serving sql connections I180116 16:28:48.309154 158336 sql/event_log.go:115 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:45903} Attrs: Locality: ServerVersion:1.1-8} ClusterID:f8371979-8bbb-42c2-9861-24b11bebe89a StartedAt:1516120128300687650 LastUp:1516120128300687650} W180116 16:28:48.311086 157820 server/status/runtime.go:109 
Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I180116 16:28:48.315364 157820 server/config.go:516 [n?] 1 storage engine initialized I180116 16:28:48.315389 157820 server/config.go:519 [n?] RocksDB cache size: 128 MiB I180116 16:28:48.315396 157820 server/config.go:519 [n?] store 0: in-memory, size 0 B W180116 16:28:48.315467 157820 gossip/gossip.go:1292 [n?] no incoming or outgoing connections I180116 16:28:48.315526 157820 server/server.go:936 [n?] no stores bootstrapped and --join flag specified, awaiting init command. I180116 16:28:48.326818 158411 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:44949 I180116 16:28:48.327185 158596 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:41455} I180116 16:28:48.328204 157820 storage/stores.go:331 [n?] read 0 node addresses from persistent storage I180116 16:28:48.328249 157820 storage/stores.go:350 [n?] wrote 2 node addresses to persistent storage I180116 16:28:48.328261 157820 server/node.go:627 [n?] connecting to gossip network to verify cluster ID... I180116 16:28:48.328284 157820 server/node.go:652 [n?] node connected via gossip and verified as part of cluster "f8371979-8bbb-42c2-9861-24b11bebe89a" I180116 16:28:48.328524 158618 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.329371 158617 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.330119 157820 kv/dist_sender.go:354 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I180116 16:28:48.330943 157820 server/node.go:332 [n?] 
new node allocated ID 3 I180116 16:28:48.331011 157820 gossip/gossip.go:332 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:41455" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:8 > I180116 16:28:48.331112 157820 server/node.go:414 [n3] node=3: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>] I180116 16:28:48.331133 157820 server/node.go:428 [n3] node=3: started with [] engine(s) and attributes [] I180116 16:28:48.331187 157820 sql/distsql_physical_planner.go:122 [n3] creating DistSQLPlanner with address {tcp 127.0.0.1:41455} I180116 16:28:48.332045 158569 storage/stores.go:350 [n1] wrote 2 node addresses to persistent storage I180116 16:28:48.332494 158378 storage/stores.go:350 [n2] wrote 2 node addresses to persistent storage I180116 16:28:48.333695 158621 server/node.go:608 [n3] bootstrapped store [n3,s3] I180116 16:28:48.334336 157820 server/server.go:1161 [n3] starting https server at 127.0.0.1:38685 I180116 16:28:48.334361 157820 server/server.go:1162 [n3] starting grpc/postgres server at 127.0.0.1:41455 I180116 16:28:48.334371 157820 server/server.go:1163 [n3] advertising CockroachDB node at 127.0.0.1:41455 I180116 16:28:48.336042 157820 server/server.go:1232 [n3] done ensuring all necessary migrations have run I180116 16:28:48.336068 157820 server/server.go:1235 [n3] serving sql connections I180116 16:28:48.336926 157837 storage/replica_raftstorage.go:540 [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot f64aec7c at index 16 I180116 16:28:48.338143 157820 testutils/testcluster/testcluster.go:528 [n1,s1] has 20 underreplicated ranges I180116 16:28:48.341425 158772 sql/event_log.go:115 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:41455} Attrs: Locality: ServerVersion:1.1-8} ClusterID:f8371979-8bbb-42c2-9861-24b11bebe89a StartedAt:1516120128331124840 LastUp:1516120128331124840} I180116 
16:28:48.347632 157837 storage/store.go:3567 [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms I180116 16:28:48.347635 157820 testutils/testcluster/testcluster.go:528 [n1,s1] has 20 underreplicated ranges I180116 16:28:48.347987 158803 storage/replica_raftstorage.go:746 [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=f64aec7c, encoded size=2476, 1 rocksdb batches, 6 log entries) I180116 16:28:48.348263 158803 storage/replica_raftstorage.go:752 [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms] I180116 16:28:48.348887 157837 storage/replica_command.go:1741 [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2] I180116 16:28:48.350793 157837 storage/replica.go:3185 [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3 I180116 16:28:48.351402 157837 storage/replica_raftstorage.go:540 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot e1f75307 at index 71 I180116 16:28:48.352589 158823 storage/raft_transport.go:459 [n2] raft transport stream to node 1 established I180116 16:28:48.361942 157837 storage/store.go:3567 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 54, log entries: 48, rate-limit: 8.0 MiB/sec, 1ms I180116 16:28:48.362387 158559 storage/replica_raftstorage.go:746 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 71 (id=e1f75307, encoded size=13369, 1 rocksdb batches, 48 log entries) I180116 16:28:48.363289 158559 storage/replica_raftstorage.go:752 [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms] I180116 16:28:48.363942 157837 storage/replica_command.go:1741 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor 
r1:/{Min-System/} [(n1,s1):1, next=2] I180116 16:28:48.366319 157837 storage/replica.go:3185 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3 I180116 16:28:48.366959 158032 storage/replica_raftstorage.go:540 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 00ee5272 at index 74 I180116 16:28:48.367888 158032 storage/store.go:3567 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 57, log entries: 51, rate-limit: 8.0 MiB/sec, 1ms I180116 16:28:48.367926 158765 storage/raft_transport.go:459 [n3] raft transport stream to node 1 established I180116 16:28:48.368247 158760 storage/replica_raftstorage.go:746 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 74 (id=00ee5272, encoded size=14974, 1 rocksdb batches, 51 log entries) I180116 16:28:48.368561 158760 storage/replica_raftstorage.go:752 [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms] I180116 16:28:48.369087 158032 storage/replica_command.go:1741 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3] I180116 16:28:48.369837 157820 testutils/testcluster/testcluster.go:528 [n1,s1] has 20 underreplicated ranges I180116 16:28:48.371442 158776 storage/replica.go:3185 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4 I180116 16:28:48.372556 157837 storage/replica_raftstorage.go:540 [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 3f2e384e at index 19 panic: test timed out after 4m0s goroutine 158843 [running]: testing.startAlarm.func1() /usr/local/go/src/testing/testing.go:1145 +0xf9 created by time.goFunc /usr/local/go/src/time/sleep.go:170 +0x44 goroutine 1 [chan receive]: testing.(*T).Run(0xc4200a0780, 0x230e293, 0x1e, 0x23c7f38, 0x8d4f01) 
/usr/local/go/src/testing/testing.go:790 +0x2fc testing.runTests.func1(0xc4200a0780) /usr/local/go/src/testing/testing.go:1004 +0x64 testing.tRunner(0xc4200a0780, 0xc4206a5db8) /usr/local/go/src/testing/testing.go:746 +0xd0 testing.runTests(0xc4202a2080, 0x33b3600, 0x38, 0x38, 0x0) /usr/local/go/src/testing/testing.go:1002 +0x2d8 testing.(*M).Run(0xc42092bf18, 0x23c81c8) /usr/local/go/src/testing/testing.go:921 +0x111 github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.TestMain(0xc4206a5f18) /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/main_test.go:31 +0xda main.main() github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/_test/_testmain.go:172 +0xdb goroutine 6 [chan receive]: github.com/cockroachdb/cockroach/pkg/util/log.(*loggingT).flushDaemon(0x3600940) /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1043 +0x81 created by github.com/cockroachdb/cockroach/pkg/util/log.init.0 /go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:581 +0xbf goroutine 23 [syscall, 4 minutes]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:131 +0xa6 os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:22 +0x22 created by os/signal.init.0 /usr/local/go/src/os/signal/signal_unix.go:28 +0x41 goroutine 148702 [chan receive]: testing.(*T).Run(0xc422d000f0, 0x22e6b00, 0x7, 0xc42041aec0, 0x1) /usr/local/go/src/testing/testing.go:790 +0x2fc github.com/cockroachdb/cockroach/pkg/ccl/sqlccl_test.TestRestoreDatabaseVersusTable(0xc422d000f0) /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup_test.go:2217 +0x808 testing.tRunner(0xc422d000f0, 0x23c7f38) /usr/local/go/src/testing/testing.go:746 +0xd0 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:789 +0x2de goroutine 52 [select, locked to thread]: runtime.gopark(0x23cbcc0, 0x0, 0x22e54ff, 0x6, 0x18, 0x1) /usr/local/go/src/runtime/proc.go:277 +0x12c runtime.selectgo(0xc42003ff50, 0xc42044ac00) /usr/local/go/src/runtime/select.go:395 +0x1138 runtime.ensureSigM.func1() 
/usr/local/go/src/runtime/signal_unix.go:511 +0x220 runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 goroutine 158708 [semacquire]: sync.runtime_notifyListWait(0xc422edce50, 0xc40000003b) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422edce40) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4224a1290, 0x33de160, 0xc421c0cd50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc421c0cd50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202db080, 0xc4225a1950, 0xc4202db070) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158505 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000036) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc4226530b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4226530b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec9a0, 0xc421bc1050, 0xc4208ec990) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158333 [select]: 
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*http2Server).keepalive(0xc421dfe160) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_server.go:935 +0x264 created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.newHTTP2Server /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_server.go:230 +0x8fa goroutine 158015 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc40000016b) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc4213c6a50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4213c6a50) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6c90, 0xc4228a1a70, 0xc4203c6c80) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158070 [select]: github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1(0x33de160, 0xc422d02a20) /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1023 +0x9b6 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4214b6660, 0xc4228a1a70, 0xc4202be240) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158396 [select]: 
github.com/cockroachdb/cockroach/pkg/server.(*Server).startSampleEnvironment.func1(0x33de160, 0xc420846ff0) /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:1416 +0x16c github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420e0a540, 0xc421bc1050, 0xc423a7ef80) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158468 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000033) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc423b6c7b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc423b6c7b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec500, 0xc421bc1050, 0xc4208ec4f0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 149011 [semacquire]: sync.runtime_notifyListWait(0xc421b20590, 0xc400000236) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc421b20580) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4221ce240, 0x33de160, 0xc4225f87e0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4225f87e0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 
+0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420400560, 0xc422db45a0, 0xc420400550) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 148982 [semacquire]: sync.runtime_notifyListWait(0xc421b20590, 0xc400000249) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc421b20580) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4221ce240, 0x33de160, 0xc42446a120) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc42446a120) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420400090, 0xc422db45a0, 0xc420400080) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158005 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc400000170) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc4229586f0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4229586f0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6ab0, 0xc4228a1a70, 0xc4203c6aa0) 
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158624 [select]: github.com/cockroachdb/cockroach/pkg/server.(*Server).refreshSettings.func2(0x33de160, 0xc42518a510) /go/src/github.com/cockroachdb/cockroach/pkg/server/settingsworker.go:112 +0x2db github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc423ab7b60, 0xc4225a1950, 0xc4203507e0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158440 [chan receive]: github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.muxListener.Accept(...) /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux/cmux.go:184 github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.(*muxListener).Accept(0xc421a56540, 0xc425609530, 0xc42475be80, 0x96d4e5, 0xc425609558) <autogenerated>:1 +0x65 crypto/tls.(*listener).Accept(0xc421a565a0, 0xc420014108, 0x2068180, 0x3397560, 0x22ae720) /usr/local/go/src/crypto/tls/tls.go:52 +0x37 net/http.(*Server).Serve(0xc42139be10, 0x33cf8a0, 0xc421a565a0, 0x0, 0x0) /usr/local/go/src/net/http/server.go:2695 +0x1b2 github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func5(0x33de160, 0xc4256094d0) /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:779 +0x42 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6880, 0xc4225a1950, 0xc421a565c0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158623 [select]: 
github.com/cockroachdb/cockroach/pkg/server.(*Node).startGossip.func1(0x33de160, 0xc42518a4e0) /go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:680 +0x2dd github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc423ab7b30, 0xc4225a1950, 0xc4203507c0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158370 [chan receive]: github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.muxListener.Accept(...) /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux/cmux.go:184 github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux.(*muxListener).Accept(0xc422adae20, 0x23cb0a0, 0xc4209328c0, 0x33e9f60, 0xc42324eaa0) <autogenerated>:1 +0x65 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).Serve(0xc4209328c0, 0x33dd2e0, 0xc422adae20, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:463 +0x196 github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func7(0x33de160, 0xc4214a0fc0) /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:795 +0x43 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420c96c20, 0xc421bc1050, 0xc422adaf40) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158496 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000024) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc422652ed0) 
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc422652ed0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec880, 0xc421bc1050, 0xc4208ec870) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158271 [semacquire]: sync.runtime_notifyListWait(0xc422422350, 0xc400000006) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422422340) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc421996240, 0x33de160, 0xc423b6c360) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc423b6c360) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec260, 0xc421bc1050, 0xc4208ec250) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158098 [select]: github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*http2Client).keepalive(0xc420c48d80) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_client.go:1266 +0x14b created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.newHTTP2Client /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_client.go:302 +0xe51 goroutine 158796 [select]: 
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).read(0xc4213d3270, 0xc4231bafd0, 0x5, 0x5, 0xf, 0xc4238fbb48, 0x8719e8) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:133 +0x28b github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc4213d3270, 0xc4231bafd0, 0x5, 0x5, 0x3, 0xc420022000, 0xc420022070) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:122 +0x67 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*transportReader).Read(0xc422026f30, 0xc4231bafd0, 0x5, 0x5, 0x8bcfce, 0xc421e010c8, 0xc421af6100) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:403 +0x55 io.ReadAtLeast(0x33c43a0, 0xc422026f30, 0xc4231bafd0, 0x5, 0x5, 0x5, 0xc4221e6580, 0xc421e01000, 0xc400000005) /usr/local/go/src/io/io.go:309 +0x86 io.ReadFull(0x33c43a0, 0xc422026f30, 0xc4231bafd0, 0x5, 0x5, 0x87495b, 0xc4238fbc40, 0x89ff20) /usr/local/go/src/io/io.go:327 +0x58 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*Stream).Read(0xc421e01000, 0xc4231bafd0, 0x5, 0x5, 0xc4238fbcc0, 0x88770a, 0xc4222663c0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:387 +0xbf github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*parser).recvMsg(0xc4231bafc0, 0x7fffffff, 0x60, 0x58, 0x220a4a0, 0xc4238fbdb8, 0xe0000000006, 0x58) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:270 +0x65 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.recv(0xc4231bafc0, 0x33dd960, 0x364bc88, 0xc421e01000, 0x33cdf60, 0x364bc88, 0x21c71a0, 0xc42040a4a0, 0x7fffffff, 0xc42378c7e0, ...) 
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:356 +0x4d github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*serverStream).RecvMsg(0xc4237e2680, 0x21c71a0, 0xc42040a4a0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:644 +0x10e github.com/cockroachdb/cockroach/pkg/storage.(*multiRaftRaftMessageBatchServer).Recv(0xc4208ecdc0, 0x33de160, 0xc422026fc0, 0xc424d00900) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft.pb.go:394 +0x62 github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch.func1.1.1(0x33e9e40, 0xc4208ecdc0, 0xc4221dc5a0, 0x33de160, 0xc422026fc0, 0x18, 0xfe67b0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:330 +0x83 github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch.func1.1(0x33de160, 0xc422026fc0) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:353 +0x5d github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ecde0, 0xc4228a1a70, 0xc422026f90) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158720 [select]: github.com/cockroachdb/cockroach/pkg/storage.(*Store).startGossip.func4(0x33de160, 0xc421c0d020) /go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1313 +0x3e9 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202db1d0, 0xc4225a1950, 0xc420f92640) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158821 [select]: github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch(0xc422d01950, 0x33e9e40, 
0xc420e0be60, 0x33cb720, 0xc422d01950) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:359 +0x215 github.com/cockroachdb/cockroach/pkg/storage._MultiRaft_RaftMessageBatch_Handler(0x22a9c60, 0xc422d01950, 0x33e7200, 0xc420a36980, 0x8580c8, 0x20) /go/src/github.com/cockroachdb/cockroach/pkg/storage/raft.pb.go:375 +0xb2 github.com/cockroachdb/cockroach/pkg/rpc.NewServerWithInterceptor.func2(0x22a9c60, 0xc422d01950, 0x33e7200, 0xc420a36980, 0xc421a57b80, 0x23c9b20, 0x40, 0x3f) /go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:187 +0x103 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc4209328c0, 0x33e92a0, 0xc4238e0160, 0xc423370900, 0xc422822390, 0x33a50c0, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:940 +0x2e8 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4209328c0, 0x33e92a0, 0xc4238e0160, 0xc423370900, 0x0) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1027 +0x14c1 github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4202cc500, 0xc4209328c0, 0x33e92a0, 0xc4238e0160, 0xc423370900) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:572 +0x9f created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1 /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:570 +0xa1 goroutine 158364 [chan receive]: github.com/cockroachdb/cockroach/pkg/util/netutil.MakeServer.func2(0x33de160, 0xc4208df9b0) /go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:100 +0x64 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc420c96b40, 0xc421bc1050, 0xc422adad40) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by 
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158668 [semacquire]: sync.runtime_notifyListWait(0xc422edce50, 0xc400000013) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422edce40) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4224a1290, 0x33de160, 0xc421c0c450) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc421c0c450) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202da9f0, 0xc4225a1950, 0xc4202da9d0) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 157984 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc400000169) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc4213c6990) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc4213c6990) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6780, 0xc4228a1a70, 0xc4203c6770) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158319 [select]: 
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1(0x33de160, 0xc421c0d140) /go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:514 +0x1bb github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4208ec080, 0xc4225a1950, 0xc4203de180) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158019 [semacquire]: sync.runtime_notifyListWait(0xc422242450, 0xc40000017c) /usr/local/go/src/runtime/sema.go:507 +0x110 sync.(*Cond).Wait(0xc422242440) /usr/local/go/src/sync/cond.go:56 +0x80 github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc4226081b0, 0x33de160, 0xc422d02690) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:197 +0x7c github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x33de160, 0xc422d02690) /go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x3e github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4203c6d10, 0xc4228a1a70, 0xc4203c6d00) /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:195 +0xf3 created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:188 +0xad goroutine 158596 [select]: github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).read(0xc421a615e0, 0xc422adb050, 0x5, 0x5, 0xc42042f301, 0xc421dfd830, 0x8719e8) /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:133 +0x28b github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc421a615e0, 0xc422adb050, 0x5, 0x5, 0x3, 0xc420024600, 0xc420024670) 
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:122 +0x67
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*transportReader).Read(0xc4245636e0, 0xc422adb050, 0x5, 0x5, 0x8bcfce, 0xc4201ce4c8, 0xc422a99580)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.gola
```

Please assign, take a look and update the issue accordingly.
server node go node asynchronously bootstrapping engine s server node go node started with engine s and attributes sql distsql physical planner go creating distsqlplanner with address tcp storage stores go wrote node addresses to persistent storage server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at server node go bootstrapped store server server go done ensuring all necessary migrations have run server server go serving sql connections sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion clusterid startedat lastup server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion server node go node asynchronously bootstrapping engine s server node go node started with engine s and 
attributes sql distsql physical planner go creating distsqlplanner with address tcp storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage server node go bootstrapped store server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica raftstorage go generated preemptive snapshot at index testutils testcluster testcluster go has underreplicated ranges sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion clusterid startedat lastup storage store go streamed snapshot to kv pairs log entries rate limit mib sec testutils testcluster testcluster go has underreplicated ranges storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage raft transport go raft transport stream to node established storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor min system storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot to kv pairs log entries rate limit mib sec storage raft transport go raft transport stream to node established storage 
replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor min system testutils testcluster testcluster go has underreplicated ranges storage replica go proposing add replica updated next storage replica raftstorage go generated preemptive snapshot at index panic test timed out after goroutine testing startalarm usr local go src testing testing go created by time gofunc usr local go src time sleep go goroutine testing t run usr local go src testing testing go testing runtests usr local go src testing testing go testing trunner usr local go src testing testing go testing runtests usr local go src testing testing go testing m run usr local go src testing testing go github com cockroachdb cockroach pkg ccl sqlccl testmain go src github com cockroachdb cockroach pkg ccl sqlccl main test go main main github com cockroachdb cockroach pkg ccl sqlccl test testmain go goroutine github com cockroachdb cockroach pkg util log loggingt flushdaemon go src github com cockroachdb cockroach pkg util log clog go created by github com cockroachdb cockroach pkg util log init go src github com cockroachdb cockroach pkg util log clog go goroutine os signal signal recv usr local go src runtime sigqueue go os signal loop usr local go src os signal signal unix go created by os signal init usr local go src os signal signal unix go goroutine testing t run usr local go src testing testing go github com cockroachdb cockroach pkg ccl sqlccl test testrestoredatabaseversustable go src github com cockroachdb cockroach pkg ccl sqlccl backup test go testing trunner usr local go src testing testing go created by testing t run usr local go src testing testing go goroutine runtime gopark usr local go src runtime proc go runtime selectgo usr local go src runtime select go runtime ensuresigm usr local go src runtime 
signal unix go runtime goexit usr local go src runtime asm s goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach vendor google golang org grpc transport keepalive go src github com cockroachdb cockroach vendor google golang org grpc transport server go created by github com cockroachdb cockroach vendor google golang org grpc transport go src github com cockroachdb cockroach vendor google golang org grpc transport server go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg 
storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg sql schemachangemanager start go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg server server startsampleenvironment go src github com cockroachdb cockroach pkg server server go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg 
storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg server server refreshsettings go src github com cockroachdb cockroach pkg server settingsworker go github com cockroachdb cockroach pkg util stop stopper 
runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach vendor github com cockroachdb cmux muxlistener accept go src github com cockroachdb cockroach vendor github com cockroachdb cmux cmux go github com cockroachdb cockroach vendor github com cockroachdb cmux muxlistener accept crypto tls listener accept usr local go src crypto tls tls go net http server serve usr local go src net http server go github com cockroachdb cockroach pkg server server start go src github com cockroachdb cockroach pkg server server go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg server node startgossip go src github com cockroachdb cockroach pkg server node go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach vendor github com cockroachdb cmux muxlistener accept go src github com cockroachdb cockroach vendor github com cockroachdb cmux cmux go github com cockroachdb cockroach vendor github com cockroachdb cmux muxlistener accept github com cockroachdb cockroach vendor google golang org grpc server serve go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach pkg server server start go src github com cockroachdb cockroach pkg server server go github com cockroachdb cockroach pkg util stop stopper runworker 
go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach vendor google golang org grpc transport keepalive go src github com cockroachdb cockroach vendor google golang org grpc transport client go created by github com cockroachdb cockroach vendor google golang org grpc transport go src github com cockroachdb cockroach vendor google golang org grpc transport client go goroutine github com cockroachdb cockroach vendor google golang org grpc transport recvbufferreader read go src github com cockroachdb 
cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc transport transportreader read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go io readatleast usr local go src io io go io readfull usr local go src io io go github com cockroachdb cockroach vendor google golang org grpc transport stream read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc parser recvmsg go src github com cockroachdb cockroach vendor google golang org grpc rpc util go github com cockroachdb cockroach vendor google golang org grpc recv go src github com cockroachdb cockroach vendor google golang org grpc rpc util go github com cockroachdb cockroach vendor google golang org grpc serverstream recvmsg go src github com cockroachdb cockroach vendor google golang org grpc stream go github com cockroachdb cockroach pkg storage multiraftraftmessagebatchserver recv go src github com cockroachdb cockroach pkg storage raft pb go github com cockroachdb cockroach pkg storage rafttransport raftmessagebatch go src github com cockroachdb cockroach pkg storage raft transport go github com cockroachdb cockroach pkg storage rafttransport raftmessagebatch go src github com cockroachdb cockroach pkg storage raft transport go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage store startgossip go src github com cockroachdb cockroach pkg 
storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage rafttransport raftmessagebatch go src github com cockroachdb cockroach pkg storage raft transport go github com cockroachdb cockroach pkg storage multiraft raftmessagebatch handler go src github com cockroachdb cockroach pkg storage raft pb go github com cockroachdb cockroach pkg rpc newserverwithinterceptor go src github com cockroachdb cockroach pkg rpc context go github com cockroachdb cockroach vendor google golang org grpc server processstreamingrpc go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server handlestream go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go created by github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go goroutine github com cockroachdb cockroach pkg util netutil makeserver go src github com cockroachdb cockroach pkg util netutil net go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb 
cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go 
created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach vendor google golang org grpc transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc transport transportreader read go src github com cockroachdb cockroach vendor google gola please assign take a look and update the issue accordingly
1
43,855
7,080,118,941
IssuesEvent
2018-01-10 12:17:46
openebs/openebs
https://api.github.com/repos/openebs/openebs
closed
Create document for running OpenEBS on multi-node CentOS 7 OpenShift Cluster
documentation
Choose one: FEATURE REQUEST **What happened**: New feature to document steps. **What you expected to happen**: Document steps for running OpenEBS on multi-node CentOS 7 OpenShift Cluster
1.0
Create document for running OpenEBS on multi-node CentOS 7 OpenShift Cluster - Choose one: FEATURE REQUEST **What happened**: New feature to document steps. **What you expected to happen**: Document steps for running OpenEBS on multi-node CentOS 7 OpenShift Cluster
non_test
create document for running openebs on multi node centos openshift cluster choose one feature request what happened new feature to document steps what you expected to happen document steps for running openebs on multi node centos openshift cluster
0
333,848
29,812,833,983
IssuesEvent
2023-06-16 16:24:15
privacy-tech-lab/privacy-pioneer
https://api.github.com/repos/privacy-tech-lab/privacy-pioneer
opened
Check that extension is ready for recording data for data science study
testing
As we want to analyze a set of, say, 500 sites, is our extension ready to reliably record the data? Can we export it reliably? As discussed, @jjeancharles will take a close look and make any necessary changes.
1.0
Check that extension is ready for recording data for data science study - As we want to analyze a set of, say, 500 sites, is our extension ready to reliably record the data? Can we export it reliably? As discussed, @jjeancharles will take a close look and make any necessary changes.
test
check that extension is ready for recording data for data science study as we want to analyze a set of say sites is our extension ready to reliably record the data can we export it reliably as discussed jjeancharles will take a close look and make any necessary changes
1
331,196
28,606,054,591
IssuesEvent
2023-04-24 00:39:47
harrisonfloam/crypto-trading-algorithm
https://api.github.com/repos/harrisonfloam/crypto-trading-algorithm
closed
Add methods to testing framework
testing
Testing framework has been created but needs work. Current methods need to be reviewed, new methods need to be added to test other parts of CryptoTrader. Figure out how to use unittest library to test methods using assert statements.
1.0
Add methods to testing framework - Testing framework has been created but needs work. Current methods need to be reviewed, new methods need to be added to test other parts of CryptoTrader. Figure out how to use unittest library to test methods using assert statements.
test
add methods to testing framework testing framework has been created but needs work current methods need to be reviewed new methods need to be added to test other parts of cryptotrader figure out how to use unittest library to test methods using assert statements
1
311,242
26,779,156,724
IssuesEvent
2023-01-31 19:36:27
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Chrome UI Functional Tests.test/functional/apps/dashboard_elements/index·ts - dashboard elements dashboard elements "after all" hook: afterTestSuite.trigger in "dashboard elements"
Team:Presentation failed-test
A test failed on a tracked branch ``` NoSuchSessionError: invalid session id at Object.throwDecodedError (node_modules/selenium-webdriver/lib/error.js:522:15) at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:589:13) at Executor.execute (node_modules/selenium-webdriver/lib/http.js:514:28) at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at Task.exec (test/functional/services/remote/prevent_parallel_calls.ts:28:20) { remoteStacktrace: '#0 0x5632d74bd2c3 <unknown>\n' + '#1 0x5632d72c6700 <unknown>\n' + '#2 0x5632d72f2067 <unknown>\n' + '#3 0x5632d731de3c <unknown>\n' + '#4 0x5632d731ac90 <unknown>\n' + '#5 0x5632d731a35b <unknown>\n' + '#6 0x5632d729aab4 <unknown>\n' + '#7 0x5632d729b8a3 <unknown>\n' + '#8 0x5632d750b18e <unknown>\n' + '#9 0x5632d750e622 <unknown>\n' + '#10 0x5632d74f1aae <unknown>\n' + '#11 0x5632d750f2a3 <unknown>\n' + '#12 0x5632d74e5ecf <unknown>\n' + '#13 0x5632d729a55c <unknown>\n' + '#14 0x7f7c3584c083 <unknown>\n' } ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22680#0183f606-d795-4a7d-8714-b5d7165024ca) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/dashboard_elements/index·ts","test.name":"dashboard elements dashboard elements \"after all\" hook: afterTestSuite.trigger in \"dashboard elements\"","test.failCount":1}} -->
1.0
Failing test: Chrome UI Functional Tests.test/functional/apps/dashboard_elements/index·ts - dashboard elements dashboard elements "after all" hook: afterTestSuite.trigger in "dashboard elements" - A test failed on a tracked branch ``` NoSuchSessionError: invalid session id at Object.throwDecodedError (node_modules/selenium-webdriver/lib/error.js:522:15) at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:589:13) at Executor.execute (node_modules/selenium-webdriver/lib/http.js:514:28) at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at Task.exec (test/functional/services/remote/prevent_parallel_calls.ts:28:20) { remoteStacktrace: '#0 0x5632d74bd2c3 <unknown>\n' + '#1 0x5632d72c6700 <unknown>\n' + '#2 0x5632d72f2067 <unknown>\n' + '#3 0x5632d731de3c <unknown>\n' + '#4 0x5632d731ac90 <unknown>\n' + '#5 0x5632d731a35b <unknown>\n' + '#6 0x5632d729aab4 <unknown>\n' + '#7 0x5632d729b8a3 <unknown>\n' + '#8 0x5632d750b18e <unknown>\n' + '#9 0x5632d750e622 <unknown>\n' + '#10 0x5632d74f1aae <unknown>\n' + '#11 0x5632d750f2a3 <unknown>\n' + '#12 0x5632d74e5ecf <unknown>\n' + '#13 0x5632d729a55c <unknown>\n' + '#14 0x7f7c3584c083 <unknown>\n' } ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22680#0183f606-d795-4a7d-8714-b5d7165024ca) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/dashboard_elements/index·ts","test.name":"dashboard elements dashboard elements \"after all\" hook: afterTestSuite.trigger in \"dashboard elements\"","test.failCount":1}} -->
test
failing test chrome ui functional tests test functional apps dashboard elements index·ts dashboard elements dashboard elements after all hook aftertestsuite trigger in dashboard elements a test failed on a tracked branch nosuchsessionerror invalid session id at object throwdecodederror node modules selenium webdriver lib error js at parsehttpresponse node modules selenium webdriver lib http js at executor execute node modules selenium webdriver lib http js at runmicrotasks at processticksandrejections node internal process task queues at task exec test functional services remote prevent parallel calls ts remotestacktrace n n n n n n n n n n n n n n n first failure
1
217,465
16,855,780,249
IssuesEvent
2021-06-21 06:25:23
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
raftstore::test_merge::test_merge_isloated_stale_learner failed
component/test-bench
raftstore::test_merge::test_merge_isloated_stale_learner Latest failed builds: https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/21670/consoleFull
1.0
raftstore::test_merge::test_merge_isloated_stale_learner failed - raftstore::test_merge::test_merge_isloated_stale_learner Latest failed builds: https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/21670/consoleFull
test
raftstore test merge test merge isloated stale learner failed raftstore test merge test merge isloated stale learner latest failed builds
1
209,820
16,062,558,020
IssuesEvent
2021-04-23 14:26:13
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Upgrade Nightwatch from 0.9 to 1.x
VSP-testing-team
Version 1.4 of Nightwatch claims to improve network stability and performance. >This version significantly improves the network stability and performance of the test runner, by improving the underlying http retry mechanism and error detection and reporting of API commands – a particularly important update for when using cloud testing services. - https://github.com/nightwatchjs/nightwatch/releases/tag/v1.4.0 [We're currently using `nightwatch@0.9.21`](https://github.com/department-of-veterans-affairs/vets-website/blob/4ebb0a9a79733ccb4731ebdd16ffded7e8e09175/yarn.lock#L13515-L13529) and the [latest release](https://github.com/nightwatchjs/nightwatch/releases) is `1.6.3`. Perhaps we should try upgrading from major version `0` to` major version `1`. ## Links - [PR from June 2020 to upgrade from 0.9.16 to 0.9.21](https://github.com/department-of-veterans-affairs/vets-website/pull/13066) - [Release notes for `nightwatch@1.0.1`](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.0.1) - [Release notes for `nightwatch@1.1.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.1.0) - [Release notes for `nightwatch@1.2.1](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.2.1) - [Release notes for `nightwatch@1.3.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.3.0) - [Release notes for `nightwatch@1.4.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.4.0) - [Release notes for `nightwatch@1.5.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.5.0) - [Release notes for `nightwatch@1.6.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.6.0) - [Migrating to Nightwatch 1.0](https://github.com/nightwatchjs/nightwatch/wiki/Migrating-to-Nightwatch-1.0) ## Branches - [`upgrade-nightwatch-from-0.9-to-1.x`](https://github.com/department-of-veterans-affairs/vets-website/compare/upgrade-nightwatch-from-0.9-to-1.x)
1.0
Upgrade Nightwatch from 0.9 to 1.x - Version 1.4 of Nightwatch claims to improve network stability and performance. >This version significantly improves the network stability and performance of the test runner, by improving the underlying http retry mechanism and error detection and reporting of API commands – a particularly important update for when using cloud testing services. - https://github.com/nightwatchjs/nightwatch/releases/tag/v1.4.0 [We're currently using `nightwatch@0.9.21`](https://github.com/department-of-veterans-affairs/vets-website/blob/4ebb0a9a79733ccb4731ebdd16ffded7e8e09175/yarn.lock#L13515-L13529) and the [latest release](https://github.com/nightwatchjs/nightwatch/releases) is `1.6.3`. Perhaps we should try upgrading from major version `0` to` major version `1`. ## Links - [PR from June 2020 to upgrade from 0.9.16 to 0.9.21](https://github.com/department-of-veterans-affairs/vets-website/pull/13066) - [Release notes for `nightwatch@1.0.1`](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.0.1) - [Release notes for `nightwatch@1.1.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.1.0) - [Release notes for `nightwatch@1.2.1](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.2.1) - [Release notes for `nightwatch@1.3.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.3.0) - [Release notes for `nightwatch@1.4.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.4.0) - [Release notes for `nightwatch@1.5.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.5.0) - [Release notes for `nightwatch@1.6.0](https://github.com/nightwatchjs/nightwatch/releases/tag/v1.6.0) - [Migrating to Nightwatch 1.0](https://github.com/nightwatchjs/nightwatch/wiki/Migrating-to-Nightwatch-1.0) ## Branches - [`upgrade-nightwatch-from-0.9-to-1.x`](https://github.com/department-of-veterans-affairs/vets-website/compare/upgrade-nightwatch-from-0.9-to-1.x)
test
upgrade nightwatch from to x version of nightwatch claims to improve network stability and performance this version significantly improves the network stability and performance of the test runner by improving the underlying http retry mechanism and error detection and reporting of api commands – a particularly important update for when using cloud testing services and the is perhaps we should try upgrading from major version to major version links branches
1
36,739
4,757,911,462
IssuesEvent
2016-10-24 17:58:30
naturenet/naturenet_ios
https://api.github.com/repos/naturenet/naturenet_ios
closed
Define or remove green "Open" and "Completed" words on Project screens
design question usability
![image2 1](https://cloud.githubusercontent.com/assets/21040279/17683815/4463e364-6324-11e6-94c6-0e2217b77358.PNG) ![image1 1](https://cloud.githubusercontent.com/assets/21040279/17683818/46136f0e-6324-11e6-959d-c2211568fae0.PNG) It isn't clear what these mean. I see both Open and Completed (green check) associated with various projects. Doesn't seem to correspond with whether I have submitted any photos...
1.0
Define or remove green "Open" and "Completed" words on Project screens - ![image2 1](https://cloud.githubusercontent.com/assets/21040279/17683815/4463e364-6324-11e6-94c6-0e2217b77358.PNG) ![image1 1](https://cloud.githubusercontent.com/assets/21040279/17683818/46136f0e-6324-11e6-959d-c2211568fae0.PNG) It isn't clear what these mean. I see both Open and Completed (green check) associated with various projects. Doesn't seem to correspond with whether I have submitted any photos...
non_test
define or remove green open and completed words on project screens it isn t clear what these mean i see both open and completed green check associated with various projects doesn t seem to correspond with whether i have submitted any photos
0
420,123
12,233,567,072
IssuesEvent
2020-05-04 11:53:47
cdnjs/cdnjs
https://api.github.com/repos/cdnjs/cdnjs
closed
[Request] Add file-upload-with-preview
Low Priority 🏷 Library Request
**Library name:** file-upload-with-preview **Library description:** This is a simple frontend utility to help the file-upload process on your website. It is written in pure JavaScript, has no dependencies, and is a small 13.55 kB (gzipped). **Git repository url:** https://github.com/promosis/file-upload-with-preview **npm package name or url** (if there is one): file-upload-with-preview **License (List them all if it's multiple):** MIT **Official homepage:** https://promosis.github.io/file-upload-with-preview/ **Wanna say something? Leave message here:** ===================== Notes from cdnjs maintainer(please remove this section after you read it): Please read the [README.md](https://github.com/cdnjs/cdnjs#cdnjs-library-repository) and [CONTRIBUTING.md](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md) document first. We encourage you to add a library via sending pull request, it'll be faster than just opening a request issue, since there are tons of issues, please wait with patience, and please don't forget to read the guidelines for contributing, thanks!! <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/50444273-request-add-file-upload-with-preview?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github). </bountysource-plugin>
1.0
[Request] Add file-upload-with-preview - **Library name:** file-upload-with-preview **Library description:** This is a simple frontend utility to help the file-upload process on your website. It is written in pure JavaScript, has no dependencies, and is a small 13.55 kB (gzipped). **Git repository url:** https://github.com/promosis/file-upload-with-preview **npm package name or url** (if there is one): file-upload-with-preview **License (List them all if it's multiple):** MIT **Official homepage:** https://promosis.github.io/file-upload-with-preview/ **Wanna say something? Leave message here:** ===================== Notes from cdnjs maintainer(please remove this section after you read it): Please read the [README.md](https://github.com/cdnjs/cdnjs#cdnjs-library-repository) and [CONTRIBUTING.md](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md) document first. We encourage you to add a library via sending pull request, it'll be faster than just opening a request issue, since there are tons of issues, please wait with patience, and please don't forget to read the guidelines for contributing, thanks!! <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/50444273-request-add-file-upload-with-preview?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F32893&utm_medium=issues&utm_source=github). </bountysource-plugin>
non_test
add file upload with preview library name file upload with preview library description this is a simple frontend utility to help the file upload process on your website it is written in pure javascript has no dependencies and is a small kb gzipped git repository url npm package name or url if there is one file upload with preview license list them all if it s multiple mit official homepage wanna say something leave message here notes from cdnjs maintainer please remove this section after you read it please read the and document first we encourage you to add a library via sending pull request it ll be faster than just opening a request issue since there are tons of issues please wait with patience and please don t forget to read the guidelines for contributing thanks want to back this issue we accept bounties via
0
191,455
6,829,121,915
IssuesEvent
2017-11-08 22:55:58
18F/web-design-standards
https://api.github.com/repos/18F/web-design-standards
closed
Link underlining control in the footer is too coarse
[Component] Footer [Priority] Minor
Currently we disable link underlining in the footer with ``` .usa-footer a { font-weight: 400; text-decoration: none; } ``` This rule is too coarse, and should be better scoped to allow underlining in a case like https://federalist-proxy.app.cloud.gov/preview/18f/web-design-standards-docs/update-footer/ (See the `Federalist` link in the footer graf.) Where do we _need_ to disable underlining and how can we scope to just those instances?
1.0
Link underlining control in the footer is too coarse - Currently we disable link underlining in the footer with ``` .usa-footer a { font-weight: 400; text-decoration: none; } ``` This rule is too coarse, and should be better scoped to allow underlining in a case like https://federalist-proxy.app.cloud.gov/preview/18f/web-design-standards-docs/update-footer/ (See the `Federalist` link in the footer graf.) Where do we _need_ to disable underlining and how can we scope to just those instances?
non_test
link underlining control in the footer is too coarse currently we disable link underlining in the footer with usa footer a font weight text decoration none this rule is too coarse and should be better scoped to allow underlining in a case like see the federalist link in the footer graf where do we need to disable underlining and how can we scope to just those instances
0
237,491
19,649,770,693
IssuesEvent
2022-01-10 04:43:01
Amog-OS/AmogOS
https://api.github.com/repos/Amog-OS/AmogOS
closed
Boot with QEMU? [iMac i386]
third-party issue needs input and testing
Can you boot AmogOS with [QEMU](https://www.qemu.org/)? After downloading [AmogOS-1.4.1-x86_64.iso](https://github.com/Amog-OS/AmogOS/releases/download/x64-1.4.1/AmogOS-1.4.1-x86_64.iso) from your website, I tried to use QEMU to run it on my computer (I am on MacOS). I ran the following command in the terminal and I got no results, it showed no GUI and gave me no errors: ```sh qemu-system-x86_64 -boot d -cdrom AmogOS-1.4.1-x86_64.iso -m 1024 ``` How can I boot AmogOS with QEMU?
1.0
Boot with QEMU? [iMac i386] - Can you boot AmogOS with [QEMU](https://www.qemu.org/)? After downloading [AmogOS-1.4.1-x86_64.iso](https://github.com/Amog-OS/AmogOS/releases/download/x64-1.4.1/AmogOS-1.4.1-x86_64.iso) from your website, I tried to use QEMU to run it on my computer (I am on MacOS). I ran the following command in the terminal and I got no results, it showed no GUI and gave me no errors: ```sh qemu-system-x86_64 -boot d -cdrom AmogOS-1.4.1-x86_64.iso -m 1024 ``` How can I boot AmogOS with QEMU?
test
boot with qemu can you boot amogos with after downloading from your website i tried to use qemu to run it on my computer i am on macos i ran the following command in the terminal and i got no results it showed no gui and gave me no errors sh qemu system boot d cdrom amogos iso m how can i boot amogos with qemu
1
308,129
26,578,960,339
IssuesEvent
2023-01-22 07:09:11
KendallDoesCoding/mogul-christmas
https://api.github.com/repos/KendallDoesCoding/mogul-christmas
closed
[TESTING] Does the user get ads on the website?
enhancement help wanted good first issue feedback testings 🟩 priority: low EddieHub:good-first-issue 🏁 status: ready for dev 🔢 points: 2 todo Hacktoberfest-Accepted
In the Important section in the README.md, I suggested that the user uses a adblocker or they will get ads on the website. I just went through this because when I started playing a song, my adblocker showed that it is blocking {number} of ads. It make sense that there will be ads, as this is using the YouTube API, and it is actually embedding the christmas album (but invisible). Although, I just want to confirm that the user does get ads on the website and the information provided in the README is valid. Incase, the user doesn't get any ads. I will remove that from the README.md. I require around 4-5 people to do a 5-10 minute analysis and tell me if they get any ads or no.
1.0
[TESTING] Does the user get ads on the website? - In the Important section in the README.md, I suggested that the user uses a adblocker or they will get ads on the website. I just went through this because when I started playing a song, my adblocker showed that it is blocking {number} of ads. It make sense that there will be ads, as this is using the YouTube API, and it is actually embedding the christmas album (but invisible). Although, I just want to confirm that the user does get ads on the website and the information provided in the README is valid. Incase, the user doesn't get any ads. I will remove that from the README.md. I require around 4-5 people to do a 5-10 minute analysis and tell me if they get any ads or no.
test
does the user get ads on the website in the important section in the readme md i suggested that the user uses a adblocker or they will get ads on the website i just went through this because when i started playing a song my adblocker showed that it is blocking number of ads it make sense that there will be ads as this is using the youtube api and it is actually embedding the christmas album but invisible although i just want to confirm that the user does get ads on the website and the information provided in the readme is valid incase the user doesn t get any ads i will remove that from the readme md i require around people to do a minute analysis and tell me if they get any ads or no
1
449,753
31,859,630,984
IssuesEvent
2023-09-15 09:57:05
microsoft/vcpkg
https://api.github.com/repos/microsoft/vcpkg
closed
Manifest docs point to JSON schema URL that no longer exists:
category:documentation
See: https://vcpkg.io/en/docs/users/manifests.html#simple-example-manifest The `$schema` property is a 404: ```json { "$schema": "https://raw.githubusercontent.com/microsoft/vcpkg/master/scripts/vcpkg.schema.json", } ``` Where has this gone? EDIT: Nevermind, it seems to be updated in the repo, so the website is just outdated - https://github.com/microsoft/vcpkg/blob/master/docs/users/manifests.md#simple-example-manifest ```json { "$schema": "https://raw.githubusercontent.com/microsoft/vcpkg-tool/main/docs/vcpkg.schema.json", "name": "my-application", } ```
1.0
Manifest docs point to JSON schema URL that no longer exists: - See: https://vcpkg.io/en/docs/users/manifests.html#simple-example-manifest The `$schema` property is a 404: ```json { "$schema": "https://raw.githubusercontent.com/microsoft/vcpkg/master/scripts/vcpkg.schema.json", } ``` Where has this gone? EDIT: Nevermind, it seems to be updated in the repo, so the website is just outdated - https://github.com/microsoft/vcpkg/blob/master/docs/users/manifests.md#simple-example-manifest ```json { "$schema": "https://raw.githubusercontent.com/microsoft/vcpkg-tool/main/docs/vcpkg.schema.json", "name": "my-application", } ```
non_test
manifest docs point to json schema url that no longer exists see the schema property is a json schema where has this gone edit nevermind it seems to be updated in the repo so the website is just outdated json schema name my application
0
236,275
19,528,201,006
IssuesEvent
2021-12-30 11:55:41
Marfeel/testXB
https://api.github.com/repos/Marfeel/testXB
closed
thread 5
testLabel
### Discussed in https://github.com/Marfeel/testXB/discussions/17 <div type='discussions-op-text'> <sup>Originally posted by **xbeumala** December 29, 2021</sup> adsfa a asdfasf asfasdf asdfasfsaf</div>
1.0
thread 5 - ### Discussed in https://github.com/Marfeel/testXB/discussions/17 <div type='discussions-op-text'> <sup>Originally posted by **xbeumala** December 29, 2021</sup> adsfa a asdfasf asfasdf asdfasfsaf</div>
test
thread discussed in originally posted by xbeumala december adsfa a asdfasf asfasdf asdfasfsaf
1
25,725
7,744,066,077
IssuesEvent
2018-05-29 14:29:25
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
Issues running tensorflow-GPU: Couldn't open CUDA library libcuda.so.1.
stat:community support type:build/install
Have I written custom code: No OS Platform and Distribution: Scientific Linux release 6.9 (Carbon) TensorFlow installed from: wheel TensorFlow version: 0.11.0rc0 Bazel version: N/A CUDA/cuDNN version: 7.5 / N/A GPU model and memory: Tesla K20m 5GB Exact command to reproduce: Error occurs when importing tensorflow I am having issues running tensorflow-gpu on the cluster computer at my university. There are a number of Tesla K20m GPUs available. The system only has CUDA 7.5 installed so I installed tensorflow in a conda environment using a wheel from version 0.11 which supports CUDA 7.5. I also needed to use the dirty trick posted [here](https://stackoverflow.com/questions/33655731/error-while-importing-tensorflow-in-python2-7-in-ubuntu-12-04-glibc-2-17-not-f?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) to get this wheel to work, as we have GLIC_2.12 installed. However I don't think the above is causing the problem I am now facing. When I try and run Tensorflow I get the following error: ``` I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally I tensorflow/stream_executor/dso_loader.cc:105] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH: /cm/shared/apps/cuda/7.5/lib64:/cm/shared/apps/cuda/7.5/targets/x86_64-linux/lib I tensorflow/stream_executor/cuda/cuda_dnn.cc:3448] Unable to load cuDNN DSO I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally I tensorflow/stream_executor/dso_loader.cc:105] Couldn't open CUDA library libcuda.so.1. 
LD_LIBRARY_PATH: /its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/lib/x86_64-linux-gnu/:/its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/usr/lib64/:/cm/shared/apps/cuda/7.5/lib64:/cm/shared/apps/cuda/7.5/targets/x86_64-linux/lib I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:160] hostname: node152 I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:185] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] kernel reported version is: Permission denied: could not open driver version path for reading: /proc/driver/nvidia/version I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1080] LD_LIBRARY_PATH: /its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/lib/x86_64-linux-gnu/:/its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/usr/lib64/:/cm/shared/apps/cuda/7.5/lib64:/cm/shared/apps/cuda/7.5/targets/x86_64-linux/lib I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1081] failed to find libcuda.so on this system: Failed precondition: could not dlopen DSO: libcuda.so.1; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally E tensorflow/stream_executor/cuda/cuda_driver.cc:491] failed call to cuInit: CUDA_ERROR_NO_DEVICE I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:140] kernel driver does not appear to be running on this host (node152): /proc/driver/nvidia/version does not exist ``` I know cuDNN isn't installed on the system, but it seems the fatal error is the fact the tenorflow cannot find libcuda.so.1. Running `locate libcuda*` on a login node returns the following: `/usr/lib64/nvidia/libcuda.so /usr/lib64/nvidia/libcuda.so.1 /usr/lib64/nvidia/libcuda.so.384.98` I can also find libcuda.so if I navigate to the `/cm/shared/apps/conda/7.5/lib64/stubs`. 
However if I add `locate libcuda*` to the job I submit to the GPU queue it returns nothing. So adding the `/usr/lib64/nvidia` to my path does not solve the issue. It seems that the GPU nodes do not have access to llibcuda.so leading CUDA to believe there is no GPU device. Am I missing some additional configuration that I need to do to get Tensorflow working? Or is this more likely an issue with the underlying CUDA installation that I will need to get an system admin to help with?
1.0
Issues running tensorflow-GPU: Couldn't open CUDA library libcuda.so.1. - Have I written custom code: No OS Platform and Distribution: Scientific Linux release 6.9 (Carbon) TensorFlow installed from: wheel TensorFlow version: 0.11.0rc0 Bazel version: N/A CUDA/cuDNN version: 7.5 / N/A GPU model and memory: Tesla K20m 5GB Exact command to reproduce: Error occurs when importing tensorflow I am having issues running tensorflow-gpu on the cluster computer at my university. There are a number of Tesla K20m GPUs available. The system only has CUDA 7.5 installed so I installed tensorflow in a conda environment using a wheel from version 0.11 which supports CUDA 7.5. I also needed to use the dirty trick posted [here](https://stackoverflow.com/questions/33655731/error-while-importing-tensorflow-in-python2-7-in-ubuntu-12-04-glibc-2-17-not-f?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) to get this wheel to work, as we have GLIC_2.12 installed. However I don't think the above is causing the problem I am now facing. When I try and run Tensorflow I get the following error: ``` I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally I tensorflow/stream_executor/dso_loader.cc:105] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH: /cm/shared/apps/cuda/7.5/lib64:/cm/shared/apps/cuda/7.5/targets/x86_64-linux/lib I tensorflow/stream_executor/cuda/cuda_dnn.cc:3448] Unable to load cuDNN DSO I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally I tensorflow/stream_executor/dso_loader.cc:105] Couldn't open CUDA library libcuda.so.1. 
LD_LIBRARY_PATH: /its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/lib/x86_64-linux-gnu/:/its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/usr/lib64/:/cm/shared/apps/cuda/7.5/lib64:/cm/shared/apps/cuda/7.5/targets/x86_64-linux/lib I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:160] hostname: node152 I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:185] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] kernel reported version is: Permission denied: could not open driver version path for reading: /proc/driver/nvidia/version I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1080] LD_LIBRARY_PATH: /its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/lib/x86_64-linux-gnu/:/its/home/tjb32/new-tensorflow-workspace/glib-download/libc6_2.17/usr/lib64/:/cm/shared/apps/cuda/7.5/lib64:/cm/shared/apps/cuda/7.5/targets/x86_64-linux/lib I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1081] failed to find libcuda.so on this system: Failed precondition: could not dlopen DSO: libcuda.so.1; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally E tensorflow/stream_executor/cuda/cuda_driver.cc:491] failed call to cuInit: CUDA_ERROR_NO_DEVICE I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:140] kernel driver does not appear to be running on this host (node152): /proc/driver/nvidia/version does not exist ``` I know cuDNN isn't installed on the system, but it seems the fatal error is the fact the tenorflow cannot find libcuda.so.1. Running `locate libcuda*` on a login node returns the following: `/usr/lib64/nvidia/libcuda.so /usr/lib64/nvidia/libcuda.so.1 /usr/lib64/nvidia/libcuda.so.384.98` I can also find libcuda.so if I navigate to the `/cm/shared/apps/conda/7.5/lib64/stubs`. 
However if I add `locate libcuda*` to the job I submit to the GPU queue it returns nothing. So adding the `/usr/lib64/nvidia` to my path does not solve the issue. It seems that the GPU nodes do not have access to llibcuda.so leading CUDA to believe there is no GPU device. Am I missing some additional configuration that I need to do to get Tensorflow working? Or is this more likely an issue with the underlying CUDA installation that I will need to get an system admin to help with?
non_test
issues running tensorflow gpu couldn t open cuda library libcuda so have i written custom code no os platform and distribution scientific linux release carbon tensorflow installed from wheel tensorflow version bazel version n a cuda cudnn version n a gpu model and memory tesla exact command to reproduce error occurs when importing tensorflow i am having issues running tensorflow gpu on the cluster computer at my university there are a number of tesla gpus available the system only has cuda installed so i installed tensorflow in a conda environment using a wheel from version which supports cuda i also needed to use the dirty trick posted to get this wheel to work as we have glic installed however i don t think the above is causing the problem i am now facing when i try and run tensorflow i get the following error i tensorflow stream executor dso loader cc successfully opened cuda library libcublas so locally i tensorflow stream executor dso loader cc couldn t open cuda library libcudnn so ld library path cm shared apps cuda cm shared apps cuda targets linux lib i tensorflow stream executor cuda cuda dnn cc unable to load cudnn dso i tensorflow stream executor dso loader cc successfully opened cuda library libcufft so locally i tensorflow stream executor dso loader cc couldn t open cuda library libcuda so ld library path its home new tensorflow workspace glib download lib linux gnu its home new tensorflow workspace glib download usr cm shared apps cuda cm shared apps cuda targets linux lib i tensorflow stream executor cuda cuda diagnostics cc hostname i tensorflow stream executor cuda cuda diagnostics cc libcuda reported version is not found was unable to find libcuda so dso loaded into this program i tensorflow stream executor cuda cuda diagnostics cc kernel reported version is permission denied could not open driver version path for reading proc driver nvidia version i tensorflow stream executor cuda cuda gpu executor cc ld library path its home new tensorflow 
workspace glib download lib linux gnu its home new tensorflow workspace glib download usr cm shared apps cuda cm shared apps cuda targets linux lib i tensorflow stream executor cuda cuda gpu executor cc failed to find libcuda so on this system failed precondition could not dlopen dso libcuda so dlerror libcuda so cannot open shared object file no such file or directory i tensorflow stream executor dso loader cc successfully opened cuda library libcurand so locally e tensorflow stream executor cuda cuda driver cc failed call to cuinit cuda error no device i tensorflow stream executor cuda cuda diagnostics cc kernel driver does not appear to be running on this host proc driver nvidia version does not exist i know cudnn isn t installed on the system but it seems the fatal error is the fact the tenorflow cannot find libcuda so running locate libcuda on a login node returns the following usr nvidia libcuda so usr nvidia libcuda so usr nvidia libcuda so i can also find libcuda so if i navigate to the cm shared apps conda stubs however if i add locate libcuda to the job i submit to the gpu queue it returns nothing so adding the usr nvidia to my path does not solve the issue it seems that the gpu nodes do not have access to llibcuda so leading cuda to believe there is no gpu device am i missing some additional configuration that i need to do to get tensorflow working or is this more likely an issue with the underlying cuda installation that i will need to get an system admin to help with
0
197,579
15,684,717,366
IssuesEvent
2021-03-25 10:19:50
db0/Fragment-Forge
https://api.github.com/repos/db0/Fragment-Forge
opened
In-Game Tutorial
documentation enhancement
I need to create a tutorial to learn the game, without having to read the SP guide.
1.0
In-Game Tutorial - I need to create a tutorial to learn the game, without having to read the SP guide.
non_test
in game tutorial i need to create a tutorial to learn the game without having to read the sp guide
0
276,270
20,976,799,562
IssuesEvent
2022-03-28 15:54:02
jmxf/tracker-app
https://api.github.com/repos/jmxf/tracker-app
opened
Document directory and relevant gitignore
documentation
Explain the folder structure and the not uploaded parts of packaging the app as well as details on the tutorial
1.0
Document directory and relevant gitignore - Explain the folder structure and the not uploaded parts of packaging the app as well as details on the tutorial
non_test
document directory and relevant gitignore explain the folder structure and the not uploaded parts of packaging the app as well as details on the tutorial
0
124,962
17,794,684,464
IssuesEvent
2021-08-31 20:27:45
ghc-dev/Raymond-Massey
https://api.github.com/repos/ghc-dev/Raymond-Massey
opened
CVE-2020-14332 (Medium) detected in ansible-2.9.9.tar.gz
security vulnerability
## CVE-2020-14332 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: Raymond-Massey/requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Raymond-Massey/commit/ae0d77c11ca4fd387b50d99cc5f550e5c1a4d08f">ae0d77c11ca4fd387b50d99cc5f550e5c1a4d08f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Ansible Engine when using module_args. Tasks executed with check mode (--check-mode) do not properly neutralize sensitive data exposed in the event data. This flaw allows unauthorized users to read this data. The highest threat from this vulnerability is to confidentiality. 
<p>Publish Date: 2020-09-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14332>CVE-2020-14332</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14332">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14332</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.8.14,2.9.12</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.14,2.9.12"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14332","vulnerabilityDetails":"A flaw was found in the Ansible Engine when using module_args. Tasks executed with check mode (--check-mode) do not properly neutralize sensitive data exposed in the event data. This flaw allows unauthorized users to read this data. 
The highest threat from this vulnerability is to confidentiality.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14332","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-14332 (Medium) detected in ansible-2.9.9.tar.gz - ## CVE-2020-14332 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: Raymond-Massey/requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Raymond-Massey/commit/ae0d77c11ca4fd387b50d99cc5f550e5c1a4d08f">ae0d77c11ca4fd387b50d99cc5f550e5c1a4d08f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Ansible Engine when using module_args. Tasks executed with check mode (--check-mode) do not properly neutralize sensitive data exposed in the event data. This flaw allows unauthorized users to read this data. The highest threat from this vulnerability is to confidentiality. 
<p>Publish Date: 2020-09-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14332>CVE-2020-14332</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14332">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14332</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.8.14,2.9.12</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.14,2.9.12"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14332","vulnerabilityDetails":"A flaw was found in the Ansible Engine when using module_args. Tasks executed with check mode (--check-mode) do not properly neutralize sensitive data exposed in the event data. This flaw allows unauthorized users to read this data. 
The highest threat from this vulnerability is to confidentiality.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14332","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in ansible tar gz cve medium severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file raymond massey requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in the ansible engine when using module args tasks executed with check mode check mode do not properly neutralize sensitive data exposed in the event data this flaw allows unauthorized users to read this data the highest threat from this vulnerability is to confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in the ansible engine when using module args tasks executed with check mode check mode do not properly neutralize sensitive data exposed in the event data this flaw allows unauthorized users to read this data the highest threat from this vulnerability is to confidentiality vulnerabilityurl
0
218,451
16,991,502,288
IssuesEvent
2021-06-30 21:11:55
RockefellerArchiveCenter/dimes
https://api.github.com/repos/RockefellerArchiveCenter/dimes
closed
Improve styles for collection content tree
records content ux testing
## Is your feature request related to a problem? Please describe. When collections are mixed in with objects in the same level of the collection content tree, the styles are a little funky ## Describe the solution you'd like We have a design solution for this: See [https://projects.invisionapp.com/share/4T10HL91ZEBV#/screens](https://projects.invisionapp.com/share/4T10HL91ZEBV#/screens).
1.0
Improve styles for collection content tree - ## Is your feature request related to a problem? Please describe. When collections are mixed in with objects in the same level of the collection content tree, the styles are a little funky ## Describe the solution you'd like We have a design solution for this: See [https://projects.invisionapp.com/share/4T10HL91ZEBV#/screens](https://projects.invisionapp.com/share/4T10HL91ZEBV#/screens).
test
improve styles for collection content tree is your feature request related to a problem please describe when collections are mixed in with objects in the same level of the collection content tree the styles are a little funky describe the solution you d like we have a design solution for this see
1
162,460
12,676,808,739
IssuesEvent
2020-06-19 06:18:23
openshift/odo
https://api.github.com/repos/openshift/odo
opened
Remove hard coded container images from e2e_image_test script
area/testing
/kind feature <!-- Welcome! - We kindly ask you to: 1. Fill out the issue template below 2. Use the Google group if you have a question rather than a bug or feature request. The group is at: https://groups.google.com/forum/#!forum/odo-users Thanks for understanding, and for contributing to the project! --> ## Which functionality do you think we should add? Parse the json output of odo catalog list components and then use each of the versions programmatically for testing. ## Why is this needed? Removes hard coded container images from e2e_image_test script
1.0
Remove hard coded container images from e2e_image_test script - /kind feature <!-- Welcome! - We kindly ask you to: 1. Fill out the issue template below 2. Use the Google group if you have a question rather than a bug or feature request. The group is at: https://groups.google.com/forum/#!forum/odo-users Thanks for understanding, and for contributing to the project! --> ## Which functionality do you think we should add? Parse the json output of odo catalog list components and then use each of the versions programmatically for testing. ## Why is this needed? Removes hard coded container images from e2e_image_test script
test
remove hard coded container images from image test script kind feature welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project which functionality do you think we should add parse the json output of odo catalog list components and then use each of the versions programmatically for testing why is this needed removes hard coded container images from image test script
1
51,791
6,197,535,826
IssuesEvent
2017-07-05 17:02:37
PowerShell/PSDscResources
https://api.github.com/repos/PowerShell/PSDscResources
closed
WindowsProcess Log File integration test sometimes failing
tests only
(Not a high priority issue) One of the integration tests that checks if a log file was created sometimes fails. Based on the proceeding test results, the log file does get created eventually.
1.0
WindowsProcess Log File integration test sometimes failing - (Not a high priority issue) One of the integration tests that checks if a log file was created sometimes fails. Based on the proceeding test results, the log file does get created eventually.
test
windowsprocess log file integration test sometimes failing not a high priority issue one of the integration tests that checks if a log file was created sometimes fails based on the proceeding test results the log file does get created eventually
1
750,043
26,186,453,994
IssuesEvent
2023-01-03 01:27:47
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
`DCHECK` failure when tab is detached from vertical tab
crash priority/P3 QA/No release-notes/exclude OS/Desktop feature/vertical-tabs
``` [237317:237317:1225/184438.549694:FATAL:bounds_animator.cc(49)] Check failed: view->parent() == parent_ (0x55ec3ff37390 vs. 0x55ec3f3e7020) #0 0x7f6f2e4e9da2 base::debug::CollectStackTrace() #1 0x7f6f2e3cf1e3 base::debug::StackTrace::StackTrace() #2 0x7f6f2e3f0df3 logging::LogMessage::~LogMessage() #3 0x7f6f2e3f1bae logging::LogMessage::~LogMessage() #4 0x7f6f25a60889 views::BoundsAnimator::AnimateViewTo() #5 0x55ec3b9d791b TabStrip::TabDragContextImpl::StoppedDragging() #6 0x55ec3b9b419f TabDragControllerChromium::CompleteDrag() #7 0x55ec3b9b0dab TabDragControllerChromium::EndDragImpl() #8 0x55ec3b9ae7b9 TabDragControllerChromium::EndDrag() #9 0x55ec3b9b06bd TabDragControllerChromium::RunMoveLoop() #10 0x55ec3b9b2140 TabDragControllerChromium::DetachIntoNewBrowserAndRunMoveLoop() #11 0x55ec3b9b1bb6 TabDragControllerChromium::DragBrowserToNewTabStrip() #12 0x55ec3b9b08b1 TabDragControllerChromium::ContinueDragging() #13 0x55ec3b9aedd2 TabDragControllerChromium::Drag() #14 0x55ec3b9d38ef TabStrip::TabDragContextImpl::ContinueDrag() #15 0x55ec3b9d65ef TabStrip::TabDragContextImpl::OnMouseDragged() #16 0x7f6f25b42143 views::View::ProcessMouseDragged() #17 0x7f6f2802ce67 ui::EventDispatcher::ProcessEvent() #18 0x7f6f2802cbc4 ui::EventDispatcherDelegate::DispatchEvent() #19 0x7f6f25b50500 views::internal::RootView::OnMouseDragged() #20 0x7f6f25b5e57f views::Widget::OnMouseEvent() #21 0x7f6f25b85e47 views::NativeWidgetAura::OnMouseEvent() #22 0x7f6f2802ce67 ui::EventDispatcher::ProcessEvent() ```
1.0
`DCHECK` failure when tab is detached from vertical tab - ``` [237317:237317:1225/184438.549694:FATAL:bounds_animator.cc(49)] Check failed: view->parent() == parent_ (0x55ec3ff37390 vs. 0x55ec3f3e7020) #0 0x7f6f2e4e9da2 base::debug::CollectStackTrace() #1 0x7f6f2e3cf1e3 base::debug::StackTrace::StackTrace() #2 0x7f6f2e3f0df3 logging::LogMessage::~LogMessage() #3 0x7f6f2e3f1bae logging::LogMessage::~LogMessage() #4 0x7f6f25a60889 views::BoundsAnimator::AnimateViewTo() #5 0x55ec3b9d791b TabStrip::TabDragContextImpl::StoppedDragging() #6 0x55ec3b9b419f TabDragControllerChromium::CompleteDrag() #7 0x55ec3b9b0dab TabDragControllerChromium::EndDragImpl() #8 0x55ec3b9ae7b9 TabDragControllerChromium::EndDrag() #9 0x55ec3b9b06bd TabDragControllerChromium::RunMoveLoop() #10 0x55ec3b9b2140 TabDragControllerChromium::DetachIntoNewBrowserAndRunMoveLoop() #11 0x55ec3b9b1bb6 TabDragControllerChromium::DragBrowserToNewTabStrip() #12 0x55ec3b9b08b1 TabDragControllerChromium::ContinueDragging() #13 0x55ec3b9aedd2 TabDragControllerChromium::Drag() #14 0x55ec3b9d38ef TabStrip::TabDragContextImpl::ContinueDrag() #15 0x55ec3b9d65ef TabStrip::TabDragContextImpl::OnMouseDragged() #16 0x7f6f25b42143 views::View::ProcessMouseDragged() #17 0x7f6f2802ce67 ui::EventDispatcher::ProcessEvent() #18 0x7f6f2802cbc4 ui::EventDispatcherDelegate::DispatchEvent() #19 0x7f6f25b50500 views::internal::RootView::OnMouseDragged() #20 0x7f6f25b5e57f views::Widget::OnMouseEvent() #21 0x7f6f25b85e47 views::NativeWidgetAura::OnMouseEvent() #22 0x7f6f2802ce67 ui::EventDispatcher::ProcessEvent() ```
non_test
dcheck failure when tab is detached from vertical tab check failed view parent parent vs base debug collectstacktrace base debug stacktrace stacktrace logging logmessage logmessage logging logmessage logmessage views boundsanimator animateviewto tabstrip tabdragcontextimpl stoppeddragging tabdragcontrollerchromium completedrag tabdragcontrollerchromium enddragimpl tabdragcontrollerchromium enddrag tabdragcontrollerchromium runmoveloop tabdragcontrollerchromium detachintonewbrowserandrunmoveloop tabdragcontrollerchromium dragbrowsertonewtabstrip tabdragcontrollerchromium continuedragging tabdragcontrollerchromium drag tabstrip tabdragcontextimpl continuedrag tabstrip tabdragcontextimpl onmousedragged views view processmousedragged ui eventdispatcher processevent ui eventdispatcherdelegate dispatchevent views internal rootview onmousedragged views widget onmouseevent views nativewidgetaura onmouseevent ui eventdispatcher processevent
0
199,043
15,732,446,106
IssuesEvent
2021-03-29 18:18:23
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
opened
Spherical k-means confusion
Documentation
#### Describe the issue linked to the documentation https://github.com/scikit-learn/scikit-learn/blob/114616d9f6ce9eba7c1aacd3d4a254f868010e25/examples/text/plot_document_clustering.py#L166-L172 In this line the assumption is that projecting the data points to the unit hypersphere makes k-means behave like spherical k-means. <!-- Tell us about the confusion introduced in the documentation. --> I'm confused about the mention of spherical k-means. As far as I can tell there is no constraint to project the centroids computed in k-means to the hypersphere, so the normalization is not enough for this to be an implementation of spherical k-means with cosine distance as described in the paper here: https://www.cs.utexas.edu/users/inderjit/public_papers/iterative_icdm02.pdf And implementation here: https://github.com/jasonlaska/spherecluster > Spherical K-means differs from conventional K-means in that it projects the estimated cluster centroids onto the unit sphere at the end of each maximization step (i.e., normalizes the centroids). #### Suggest a potential alternative/fix Unless I am missing something here it might make sense to caveat the analogy? I can make a pull request if this can be agreed. <!-- Tell us how we could improve the documentation in this regard. -->
1.0
Spherical k-means confusion - #### Describe the issue linked to the documentation https://github.com/scikit-learn/scikit-learn/blob/114616d9f6ce9eba7c1aacd3d4a254f868010e25/examples/text/plot_document_clustering.py#L166-L172 In this line the assumption is that projecting the data points to the unit hypersphere makes k-means behave like spherical k-means. <!-- Tell us about the confusion introduced in the documentation. --> I'm confused about the mention of spherical k-means. As far as I can tell there is no constraint to project the centroids computed in k-means to the hypersphere, so the normalization is not enough for this to be an implementation of spherical k-means with cosine distance as described in the paper here: https://www.cs.utexas.edu/users/inderjit/public_papers/iterative_icdm02.pdf And implementation here: https://github.com/jasonlaska/spherecluster > Spherical K-means differs from conventional K-means in that it projects the estimated cluster centroids onto the unit sphere at the end of each maximization step (i.e., normalizes the centroids). #### Suggest a potential alternative/fix Unless I am missing something here it might make sense to caveat the analogy? I can make a pull request if this can be agreed. <!-- Tell us how we could improve the documentation in this regard. -->
non_test
spherical k means confusion describe the issue linked to the documentation in this line the assumption is that projecting the data points to the unit hypersphere makes k means behave like spherical k means tell us about the confusion introduced in the documentation i m confused about the mention of spherical k means as far as i can tell there is no constraint to project the centroids computed in k means to the hypersphere so the normalization is not enough for this to be an implementation of spherical k means with cosine distance as described in the paper here and implementation here spherical k means differs from conventional k means in that it projects the estimated cluster centroids onto the unit sphere at the end of each maximization step i e normalizes the centroids suggest a potential alternative fix unless i am missing something here it might make sense to caveat the analogy i can make a pull request if this can be agreed tell us how we could improve the documentation in this regard
0
78,878
15,586,082,309
IssuesEvent
2021-03-18 01:07:55
vlaship/websocket-stomp
https://api.github.com/repos/vlaship/websocket-stomp
opened
CVE-2020-14060 (High) detected in jackson-databind-2.8.11.3.jar
security vulnerability
## CVE-2020-14060 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/ws/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-websocket-1.5.22.RELEASE.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill). 
<p>Publish Date: 2020-06-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-14060 (High) detected in jackson-databind-2.8.11.3.jar - ## CVE-2020-14060 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/ws/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-websocket-1.5.22.RELEASE.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill). 
<p>Publish Date: 2020-06-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm ws build gradle path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter websocket release jar root library spring boot starter web release jar x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache xalan lib sql jndiconnectionpool aka apache drill publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
107,738
11,569,794,999
IssuesEvent
2020-02-20 18:15:22
KylaBendrik/fuzzy-computing-machine
https://api.github.com/repos/KylaBendrik/fuzzy-computing-machine
reopened
Documentation for Developers
documentation good first issue
- [ ] Document Process for running program (users) - [ ] Document Process for working on repo (developers)
1.0
Documentation for Developers - - [ ] Document Process for running program (users) - [ ] Document Process for working on repo (developers)
non_test
documentation for developers document process for running program users document process for working on repo developers
0
65,833
6,975,903,065
IssuesEvent
2017-12-12 09:11:03
SatelliteQE/robottelo
https://api.github.com/repos/SatelliteQE/robottelo
closed
[CLI] activation-key product-content 'Enabled?' field renamed to 'Default Enabled?'
6.3 CLI Medium test-failure
``` 2017-12-10 01:21:29 - robottelo.ssh - INFO - >>> LANG=en_US.UTF-8 hammer -v -u admin -p changeme --output=csv activation-key product-content --id="25" --organization-id="304" 2017-12-10 01:21:31 - robottelo.ssh - INFO - <<< stdout ID,Name,Type,URL,GPG Key,Label,Default Enabled?,Override 1512886793877,lkzFhxkfKdNTXZM,,,,NMNZK8_ZqyKSTIkLsrfliHAiyfW_lkzFhxkfKdNTXZM,true,"" ``` this affects cli.test_activationkey.ActivationKeyTestCase.test_positive_create_content_and_check_enabled
1.0
[CLI] activation-key product-content 'Enabled?' field renamed to 'Default Enabled?' - ``` 2017-12-10 01:21:29 - robottelo.ssh - INFO - >>> LANG=en_US.UTF-8 hammer -v -u admin -p changeme --output=csv activation-key product-content --id="25" --organization-id="304" 2017-12-10 01:21:31 - robottelo.ssh - INFO - <<< stdout ID,Name,Type,URL,GPG Key,Label,Default Enabled?,Override 1512886793877,lkzFhxkfKdNTXZM,,,,NMNZK8_ZqyKSTIkLsrfliHAiyfW_lkzFhxkfKdNTXZM,true,"" ``` this affects cli.test_activationkey.ActivationKeyTestCase.test_positive_create_content_and_check_enabled
test
activation key product content enabled field renamed to default enabled robottelo ssh info lang en us utf hammer v u admin p changeme output csv activation key product content id organization id robottelo ssh info stdout id name type url gpg key label default enabled override lkzfhxkfkdntxzm zqykstiklsrflihaiyfw lkzfhxkfkdntxzm true this affects cli test activationkey activationkeytestcase test positive create content and check enabled
1
416,269
28,074,948,487
IssuesEvent
2023-03-29 22:23:05
quantified-uncertainty/squiggle
https://api.github.com/repos/quantified-uncertainty/squiggle
closed
No examples of IF statements or Ternary statements in the documentation
Documentation Language
<!-- mark one with an x --> - \_ Is refactor - \_ Is new feature - x Concerns documentation # Description of suggestion or shortcoming: It would probably be good to have a "Syntax Guide" or something. There's no amazing place for it, but more work here seems pretty good.
1.0
No examples of IF statements or Ternary statements in the documentation - <!-- mark one with an x --> - \_ Is refactor - \_ Is new feature - x Concerns documentation # Description of suggestion or shortcoming: It would probably be good to have a "Syntax Guide" or something. There's no amazing place for it, but more work here seems pretty good.
non_test
no examples of if statements or ternary statements in the documentation is refactor is new feature x concerns documentation description of suggestion or shortcoming it would probably be good to have a syntax guide or something there s no amazing place for it but more work here seems pretty good
0
264,982
23,145,528,869
IssuesEvent
2022-07-29 00:03:09
MPMG-DCC-UFMG/F01
https://api.github.com/repos/MPMG-DCC-UFMG/F01
closed
Generalization test for the tag Servidores - Proventos de aposentadoria - Ibiaí
generalization test development template-Síntese tecnologia informatica subtag-Proventos de Aposentadoria tag-Servidores
DoD: Perform the generalization test of the validator for the tag Servidores - Proventos de aposentadoria for the municipality of Ibiaí.
1.0
Generalization test for the tag Servidores - Proventos de aposentadoria - Ibiaí - DoD: Perform the generalization test of the validator for the tag Servidores - Proventos de aposentadoria for the municipality of Ibiaí.
test
generalization test for the tag servidores proventos de aposentadoria ibiaí dod perform the generalization test of the validator for the tag servidores proventos de aposentadoria for the municipality of ibiaí
1
62,443
7,601,139,608
IssuesEvent
2018-04-28 10:08:53
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
closed
Insert link warning (button block) and usability - consistency
Accessibility Blocks Needs Design Feedback
Not sure what are the plans for the insert link "inline toolbar" or if this is already on the radar, so please close this issue if it's pointless. Noticed two issues: **Button block: inserting the URL triggers a React warning:** As soon as I type the first character in the URL field: ![screen shot 2017-07-03 at 16 48 44](https://user-images.githubusercontent.com/1682452/27797890-94acd2d4-600f-11e7-8061-4ea818c99e8d.png) ``` Warning: A component is changing an uncontrolled input of type url to be controlled. Input elements should not switch from uncontrolled to controlled (or vice versa). Decide between using a controlled or uncontrolled input element for the lifetime of the component. More info: https://fb.me/react-controlled-components in input (created by edit) ``` Looks similar to #680 **Usability/consistency:** When inserting a link on some selected text, the inline toolbar "Apply" button changes to an "Edit" button. The URL displayed in the toolbar is clickable and points to the linked resource, and clicking on "Edit" makes the URL editable again: <img width="793" alt="screen shot 2017-07-03 at 16 18 37" src="https://user-images.githubusercontent.com/1682452/27798057-297517c8-6010-11e7-8def-06a07e0ffc9c.png"> Instead, on the button block the behavior is different: after a link is inserted, the "Apply" button doesn't change. There's no "Edit" button and the URL displayed in the toolbar is always editable: <img width="448" alt="screen shot 2017-07-03 at 16 43 29" src="https://user-images.githubusercontent.com/1682452/27798148-7fbd0c80-6010-11e7-828e-6f9842ebe31f.png"> Not sure if there are technical reasons for this. ideally, the behavior should be similar. I can understand the button block doesn't necessarily need a "Remove link" button but I'd say the insert/edit user experience should be as similar as possible with the one of standard links.
1.0
Insert link warning (button block) and usability - consistency - Not sure what are the plans for the insert link "inline toolbar" or if this is already on the radar, so please close this issue if it's pointless. Noticed two issues: **Button block: inserting the URL triggers a React warning:** As soon as I type the first character in the URL field: ![screen shot 2017-07-03 at 16 48 44](https://user-images.githubusercontent.com/1682452/27797890-94acd2d4-600f-11e7-8061-4ea818c99e8d.png) ``` Warning: A component is changing an uncontrolled input of type url to be controlled. Input elements should not switch from uncontrolled to controlled (or vice versa). Decide between using a controlled or uncontrolled input element for the lifetime of the component. More info: https://fb.me/react-controlled-components in input (created by edit) ``` Looks similar to #680 **Usability/consistency:** When inserting a link on some selected text, the inline toolbar "Apply" button changes to an "Edit" button. The URL displayed in the toolbar is clickable and points to the linked resource, and clicking on "Edit" makes the URL editable again: <img width="793" alt="screen shot 2017-07-03 at 16 18 37" src="https://user-images.githubusercontent.com/1682452/27798057-297517c8-6010-11e7-8def-06a07e0ffc9c.png"> Instead, on the button block the behavior is different: after a link is inserted, the "Apply" button doesn't change. There's no "Edit" button and the URL displayed in the toolbar is always editable: <img width="448" alt="screen shot 2017-07-03 at 16 43 29" src="https://user-images.githubusercontent.com/1682452/27798148-7fbd0c80-6010-11e7-828e-6f9842ebe31f.png"> Not sure if there are technical reasons for this. ideally, the behavior should be similar. I can understand the button block doesn't necessarily need a "Remove link" button but I'd say the insert/edit user experience should be as similar as possible with the one of standard links.
non_test
insert link warning button block and usability consistency not sure what are the plans for the insert link inline toolbar or if this is already on the radar so please close this issue if it s pointless noticed two issues button block inserting the url triggers a react warning as soon as i type the first character in the url field warning a component is changing an uncontrolled input of type url to be controlled input elements should not switch from uncontrolled to controlled or vice versa decide between using a controlled or uncontrolled input element for the lifetime of the component more info in input created by edit looks similar to usability consistency when inserting a link on some selected text the inline toolbar apply button changes to an edit button the url displayed in the toolbar is clickable and points to the linked resource and clicking on edit makes the url editable again img width alt screen shot at src instead on the button block the behavior is different after a link is inserted the apply button doesn t change there s no edit button and the url displayed in the toolbar is always editable img width alt screen shot at src not sure if there are technical reasons for this ideally the behavior should be similar i can understand the button block doesn t necessarily need a remove link button but i d say the insert edit user experience should be as similar as possible with the one of standard links
0
191,000
6,824,697,575
IssuesEvent
2017-11-08 07:38:16
OHDSI/WhiteRabbit
https://api.github.com/repos/OHDSI/WhiteRabbit
closed
Can't delete arrows in Rabbit-In-A-Hat
bug help wanted Priority
For some reason the keyboard event doesn't make it down to the component. Suggest adding an action with a keyboard shortcut instead of trying to capture event.
1.0
Can't delete arrows in Rabbit-In-A-Hat - For some reason the keyboard event doesn't make it down to the component. Suggest adding an action with a keyboard shortcut instead of trying to capture event.
non_test
can t delete arrows in rabbit in a hat for some reason the keyboard event doesn t make it down to the component suggest adding an action with a keyboard shortcut instead of trying to capture event
0
165,045
12,826,986,690
IssuesEvent
2020-07-06 17:35:20
dotnet/aspnetcore
https://api.github.com/repos/dotnet/aspnetcore
closed
FlakyTest: ContentLengthReadAsyncSingleBytesAtATime
area-servers test-failure test-investigation
``` [0.002s] [TestLifetime] [Information] Starting test ContentLengthReadAsyncSingleBytesAtATime at 2020-06-30T22:20:39 [0.005s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Hosting starting [0.006s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Hosting started [0.006s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Loaded hosting startup assembly testhost [0.006s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" accepted. [0.006s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" started. [0.007s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Information] Request starting HTTP/1.0 POST http:/// - 5 [0.010s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": started reading request body. [0.012s] [Microsoft.AspNetCore.Server.Kestrel] [Error] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": An unhandled exception was thrown by the application. 
Xunit.Sdk.EqualException: Assert.Equal() Failure Expected: 3 Actual: 1 at Xunit.Assert.Equal[T](T expected, T actual, IEqualityComparer`1 comparer) in C:\Dev\xunit\xunit\src\xunit.assert\Asserts\EqualityAsserts.cs:line 40 at Xunit.Assert.Equal[T](T expected, T actual) in C:\Dev\xunit\xunit\src\xunit.assert\Asserts\EqualityAsserts.cs:line 24 at Microsoft.AspNetCore.Server.Kestrel.InMemory.FunctionalTests.RequestTests.<>c__DisplayClass26_0.<<ContentLengthReadAsyncSingleBytesAtATime>b__0>d.MoveNext() in /_/src/Servers/Kestrel/test/InMemory.FunctionalTests/RequestTests.cs:line 1059 --- End of stack trace from previous location --- at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequest[TContext](IHttpApplication`1 application) in /_/src/Servers/Kestrel/Core/src/Internal/Http/HttpProtocol.cs:line 659 [0.013s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Information] Request finished HTTP/1.0 POST http:/// - 5 - 500 0 - 5.3132ms [0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Error] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": An unhandled exception was thrown by the application. Microsoft.AspNetCore.Connections.ConnectionAbortedException: The connection was aborted by the application. ---> System.InvalidOperationException: Reading is already in progress. at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1ContentLengthMessageBody.TryReadInternal(ReadResult& readResult) in /_/src/Servers/Kestrel/Core/src/Internal/Http/Http1ContentLengthMessageBody.cs:line 129 at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.OnConsumeAsync() in /_/src/Servers/Kestrel/Core/src/Internal/Http/Http1MessageBody.cs:line 47 --- End of inner exception stack trace --- [0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": done reading request body. 
[0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" disconnecting. [0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" stopped. [30.019s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Hosting shutdown [30.022s] [TestLifetime] [Information] Finished test ContentLengthReadAsyncSingleBytesAtATime in 30.0199414s ```
2.0
FlakyTest: ContentLengthReadAsyncSingleBytesAtATime - ``` [0.002s] [TestLifetime] [Information] Starting test ContentLengthReadAsyncSingleBytesAtATime at 2020-06-30T22:20:39 [0.005s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Hosting starting [0.006s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Hosting started [0.006s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Loaded hosting startup assembly testhost [0.006s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" accepted. [0.006s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" started. [0.007s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Information] Request starting HTTP/1.0 POST http:/// - 5 [0.010s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": started reading request body. [0.012s] [Microsoft.AspNetCore.Server.Kestrel] [Error] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": An unhandled exception was thrown by the application. 
Xunit.Sdk.EqualException: Assert.Equal() Failure Expected: 3 Actual: 1 at Xunit.Assert.Equal[T](T expected, T actual, IEqualityComparer`1 comparer) in C:\Dev\xunit\xunit\src\xunit.assert\Asserts\EqualityAsserts.cs:line 40 at Xunit.Assert.Equal[T](T expected, T actual) in C:\Dev\xunit\xunit\src\xunit.assert\Asserts\EqualityAsserts.cs:line 24 at Microsoft.AspNetCore.Server.Kestrel.InMemory.FunctionalTests.RequestTests.<>c__DisplayClass26_0.<<ContentLengthReadAsyncSingleBytesAtATime>b__0>d.MoveNext() in /_/src/Servers/Kestrel/test/InMemory.FunctionalTests/RequestTests.cs:line 1059 --- End of stack trace from previous location --- at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequest[TContext](IHttpApplication`1 application) in /_/src/Servers/Kestrel/Core/src/Internal/Http/HttpProtocol.cs:line 659 [0.013s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Information] Request finished HTTP/1.0 POST http:/// - 5 - 500 0 - 5.3132ms [0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Error] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": An unhandled exception was thrown by the application. Microsoft.AspNetCore.Connections.ConnectionAbortedException: The connection was aborted by the application. ---> System.InvalidOperationException: Reading is already in progress. at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1ContentLengthMessageBody.TryReadInternal(ReadResult& readResult) in /_/src/Servers/Kestrel/Core/src/Internal/Http/Http1ContentLengthMessageBody.cs:line 129 at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.OnConsumeAsync() in /_/src/Servers/Kestrel/Core/src/Internal/Http/Http1MessageBody.cs:line 47 --- End of inner exception stack trace --- [0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL", Request id "0HM0T8F5LIPBL:00000002": done reading request body. 
[0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" disconnecting. [0.013s] [Microsoft.AspNetCore.Server.Kestrel] [Debug] Connection id "0HM0T8F5LIPBL" stopped. [30.019s] [Microsoft.AspNetCore.Hosting.Diagnostics] [Debug] Hosting shutdown [30.022s] [TestLifetime] [Information] Finished test ContentLengthReadAsyncSingleBytesAtATime in 30.0199414s ```
test
flakytest contentlengthreadasyncsinglebytesatatime starting test contentlengthreadasyncsinglebytesatatime at hosting starting hosting started loaded hosting startup assembly testhost connection id accepted connection id started request starting http post connection id request id started reading request body connection id request id an unhandled exception was thrown by the application xunit sdk equalexception assert equal failure expected actual at xunit assert equal t expected t actual iequalitycomparer comparer in c dev xunit xunit src xunit assert asserts equalityasserts cs line at xunit assert equal t expected t actual in c dev xunit xunit src xunit assert asserts equalityasserts cs line at microsoft aspnetcore server kestrel inmemory functionaltests requesttests c b d movenext in src servers kestrel test inmemory functionaltests requesttests cs line end of stack trace from previous location at microsoft aspnetcore server kestrel core internal http httpprotocol processrequest ihttpapplication application in src servers kestrel core src internal http httpprotocol cs line request finished http post connection id request id an unhandled exception was thrown by the application microsoft aspnetcore connections connectionabortedexception the connection was aborted by the application system invalidoperationexception reading is already in progress at microsoft aspnetcore server kestrel core internal http tryreadinternal readresult readresult in src servers kestrel core src internal http cs line at microsoft aspnetcore server kestrel core internal http onconsumeasync in src servers kestrel core src internal http cs line end of inner exception stack trace connection id request id done reading request body connection id disconnecting connection id stopped hosting shutdown finished test contentlengthreadasyncsinglebytesatatime in
1
74,016
7,372,629,258
IssuesEvent
2018-03-13 15:12:24
apache/incubator-mxnet
https://api.github.com/repos/apache/incubator-mxnet
opened
test_autograd.test_unary_func @ Python3: MKLDNN-CPU
Bug Test
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/474/pipeline/485/ ``` [INFO] Setting module np/mx/python random seeds, use MXNET_MODULE_SEED=56561434 to reproduce. test_autograd.test_unary_func ... /work/runtime_functions.sh: line 312: 6 Segmentation fault (core dumped) nosetests-3.4 --verbose tests/python/unittest build.py: 2018-03-11 11:21:48,801 Running of command in container failed: docker run --rm -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 1001:1001 mxnet/build.ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu build.py: 2018-03-11 11:21:48,801 You can try to get into the container by using the following command: docker run --rm -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 1001:1001 -ti --entrypoint bash mxnet/build.ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu Traceback (most recent call last): File "ci/build.py", line 179, in <module> sys.exit(main()) File "ci/build.py", line 159, in main container_run(platform, docker_binary, command) File "ci/build.py", line 110, in container_run raise subprocess.CalledProcessError(ret, cmd) subprocess.CalledProcessError: Command 'docker run --rm -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 1001:1001 mxnet/build.ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu' returned non-zero exit status 139 script returned exit code 1 ```
1.0
test_autograd.test_unary_func @ Python3: MKLDNN-CPU - http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/474/pipeline/485/ ``` [INFO] Setting module np/mx/python random seeds, use MXNET_MODULE_SEED=56561434 to reproduce. test_autograd.test_unary_func ... /work/runtime_functions.sh: line 312: 6 Segmentation fault (core dumped) nosetests-3.4 --verbose tests/python/unittest build.py: 2018-03-11 11:21:48,801 Running of command in container failed: docker run --rm -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 1001:1001 mxnet/build.ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu build.py: 2018-03-11 11:21:48,801 You can try to get into the container by using the following command: docker run --rm -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 1001:1001 -ti --entrypoint bash mxnet/build.ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu Traceback (most recent call last): File "ci/build.py", line 179, in <module> sys.exit(main()) File "ci/build.py", line 159, in main container_run(platform, docker_binary, command) File "ci/build.py", line 110, in container_run raise subprocess.CalledProcessError(ret, cmd) subprocess.CalledProcessError: Command 'docker run --rm -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu:/work/mxnet -v /home/jenkins_slave/workspace/ut-python3-mkldnn-cpu/build:/work/build -u 1001:1001 mxnet/build.ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu' returned non-zero exit status 139 script returned exit code 1 ```
test
test autograd test unary func mkldnn cpu setting module np mx python random seeds use mxnet module seed to reproduce test autograd test unary func work runtime functions sh line segmentation fault core dumped nosetests verbose tests python unittest build py running of command in container failed docker run rm v home jenkins slave workspace ut mkldnn cpu work mxnet v home jenkins slave workspace ut mkldnn cpu build work build u mxnet build ubuntu cpu work runtime functions sh unittest ubuntu cpu build py you can try to get into the container by using the following command docker run rm v home jenkins slave workspace ut mkldnn cpu work mxnet v home jenkins slave workspace ut mkldnn cpu build work build u ti entrypoint bash mxnet build ubuntu cpu work runtime functions sh unittest ubuntu cpu traceback most recent call last file ci build py line in sys exit main file ci build py line in main container run platform docker binary command file ci build py line in container run raise subprocess calledprocesserror ret cmd subprocess calledprocesserror command docker run rm v home jenkins slave workspace ut mkldnn cpu work mxnet v home jenkins slave workspace ut mkldnn cpu build work build u mxnet build ubuntu cpu work runtime functions sh unittest ubuntu cpu returned non zero exit status script returned exit code
1
92,701
8,376,225,648
IssuesEvent
2018-10-05 19:03:24
cosmos/cosmos-sdk
https://api.github.com/repos/cosmos/cosmos-sdk
reopened
simulation: Change parameters to get more slashes
game-of-stakes tests
A majority of the fuzzer bugs were caught due to slashings induced by governance. Now that we're removing governance slashing, we need to adapt the parameters within simulation to slash more often / get many slashes in one block. We'll probably have to make a new operation for this, since liveness slashing due to stochastic processes probably won't be reliable enough while also getting a meaningful portion live, and double signs execute a different code path entirely.
1.0
simulation: Change parameters to get more slashes - A majority of the fuzzer bugs were caught due to slashings induced by governance. Now that we're removing governance slashing, we need to adapt the parameters within simulation to slash more often / get many slashes in one block. We'll probably have to make a new operation for this, since liveness slashing due to stochastic processes probably won't be reliable enough while also getting a meaningful portion live, and double signs execute a different code path entirely.
test
simulation change parameters to get more slashes a majority of the fuzzer bugs were caught due to slashings induced by governance now that we re removing governance slashing we need to adapt the parameters within simulation to slash more often get many slashes in one block we ll probably have to make a new operation for this since liveness slashing due to stochastic processes probably won t be reliable enough while also getting a meaningful portion live and double signs execute a different code path entirely
1
21,706
3,916,772,148
IssuesEvent
2016-04-21 04:07:53
Awesome-Support/Awesome-Support
https://api.github.com/repos/Awesome-Support/Awesome-Support
closed
WooCommerce users cannot submit tickets
bug needs testing
@julien731 After the 20b03f31e8511653d7733eedfba4a1c5e5e2e7d5 commit, WC users cannot submit tickets. They are registering on WooCommerce's "My Account" login/register page, receive the role "Customer", which is below the Support User, and so does not have the `create_ticket` capability.
1.0
WooCommerce users cannot submit tickets - @julien731 After the 20b03f31e8511653d7733eedfba4a1c5e5e2e7d5 commit, WC users cannot submit tickets. They are registering on WooCommerce's "My Account" login/register page, receive the role "Customer", which is below the Support User, and so does not have the `create_ticket` capability.
test
woocommerce users cannot submit tickets after the commit wc users cannot submit tickets they are registering on woocommerce s my account login register page receive the role customer which is below the support user and so does not have the create ticket capability
1
93,132
8,395,349,900
IssuesEvent
2018-10-10 06:02:47
SunwellWoW/Sunwell-TBC-Bugtracker
https://api.github.com/repos/SunwellWoW/Sunwell-TBC-Bugtracker
closed
The botanica.
Resolved Resolved - retest required instance pve
Decription: 2nd boss stays in tree form for too long even after killing adds. How it works: I kill the adds and he keeps being immune for a good while, sometimes he keeps doing tranquility while not being in tree form. How it should work: You kill 2 adds and he drops tree form. Source (you should point out proofs of your report, please give us some source):
1.0
The botanica. - Decription: 2nd boss stays in tree form for too long even after killing adds. How it works: I kill the adds and he keeps being immune for a good while, sometimes he keeps doing tranquility while not being in tree form. How it should work: You kill 2 adds and he drops tree form. Source (you should point out proofs of your report, please give us some source):
test
the botanica decription boss stays in tree form for too long even after killing adds how it works i kill the adds and he keeps being immune for a good while sometimes he keeps doing tranquility while not being in tree form how it should work you kill adds and he drops tree form source you should point out proofs of your report please give us some source
1
261,455
22,746,560,889
IssuesEvent
2022-07-07 09:40:41
mozilla-mobile/fenix
https://api.github.com/repos/mozilla-mobile/fenix
closed
Intermittent UI test failure - < ContextMenusTest. verifyContextCopyLink >
eng:intermittent-test needs:triage eng:ui-test
### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/5757589674050819610/executions/bs.381d76c3e6926507/testcases/1/test-cases) ### Stacktrace: java.lang.RuntimeException: Error while connecting UiAutomation@292d1dd[id=-1, flags=0] at android.app.UiAutomation.connect(UiAutomation.java:259) at android.app.Instrumentation.getUiAutomation(Instrumentation.java:2176) at androidx.test.uiautomator.UiDevice.getUiAutomation(UiDevice.java:1129) at androidx.test.uiautomator.QueryController.<init>(QueryController.java:95) at androidx.test.uiautomator.UiDevice.<init>(UiDevice.java:109) at androidx.test.uiautomator.UiDevice.getInstance(UiDevice.java:261) at org.mozilla.fenix.ui.ContextMenusTest.<init>(ContextMenusTest.kt:40) ### Build: 6/15 Main ### Notes: Similar with: #25416 #25414 #25342 #25341 #25453 #25468 #25579 #25578 #25624 ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-20714)
2.0
Intermittent UI test failure - < ContextMenusTest. verifyContextCopyLink > - ### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/5757589674050819610/executions/bs.381d76c3e6926507/testcases/1/test-cases) ### Stacktrace: java.lang.RuntimeException: Error while connecting UiAutomation@292d1dd[id=-1, flags=0] at android.app.UiAutomation.connect(UiAutomation.java:259) at android.app.Instrumentation.getUiAutomation(Instrumentation.java:2176) at androidx.test.uiautomator.UiDevice.getUiAutomation(UiDevice.java:1129) at androidx.test.uiautomator.QueryController.<init>(QueryController.java:95) at androidx.test.uiautomator.UiDevice.<init>(UiDevice.java:109) at androidx.test.uiautomator.UiDevice.getInstance(UiDevice.java:261) at org.mozilla.fenix.ui.ContextMenusTest.<init>(ContextMenusTest.kt:40) ### Build: 6/15 Main ### Notes: Similar with: #25416 #25414 #25342 #25341 #25453 #25468 #25579 #25578 #25624 ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-20714)
test
intermittent ui test failure firebase test run stacktrace java lang runtimeexception error while connecting uiautomation at android app uiautomation connect uiautomation java at android app instrumentation getuiautomation instrumentation java at androidx test uiautomator uidevice getuiautomation uidevice java at androidx test uiautomator querycontroller querycontroller java at androidx test uiautomator uidevice uidevice java at androidx test uiautomator uidevice getinstance uidevice java at org mozilla fenix ui contextmenustest contextmenustest kt build main notes similar with ┆issue is synchronized with this
1
256,470
22,054,391,413
IssuesEvent
2022-05-30 11:33:05
mountaincharlie/project-four-cook-ebook
https://api.github.com/repos/mountaincharlie/project-four-cook-ebook
opened
MANUAL TESTING - add method button adds method step
Testing
- [ ] image before clicking add method button - [ ] image after clicking add method button
1.0
MANUAL TESTING - add method button adds method step - - [ ] image before clicking add method button - [ ] image after clicking add method button
test
manual testing add method button adds method step image before clicking add method button image after clicking add method button
1
98,825
4,031,685,483
IssuesEvent
2016-05-18 18:01:04
raml-org/raml-js-parser-2
https://api.github.com/repos/raml-org/raml-js-parser-2
closed
authorizationGrants does not except absolute URIs
bug priority:high raml-1.0
The parser complains that the URI in the end is not accepted. ```yaml authorizationGrants: [ authorization_code, implicit, 'urn:ietf:params:oauth:grant-type:saml2-bearer' ] ``` `'urn:ietf:params:oauth:grant-type:saml2-bearer'` should be fully valid according to the spec.
1.0
authorizationGrants does not except absolute URIs - The parser complains that the URI in the end is not accepted. ```yaml authorizationGrants: [ authorization_code, implicit, 'urn:ietf:params:oauth:grant-type:saml2-bearer' ] ``` `'urn:ietf:params:oauth:grant-type:saml2-bearer'` should be fully valid according to the spec.
non_test
authorizationgrants does not except absolute uris the parser complains that the uri in the end is not accepted yaml authorizationgrants urn ietf params oauth grant type bearer should be fully valid according to the spec
0
336,954
30,230,828,222
IssuesEvent
2023-07-06 06:46:12
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix jax_nn_activations.test_jax_nn_one_hot
JAX Frontend Sub Task Failing Test
| | | |---|---| |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5471928363"><img src=https://img.shields.io/badge/-failure-red></a>
1.0
Fix jax_nn_activations.test_jax_nn_one_hot - | | | |---|---| |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5461165256/jobs/9938860654"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5471928363"><img src=https://img.shields.io/badge/-failure-red></a>
test
fix jax nn activations test jax nn one hot jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
1
93,410
8,414,722,966
IssuesEvent
2018-10-13 06:17:46
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Battery api should return constant result
QA/Test-Plan-Specified QA/Yes priority/P2 privacy
## Test plan See https://github.com/brave/brave-core/pull/567 ## Description We current disable the battery api, but that breaks some sites and if anything it is worse for privacy than fixing the value to 100%
1.0
Battery api should return constant result - ## Test plan See https://github.com/brave/brave-core/pull/567 ## Description We current disable the battery api, but that breaks some sites and if anything it is worse for privacy than fixing the value to 100%
test
battery api should return constant result test plan see description we current disable the battery api but that breaks some sites and if anything it is worse for privacy than fixing the value to
1
216,542
16,769,715,170
IssuesEvent
2021-06-14 13:29:23
scikit-activeml/scikit-activeml
https://api.github.com/repos/scikit-activeml/scikit-activeml
opened
Check imports to unify the style (absolute path vs relative path)
test
According to @mherde relative imports directly to the files are more preferable to importing the packages to avoid circular imports.
1.0
Check imports to unify the style (absolute path vs relative path) - According to @mherde relative imports directly to the files are more preferable to importing the packages to avoid circular imports.
test
check imports to unify the style absolute path vs relative path according to mherde relative imports directly to the files are more preferable to importing the packages to avoid circular imports
1
68,663
7,106,956,764
IssuesEvent
2018-01-16 18:16:28
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Move namespace modal button should be Move not Create
area/stack area/ui kind/bug status/resolved status/to-test version/2.0
**Rancher versions:** 2.0 master 1/12 **Steps to Reproduce:** 1. Create two projects in one cluster 2. Create a namespace 3. Check the namespace and click on Move **Results:** The Move modal has a Create button. Should really say Move, since the namespace is already created. ![image](https://user-images.githubusercontent.com/11514927/34887187-d75c478a-f782-11e7-93be-110906506716.png)
1.0
Move namespace modal button should be Move not Create - **Rancher versions:** 2.0 master 1/12 **Steps to Reproduce:** 1. Create two projects in one cluster 2. Create a namespace 3. Check the namespace and click on Move **Results:** The Move modal has a Create button. Should really say Move, since the namespace is already created. ![image](https://user-images.githubusercontent.com/11514927/34887187-d75c478a-f782-11e7-93be-110906506716.png)
test
move namespace modal button should be move not create rancher versions master steps to reproduce create two projects in one cluster create a namespace check the namespace and click on move results the move modal has a create button should really say move since the namespace is already created
1
598,963
18,263,347,437
IssuesEvent
2021-10-04 04:15:10
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.juul.com - Firefox is an unsupported browser
os-ios browser-firefox-ios priority-normal severity-critical type-unsupported
<!-- @browser: Firefox iOS 36.0 --> <!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 14_7_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/36.0 Mobile/15E148 Safari/605.1.15 --> <!-- @reported_with: mobile-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/85108 --> <!-- @extra_labels: browser-firefox-ios --> **URL**: https://www.juul.com/unsupported-browser **Browser / Version**: Firefox iOS 36.0 **Operating System**: iOS 14.7.1 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: Tesla’s me to update browser to continue <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/c8770f0d-dc4b-4d78-b5a9-1f9160400182.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.juul.com - Firefox is an unsupported browser - <!-- @browser: Firefox iOS 36.0 --> <!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 14_7_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/36.0 Mobile/15E148 Safari/605.1.15 --> <!-- @reported_with: mobile-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/85108 --> <!-- @extra_labels: browser-firefox-ios --> **URL**: https://www.juul.com/unsupported-browser **Browser / Version**: Firefox iOS 36.0 **Operating System**: iOS 14.7.1 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: Tesla’s me to update browser to continue <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/c8770f0d-dc4b-4d78-b5a9-1f9160400182.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_test
firefox is an unsupported browser url browser version firefox ios operating system ios tested another browser no problem type site is not usable description browser unsupported steps to reproduce tesla’s me to update browser to continue view the screenshot img alt screenshot src browser configuration none from with ❤️
0
433,315
12,505,667,958
IssuesEvent
2020-06-02 11:09:46
gitcoinco/web
https://api.github.com/repos/gitcoinco/web
closed
Advanced Payout / Tips Refactor Cleanup task for August
Gitcoin Tips priority: backlog
Deprecation task 1 month after https://github.com/gitcoinco/web/pull/1666 is launched after we are satisfied that all tips have been claimed -- delete `receive_legacy.html`, receive url route, and `onepager/js/quarantine`
1.0
Advanced Payout / Tips Refactor Cleanup task for August - Deprecation task 1 month after https://github.com/gitcoinco/web/pull/1666 is launched after we are satisfied that all tips have been claimed -- delete `receive_legacy.html`, receive url route, and `onepager/js/quarantine`
non_test
advanced payout tips refactor cleanup task for august deprecation task month after is launched after we are satisfied that all tips have been claimed delete receive legacy html receive url route and onepager js quarantine
0
610,952
18,940,989,994
IssuesEvent
2021-11-18 02:49:03
boostcampwm-2021/web14-salondesrefuses
https://api.github.com/repos/boostcampwm-2021/web14-salondesrefuses
opened
(중간)[FE] 마이페이지 레이아웃
🚀 Front Priority: Middle
## 📃 이슈 내용 마이페이지 레이아웃 작업을 하면서 중간중간 들어오는 경매 로직 수정 요청을 반영한다. ## ✅ 체크 리스트 - [ ] 마이페이지 레이아웃 작성 - [ ] 지갑에서 내 작품을 조회해 확인하는 기능 구현 - [ ] 경매 로직 완성 - [ ] 어제 머지할때 나왔던 UI 관련 이슈 수정 ## 📌 레퍼런스
1.0
(중간)[FE] 마이페이지 레이아웃 - ## 📃 이슈 내용 마이페이지 레이아웃 작업을 하면서 중간중간 들어오는 경매 로직 수정 요청을 반영한다. ## ✅ 체크 리스트 - [ ] 마이페이지 레이아웃 작성 - [ ] 지갑에서 내 작품을 조회해 확인하는 기능 구현 - [ ] 경매 로직 완성 - [ ] 어제 머지할때 나왔던 UI 관련 이슈 수정 ## 📌 레퍼런스
non_test
중간 마이페이지 레이아웃 📃 이슈 내용 마이페이지 레이아웃 작업을 하면서 중간중간 들어오는 경매 로직 수정 요청을 반영한다 ✅ 체크 리스트 마이페이지 레이아웃 작성 지갑에서 내 작품을 조회해 확인하는 기능 구현 경매 로직 완성 어제 머지할때 나왔던 ui 관련 이슈 수정 📌 레퍼런스
0
191,583
14,594,787,240
IssuesEvent
2020-12-20 08:00:41
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
kubeup/kube-aliyun: vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go; 48 LoC
fresh small test vendored
Found a possible issue in [kubeup/kube-aliyun](https://www.github.com/kubeup/kube-aliyun) at [vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go](https://github.com/kubeup/kube-aliyun/blob/54e3b167f343aff686bd9c979792c1e0434b0b08/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L2895-L2942) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to node at line 2908 may start a goroutine [Click here to see the code in its original context.](https://github.com/kubeup/kube-aliyun/blob/54e3b167f343aff686bd9c979792c1e0434b0b08/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L2895-L2942) <details> <summary>Click here to show the 48 line(s) of Go which triggered the analyzer.</summary> ```go for _, node := range test.nodes { var podsOnNode []*v1.Pod for _, pod := range test.pods { if pod.Spec.NodeName == node.Name { podsOnNode = append(podsOnNode, pod) } } testFit := PodAffinityChecker{ info: nodeListInfo, podLister: schedulertesting.FakePodLister(test.pods), } nodeInfo := schedulercache.NewNodeInfo(podsOnNode...) nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} var meta interface{} = nil if !test.nometa { meta = PredicateMetadata(test.pod, nodeInfoMap) } fits, reasons, err := testFit.InterPodAffinityMatches(test.pod, meta, nodeInfo) if err != nil { t.Errorf("%s: unexpected error %v", test.test, err) } if !fits && !reflect.DeepEqual(reasons, affinityExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v", test.test, reasons) } affinity := test.pod.Spec.Affinity if affinity != nil && affinity.NodeAffinity != nil { nodeInfo := schedulercache.NewNodeInfo() nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} fits2, reasons, err := PodSelectorMatches(test.pod, PredicateMetadata(test.pod, nodeInfoMap), nodeInfo) if err != nil { t.Errorf("%s: unexpected error: %v", test.test, err) } if !fits2 && !reflect.DeepEqual(reasons, selectorExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v, want: %v", test.test, reasons, selectorExpectedFailureReasons) } fits = fits && fits2 } if fits != test.fits[node.Name] { t.Errorf("%s: expected %v for %s got %v", test.test, test.fits[node.Name], node.Name, fits) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 54e3b167f343aff686bd9c979792c1e0434b0b08
1.0
kubeup/kube-aliyun: vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go; 48 LoC - Found a possible issue in [kubeup/kube-aliyun](https://www.github.com/kubeup/kube-aliyun) at [vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go](https://github.com/kubeup/kube-aliyun/blob/54e3b167f343aff686bd9c979792c1e0434b0b08/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L2895-L2942) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to node at line 2908 may start a goroutine [Click here to see the code in its original context.](https://github.com/kubeup/kube-aliyun/blob/54e3b167f343aff686bd9c979792c1e0434b0b08/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L2895-L2942) <details> <summary>Click here to show the 48 line(s) of Go which triggered the analyzer.</summary> ```go for _, node := range test.nodes { var podsOnNode []*v1.Pod for _, pod := range test.pods { if pod.Spec.NodeName == node.Name { podsOnNode = append(podsOnNode, pod) } } testFit := PodAffinityChecker{ info: nodeListInfo, podLister: schedulertesting.FakePodLister(test.pods), } nodeInfo := schedulercache.NewNodeInfo(podsOnNode...) nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} var meta interface{} = nil if !test.nometa { meta = PredicateMetadata(test.pod, nodeInfoMap) } fits, reasons, err := testFit.InterPodAffinityMatches(test.pod, meta, nodeInfo) if err != nil { t.Errorf("%s: unexpected error %v", test.test, err) } if !fits && !reflect.DeepEqual(reasons, affinityExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v", test.test, reasons) } affinity := test.pod.Spec.Affinity if affinity != nil && affinity.NodeAffinity != nil { nodeInfo := schedulercache.NewNodeInfo() nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} fits2, reasons, err := PodSelectorMatches(test.pod, PredicateMetadata(test.pod, nodeInfoMap), nodeInfo) if err != nil { t.Errorf("%s: unexpected error: %v", test.test, err) } if !fits2 && !reflect.DeepEqual(reasons, selectorExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v, want: %v", test.test, reasons, selectorExpectedFailureReasons) } fits = fits && fits2 } if fits != test.fits[node.Name] { t.Errorf("%s: expected %v for %s got %v", test.test, test.fits[node.Name], node.Name, fits) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 54e3b167f343aff686bd9c979792c1e0434b0b08
test
kubeup kube aliyun vendor io kubernetes plugin pkg scheduler algorithm predicates predicates test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to node at line may start a goroutine click here to show the line s of go which triggered the analyzer go for node range test nodes var podsonnode pod for pod range test pods if pod spec nodename node name podsonnode append podsonnode pod testfit podaffinitychecker info nodelistinfo podlister schedulertesting fakepodlister test pods nodeinfo schedulercache newnodeinfo podsonnode nodeinfo setnode node nodeinfomap map schedulercache nodeinfo node name nodeinfo var meta interface nil if test nometa meta predicatemetadata test pod nodeinfomap fits reasons err testfit interpodaffinitymatches test pod meta nodeinfo if err nil t errorf s unexpected error v test test err if fits reflect deepequal reasons affinityexpectedfailurereasons t errorf s unexpected failure reasons v test test reasons affinity test pod spec affinity if affinity nil affinity nodeaffinity nil nodeinfo schedulercache newnodeinfo nodeinfo setnode node nodeinfomap map schedulercache nodeinfo node name nodeinfo reasons err podselectormatches test pod predicatemetadata test pod nodeinfomap nodeinfo if err nil t errorf s unexpected error v test test err if reflect deepequal reasons selectorexpectedfailurereasons t errorf s unexpected failure reasons v want v test test reasons selectorexpectedfailurereasons fits fits if fits test fits t errorf s expected v for s got v test test test fits node name fits leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
1
749,140
26,150,710,662
IssuesEvent
2022-12-30 13:09:44
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
closed
Test/validate node 16 for Renovate runtime
type:feature priority-3-medium status:in-progress
### What would you like Renovate to be able to do? Use node 16 as runtime, and upgrade to it for official Renovate images. ### If you have any ideas on how this should be implemented, please tell us here. It should be pretty stable by now, so perhaps we can return to having a matrix of node 14 + 16 for a while until we migrate to testing only 16? ### Is this a feature you are interested in implementing yourself? No
1.0
Test/validate node 16 for Renovate runtime - ### What would you like Renovate to be able to do? Use node 16 as runtime, and upgrade to it for official Renovate images. ### If you have any ideas on how this should be implemented, please tell us here. It should be pretty stable by now, so perhaps we can return to having a matrix of node 14 + 16 for a while until we migrate to testing only 16? ### Is this a feature you are interested in implementing yourself? No
non_test
test validate node for renovate runtime what would you like renovate to be able to do use node as runtime and upgrade to it for official renovate images if you have any ideas on how this should be implemented please tell us here it should be pretty stable by now so perhaps we can return to having a matrix of node for a while until we migrate to testing only is this a feature you are interested in implementing yourself no
0
350,051
31,848,786,017
IssuesEvent
2023-09-14 22:33:48
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
pkg/sql/sqlstats/insights/integration/integration_test: TestInsightsIntegrationForContention failed
C-test-failure O-robot release-blocker branch-release-23.1 T-cluster-observability
pkg/sql/sqlstats/insights/integration/integration_test.TestInsightsIntegrationForContention [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Ci_TestsAwsLinuxArm64_UnitTests/11779399?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Ci_TestsAwsLinuxArm64_UnitTests/11779399?buildTab=artifacts#/) on release-23.1 @ [2694d15acc16b61015b309364feb5ce07d4d7f27](https://github.com/cockroachdb/cockroach/commits/2694d15acc16b61015b309364feb5ce07d4d7f27): ``` === RUN TestInsightsIntegrationForContention test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/8db7e2fbdb77cb910d7e66552998132e/logTestInsightsIntegrationForContention1491236748 test_log_scope.go:79: use -show-logs to present logs inline insights_test.go:570: condition failed to evaluate within 45s: sql: Scan error on column index 2, name "durationms": converting NULL to float64 is unsupported panic.go:522: -- test log scope end -- test logs left over in: /artifacts/tmp/_tmp/8db7e2fbdb77cb910d7e66552998132e/logTestInsightsIntegrationForContention1491236748 --- FAIL: TestInsightsIntegrationForContention (48.66s) ``` <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/cluster-observability <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestInsightsIntegrationForContention.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-31551
1.0
pkg/sql/sqlstats/insights/integration/integration_test: TestInsightsIntegrationForContention failed - pkg/sql/sqlstats/insights/integration/integration_test.TestInsightsIntegrationForContention [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Ci_TestsAwsLinuxArm64_UnitTests/11779399?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Ci_TestsAwsLinuxArm64_UnitTests/11779399?buildTab=artifacts#/) on release-23.1 @ [2694d15acc16b61015b309364feb5ce07d4d7f27](https://github.com/cockroachdb/cockroach/commits/2694d15acc16b61015b309364feb5ce07d4d7f27): ``` === RUN TestInsightsIntegrationForContention test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/8db7e2fbdb77cb910d7e66552998132e/logTestInsightsIntegrationForContention1491236748 test_log_scope.go:79: use -show-logs to present logs inline insights_test.go:570: condition failed to evaluate within 45s: sql: Scan error on column index 2, name "durationms": converting NULL to float64 is unsupported panic.go:522: -- test log scope end -- test logs left over in: /artifacts/tmp/_tmp/8db7e2fbdb77cb910d7e66552998132e/logTestInsightsIntegrationForContention1491236748 --- FAIL: TestInsightsIntegrationForContention (48.66s) ``` <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/cluster-observability <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestInsightsIntegrationForContention.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-31551
test
pkg sql sqlstats insights integration integration test testinsightsintegrationforcontention failed pkg sql sqlstats insights integration integration test testinsightsintegrationforcontention with on release run testinsightsintegrationforcontention test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline insights test go condition failed to evaluate within sql scan error on column index name durationms converting null to is unsupported panic go test log scope end test logs left over in artifacts tmp tmp fail testinsightsintegrationforcontention help see also cc cockroachdb cluster observability jira issue crdb
1
108,920
4,363,990,265
IssuesEvent
2016-08-03 03:51:52
angular/angular-cli
https://api.github.com/repos/angular/angular-cli
closed
webpack branch installs ver. 1.0.0-beta.2-mobile.4 in project
priority: 1 (urgent) type: bug
> Please provide us with the following information: > --------------------------------------------------------------- 1. OS? Windows 7, 8 or 10. Linux (which distribution). Mac OSX (Yosemite? El Capitan?) OSX El Capitan 2. Versions. Please run `ng --version`. If there's nothing outputted, please run in a Terminal: `node --version` and paste the result here: (node:95661) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version. Could not start watchman; falling back to NodeWatcher for file system events. Visit http://ember-cli.com/user-guide/#watchman for more info. angular-cli: 1.0.0-beta.2-mobile.4 node: 6.3.0 os: darwin x64 3. Repro steps. Was this an app that wasn't created using the CLI? What change did you do on your code? etc. Created using CLI. No changes were made. 4. The log given by the failure. Normally this include a stack trace and some more information. gabrielterwesten@Gabriels-MBP ~/D/s/ng-cli-test> ng test (node:95665) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version. Could not start watchman; falling back to NodeWatcher for file system events. Visit http://ember-cli.com/user-guide/#watchman for more info. ⠋ BuildingNo angular-cli-build.js found. Please see the transition guide: https://github.com/angular-cli/angular-cli/blob/master/TRANSITION.md#user-content-brocfile-transition. Error: No angular-cli-build.js found. Please see the transition guide: https://github.com/angular-cli/angular-cli/blob/master/TRANSITION.md#user-content-brocfile-transition. at Class.module.exports.Task.extend.setupBroccoliBuilder (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/lib/models/builder.js:57:13) at Class.module.exports.Task.extend.init (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/lib/models/builder.js:89:10) at new Class (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/core-object/core-object.js:18:12) at Class.module.exports.Task.extend.run (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/lib/tasks/build.js:15:19) at win.checkWindowsElevation.then (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/addon/ng2/commands/test.js:63:30) at lib$rsvp$$internal$$tryCatch (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:1036:16) at lib$rsvp$$internal$$invokeCallback (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:1048:17) at /Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:331:11 at lib$rsvp$asap$$flush (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:1198:9) at _combinedTickCallback (internal/process/next_tick.js:67:7) 5. Mention any other details that might be useful. package.json specifies `"angular-cli": "^1.0.0-beta.11-webpack"`. Removing `^` in the semver spec and running `npm install` installs the correct version and fixes the issue. > --------------------------------------------------------------- > Thanks! We'll be in touch soon.
1.0
webpack branch installs ver. 1.0.0-beta.2-mobile.4 in project - > Please provide us with the following information: > --------------------------------------------------------------- 1. OS? Windows 7, 8 or 10. Linux (which distribution). Mac OSX (Yosemite? El Capitan?) OSX El Capitan 2. Versions. Please run `ng --version`. If there's nothing outputted, please run in a Terminal: `node --version` and paste the result here: (node:95661) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version. Could not start watchman; falling back to NodeWatcher for file system events. Visit http://ember-cli.com/user-guide/#watchman for more info. angular-cli: 1.0.0-beta.2-mobile.4 node: 6.3.0 os: darwin x64 3. Repro steps. Was this an app that wasn't created using the CLI? What change did you do on your code? etc. Created using CLI. No changes were made. 4. The log given by the failure. Normally this include a stack trace and some more information. gabrielterwesten@Gabriels-MBP ~/D/s/ng-cli-test> ng test (node:95665) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version. Could not start watchman; falling back to NodeWatcher for file system events. Visit http://ember-cli.com/user-guide/#watchman for more info. ⠋ BuildingNo angular-cli-build.js found. Please see the transition guide: https://github.com/angular-cli/angular-cli/blob/master/TRANSITION.md#user-content-brocfile-transition. Error: No angular-cli-build.js found. Please see the transition guide: https://github.com/angular-cli/angular-cli/blob/master/TRANSITION.md#user-content-brocfile-transition. at Class.module.exports.Task.extend.setupBroccoliBuilder (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/lib/models/builder.js:57:13) at Class.module.exports.Task.extend.init (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/lib/models/builder.js:89:10) at new Class (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/core-object/core-object.js:18:12) at Class.module.exports.Task.extend.run (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/lib/tasks/build.js:15:19) at win.checkWindowsElevation.then (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/angular-cli/addon/ng2/commands/test.js:63:30) at lib$rsvp$$internal$$tryCatch (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:1036:16) at lib$rsvp$$internal$$invokeCallback (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:1048:17) at /Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:331:11 at lib$rsvp$asap$$flush (/Users/gabrielterwesten/Dev/sandbox/ng-cli-test/node_modules/rsvp/dist/rsvp.js:1198:9) at _combinedTickCallback (internal/process/next_tick.js:67:7) 5. Mention any other details that might be useful. package.json specifies `"angular-cli": "^1.0.0-beta.11-webpack"`. Removing `^` in the semver spec and running `npm install` installs the correct version and fixes the issue. > --------------------------------------------------------------- > Thanks! We'll be in touch soon.
non_test
webpack branch installs ver beta mobile in project please provide us with the following information os windows or linux which distribution mac osx yosemite el capitan osx el capitan versions please run ng version if there s nothing outputted please run in a terminal node version and paste the result here node fs re evaluating native module sources is not supported if you are using the graceful fs module please update it to a more recent version could not start watchman falling back to nodewatcher for file system events visit for more info angular cli beta mobile node os darwin repro steps was this an app that wasn t created using the cli what change did you do on your code etc created using cli no changes were made the log given by the failure normally this include a stack trace and some more information gabrielterwesten gabriels mbp d s ng cli test ng test node fs re evaluating native module sources is not supported if you are using the graceful fs module please update it to a more recent version could not start watchman falling back to nodewatcher for file system events visit for more info ⠋ buildingno angular cli build js found please see the transition guide error no angular cli build js found please see the transition guide at class module exports task extend setupbroccolibuilder users gabrielterwesten dev sandbox ng cli test node modules angular cli lib models builder js at class module exports task extend init users gabrielterwesten dev sandbox ng cli test node modules angular cli lib models builder js at new class users gabrielterwesten dev sandbox ng cli test node modules core object core object js at class module exports task extend run users gabrielterwesten dev sandbox ng cli test node modules angular cli lib tasks build js at win checkwindowselevation then users gabrielterwesten dev sandbox ng cli test node modules angular cli addon commands test js at lib rsvp internal trycatch users gabrielterwesten dev sandbox ng cli test node modules rsvp dist rsvp js at lib rsvp internal invokecallback users gabrielterwesten dev sandbox ng cli test node modules rsvp dist rsvp js at users gabrielterwesten dev sandbox ng cli test node modules rsvp dist rsvp js at lib rsvp asap flush users gabrielterwesten dev sandbox ng cli test node modules rsvp dist rsvp js at combinedtickcallback internal process next tick js mention any other details that might be useful package json specifies angular cli beta webpack removing in the semver spec and running npm install installs the correct version and fixes the issue thanks we ll be in touch soon
0
235,398
19,343,895,560
IssuesEvent
2021-12-15 08:48:54
Roche-GSK/admiral
https://api.github.com/repos/Roche-GSK/admiral
closed
Rethink interface of baseline related derivations
help wanted discussion tester feedback
Several tester indicated that they found the naming of `derive_baseline()` to be confusing as they expected it to derive the baseline flag. In addition, questions where raised whether there's a need to have `derive_var_base()` and `derive_var_basec()` if you can use the underlying function to create those witht the added benefit of more explicit code as you would have to specify the `source_var` and `new_var` when using `derive_baseline()`. My suggestion would be to rename `derive_baseline()` to `derive_var_base()` and remove `derive_var_basec()`.
1.0
Rethink interface of baseline related derivations - Several tester indicated that they found the naming of `derive_baseline()` to be confusing as they expected it to derive the baseline flag. In addition, questions where raised whether there's a need to have `derive_var_base()` and `derive_var_basec()` if you can use the underlying function to create those witht the added benefit of more explicit code as you would have to specify the `source_var` and `new_var` when using `derive_baseline()`. My suggestion would be to rename `derive_baseline()` to `derive_var_base()` and remove `derive_var_basec()`.
test
rethink interface of baseline related derivations several tester indicated that they found the naming of derive baseline to be confusing as they expected it to derive the baseline flag in addition questions where raised whether there s a need to have derive var base and derive var basec if you can use the underlying function to create those witht the added benefit of more explicit code as you would have to specify the source var and new var when using derive baseline my suggestion would be to rename derive baseline to derive var base and remove derive var basec
1
56,051
23,688,412,876
IssuesEvent
2022-08-29 08:35:56
Azure/azure-cli
https://api.github.com/repos/Azure/azure-cli
closed
unable to set tde-key to a customer managed key imported from another keyvault
SQL Service Attention
When trying to set the tde-key to a customer managed key that was imported from another keyvault, the az sql server tde-key set returns: key not found. The sql server and the keyvault are in the same subscription, same region, same resource group. And the key exists in the keyvault az sql server tde-key set --server <sqlservername> --resource-group <resourcegroup> --server-key-type AzureKeyVault --kid <kid> response: "error":{"code":"ServerKeyNotFound","message":"The requested server key was not found."}} --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ea369391-5bd1-845a-83c8-f544d30728db * Version Independent ID: 251e46ee-9f7a-2e64-8ae0-30837b23897b * Content: [az sql server tde-key](https://docs.microsoft.com/en-us/cli/azure/sql/server/tde-key?view=azure-cli-latest#feedback) * Content Source: [latest/docs-ref-autogen/sql/server/tde-key.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/sql/server/tde-key.yml) * Service: **sql-database** * GitHub Login: @rloutlaw * Microsoft Alias: **routlaw**
1.0
unable to set tde-key to a customer managed key imported from another keyvault - When trying to set the tde-key to a customer managed key that was imported from another keyvault, the az sql server tde-key set returns: key not found. The sql server and the keyvault are in the same subscription, same region, same resource group. And the key exists in the keyvault az sql server tde-key set --server <sqlservername> --resource-group <resourcegroup> --server-key-type AzureKeyVault --kid <kid> response: "error":{"code":"ServerKeyNotFound","message":"The requested server key was not found."}} --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ea369391-5bd1-845a-83c8-f544d30728db * Version Independent ID: 251e46ee-9f7a-2e64-8ae0-30837b23897b * Content: [az sql server tde-key](https://docs.microsoft.com/en-us/cli/azure/sql/server/tde-key?view=azure-cli-latest#feedback) * Content Source: [latest/docs-ref-autogen/sql/server/tde-key.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/sql/server/tde-key.yml) * Service: **sql-database** * GitHub Login: @rloutlaw * Microsoft Alias: **routlaw**
non_test
unable to set tde key to a customer managed key imported from another keyvault when trying to set the tde key to a customer managed key that was imported from another keyvault the az sql server tde key set returns key not found the sql server and the keyvault are in the same subscription same region same resource group and the key exists in the keyvault az sql server tde key set server resource group server key type azurekeyvault kid response error code serverkeynotfound message the requested server key was not found document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service sql database github login rloutlaw microsoft alias routlaw
0
230,323
18,539,789,011
IssuesEvent
2021-10-21 14:59:19
zonemaster/zonemaster-engine
https://api.github.com/repos/zonemaster/zonemaster-engine
closed
Update the implementation of DNSSEC08
A-TestCase
The specification of DNSSEC08 has been updated by the merge of zonemaster/zonemaster#872. The specification update requires an implementation update.
1.0
Update the implementation of DNSSEC08 - The specification of DNSSEC08 has been updated by the merge of zonemaster/zonemaster#872. The specification update requires an implementation update.
test
update the implementation of the specification of has been updated by the merge of zonemaster zonemaster the specification update requires an implementation update
1
234,026
19,091,244,198
IssuesEvent
2021-11-29 12:21:03
spacemeshos/go-svm
https://api.github.com/repos/spacemeshos/go-svm
closed
Testing a `Spawn` transaction with a ctor that doesn't exist
good first issue svm test
When trying to Spawn a new account and setting a `ctor` that doesn't exist we'd expect to get back `Function Not Found` Here is a link to a valid `Spawn` transaction https://github.com/spacemeshos/go-svm/blob/710604ce6fcca1f31d3f79a60a8880db98d1ef6d/svm/inputs/spawn/initialize.json In order to generate an input to the test: * Duplicate the above `initialize.json` and name it differently. * Rename the value under `ctor_name` to something such as `new` or similar. * `cd` to `svm/inputs` and execute `./generate_txs.sh` - this should generate the binary transaction to be used for the test.
1.0
Testing a `Spawn` transaction with a ctor that doesn't exist - When trying to Spawn a new account and setting a `ctor` that doesn't exist we'd expect to get back `Function Not Found` Here is a link to a valid `Spawn` transaction https://github.com/spacemeshos/go-svm/blob/710604ce6fcca1f31d3f79a60a8880db98d1ef6d/svm/inputs/spawn/initialize.json In order to generate an input to the test: * Duplicate the above `initialize.json` and name it differently. * Rename the value under `ctor_name` to something such as `new` or similar. * `cd` to `svm/inputs` and execute `./generate_txs.sh` - this should generate the binary transaction to be used for the test.
test
testing a spawn transaction with a ctor that doesn t exist when trying to spawn a new account and setting a ctor that doesn t exist we d expect to get back function not found here is a link to a valid spawn transaction in order to generate an input to the test duplicate the above initialize json and name it differently rename the value under ctor name to something such as new or similar cd to svm inputs and execute generate txs sh this should generate the binary transaction to be used for the test
1
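The reproduction steps in the record above (duplicate `initialize.json`, point `ctor_name` at a function that doesn't exist, then regenerate the binary transaction) can be sketched in Python. This is an illustrative sketch only: the `ctor_name` field name comes from the issue text, but the template contents and the helper below are assumptions, not the real go-svm input format.

```python
import copy

# Minimal stand-in for the spawn template described in the issue
# (the real initialize.json in spacemeshos/go-svm has more fields).
spawn_template = {
    "ctor_name": "initialize",  # field name taken from the issue text
    "ctor_data": [],
}

def make_bad_ctor_input(template, bad_name="new"):
    """Return a copy of the spawn input whose ctor doesn't exist,
    leaving the original template untouched."""
    bad = copy.deepcopy(template)
    bad["ctor_name"] = bad_name
    return bad

bad_input = make_bad_ctor_input(spawn_template)
```

Running the repo's `./generate_txs.sh` over a JSON file produced this way would then, per the issue, be expected to yield a transaction that fails with `Function Not Found`.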
1,484
2,550,330,986
IssuesEvent
2015-02-01 12:26:23
jGleitz/JUnit-KIT
https://api.github.com/repos/jGleitz/JUnit-KIT
closed
Check invalid commands - Task A
proposal Test
We have to test invalid interactive commandline entries. E.g. search (without param) quit (with param) info (with param) some-invalid-command
1.0
Check invalid commands - Task A - We have to test invalid interactive commandline entries. E.g. search (without param) quit (with param) info (with param) some-invalid-command
test
check invalid commands task a we have to test invalid interactive commandline entries e g search without param quit with param info with param some invalid command
1
332,971
10,113,415,503
IssuesEvent
2019-07-30 16:40:21
kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines
closed
Support mutl-platform storage for kaniko
area/sdk/dsl/compiler help wanted kind/feature platform/other priority/p1
Currently kaniko pod mounts the GCP service accounts so it can access to GCP API. https://github.com/kubeflow/pipelines/pull/343 We should consider make this a configurable setting so people can switch to non GCP stack as they want.
1.0
Support mutl-platform storage for kaniko - Currently kaniko pod mounts the GCP service accounts so it can access to GCP API. https://github.com/kubeflow/pipelines/pull/343 We should consider make this a configurable setting so people can switch to non GCP stack as they want.
non_test
support mutl platform storage for kaniko currently kaniko pod mounts the gcp service accounts so it can access to gcp api we should consider make this a configurable setting so people can switch to non gcp stack as they want
0
148,985
11,873,863,567
IssuesEvent
2020-03-26 17:59:48
jualoppaz/anhqv-stats-front
https://api.github.com/repos/jualoppaz/anhqv-stats-front
opened
Implementar tests unitarios para la cabecera de la web
test
El objetivo de esta tarea es implementar tests unitarios del componente **Header** de la web.
1.0
Implementar tests unitarios para la cabecera de la web - El objetivo de esta tarea es implementar tests unitarios del componente **Header** de la web.
test
implementar tests unitarios para la cabecera de la web el objetivo de esta tarea es implementar tests unitarios del componente header de la web
1
17,850
3,643,594,701
IssuesEvent
2016-02-15 03:09:33
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
Failed tests (12813):
test-failure
The following test appears to have failed: [#12813](https://circleci.com/gh/cockroachdb/cockroach/12813): ``` I0215 03:08:40.238788 636 kv/dist_sender.go:809 range 1: new cached leader store 2 (old: 0) W0215 03:08:40.239011 636 kv/dist_sender.go:678 range 1: replica node_id:1 store_id:1 replica_id:1 not leader; leader is node_id:2 store_id:2 replica_id:2 I0215 03:08:40.239086 636 kv/range_cache.go:128 lookup range descriptor: key=/Min I0215 03:08:40.239506 636 kv/range_cache.go:189 adding descriptor: key=/Meta2/Max desc=range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > next_replica_id:4 ================== WARNING: DATA RACE Read by goroutine 85: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:233 +0x32 math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() -- github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance() /go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1237 +0xf8c testing.tRunner() /tmp/workdir/go/src/testing/testing.go:456 +0xdc Previous write by goroutine 177: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:233 +0x48 math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() /tmp/workdir/go/src/math/rand/rand.go:101 +0x9f -- github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop.func1() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:375 +0x2f4 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1() 
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:98 +0x5f Goroutine 85 (running) created at: testing.RunTests() /tmp/workdir/go/src/testing/testing.go:561 +0xaa3 testing.(*M).Run() /tmp/workdir/go/src/testing/testing.go:494 +0xe4 github.com/cockroachdb/cockroach/util/leaktest.TestMainWithLeakCheck() /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:37 +0x2e github.com/cockroachdb/cockroach/storage_test.TestMain() /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:38 +0x33 main.main() github.com/cockroachdb/cockroach/storage/_test/_testmain.go:516 +0x209 Goroutine 177 (running) created at: github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:99 +0x6f github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:388 +0x105 github.com/cockroachdb/cockroach/storage.(*baseQueue).Start() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:234 +0x42 github.com/cockroachdb/cockroach/storage.(*replicateQueue).Start() <autogenerated>:95 +0x5f github.com/cockroachdb/cockroach/storage.(*replicaScanner).Start() /go/src/github.com/cockroachdb/cockroach/storage/scanner.go:106 +0xd3 -- /go/src/github.com/cockroachdb/cockroach/storage/store.go:605 +0x160 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:98 +0x5f ================== ================== WARNING: DATA RACE Read by goroutine 85: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:238 +0xb0 math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() -- github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance() 
/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1237 +0xf8c testing.tRunner() /tmp/workdir/go/src/testing/testing.go:456 +0xdc Previous write by goroutine 177: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:238 +0xcc math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() /tmp/workdir/go/src/math/rand/rand.go:101 +0x9f -- github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop.func1() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:375 +0x2f4 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:98 +0x5f Goroutine 85 (running) created at: testing.RunTests() /tmp/workdir/go/src/testing/testing.go:561 +0xaa3 testing.(*M).Run() /tmp/workdir/go/src/testing/testing.go:494 +0xe4 github.com/cockroachdb/cockroach/util/leaktest.TestMainWithLeakCheck() /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:37 +0x2e github.com/cockroachdb/cockroach/storage_test.TestMain() /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:38 +0x33 main.main() github.com/cockroachdb/cockroach/storage/_test/_testmain.go:516 +0x209 Goroutine 177 (running) created at: github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:99 +0x6f github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:388 +0x105 github.com/cockroachdb/cockroach/storage.(*baseQueue).Start() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:234 +0x42 github.com/cockroachdb/cockroach/storage.(*replicateQueue).Start() <autogenerated>:95 +0x5f github.com/cockroachdb/cockroach/storage.(*replicaScanner).Start() 
/go/src/github.com/cockroachdb/cockroach/storage/scanner.go:106 +0xd3 -- --- PASS: TestLogRebalances (0.95s) === RUN Example_rebalancing --- PASS: Example_rebalancing (0.72s) PASS Found 2 data race(s) FAIL github.com/cockroachdb/cockroach/storage 35.728s === RUN TestBatchBasics I0215 03:08:47.466476 695 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchBasics (0.01s) === RUN TestBatchGet I0215 03:08:47.471659 695 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchGet (0.01s) === RUN TestBatchMerge I0215 03:08:47.479885 695 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchMerge (0.01s) === RUN TestBatchProto ``` Please assign, take a look and update the issue accordingly.
1.0
Failed tests (12813): - The following test appears to have failed: [#12813](https://circleci.com/gh/cockroachdb/cockroach/12813): ``` I0215 03:08:40.238788 636 kv/dist_sender.go:809 range 1: new cached leader store 2 (old: 0) W0215 03:08:40.239011 636 kv/dist_sender.go:678 range 1: replica node_id:1 store_id:1 replica_id:1 not leader; leader is node_id:2 store_id:2 replica_id:2 I0215 03:08:40.239086 636 kv/range_cache.go:128 lookup range descriptor: key=/Min I0215 03:08:40.239506 636 kv/range_cache.go:189 adding descriptor: key=/Meta2/Max desc=range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > next_replica_id:4 ================== WARNING: DATA RACE Read by goroutine 85: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:233 +0x32 math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() -- github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance() /go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1237 +0xf8c testing.tRunner() /tmp/workdir/go/src/testing/testing.go:456 +0xdc Previous write by goroutine 177: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:233 +0x48 math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() /tmp/workdir/go/src/math/rand/rand.go:101 +0x9f -- github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop.func1() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:375 +0x2f4 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1() 
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:98 +0x5f Goroutine 85 (running) created at: testing.RunTests() /tmp/workdir/go/src/testing/testing.go:561 +0xaa3 testing.(*M).Run() /tmp/workdir/go/src/testing/testing.go:494 +0xe4 github.com/cockroachdb/cockroach/util/leaktest.TestMainWithLeakCheck() /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:37 +0x2e github.com/cockroachdb/cockroach/storage_test.TestMain() /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:38 +0x33 main.main() github.com/cockroachdb/cockroach/storage/_test/_testmain.go:516 +0x209 Goroutine 177 (running) created at: github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:99 +0x6f github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:388 +0x105 github.com/cockroachdb/cockroach/storage.(*baseQueue).Start() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:234 +0x42 github.com/cockroachdb/cockroach/storage.(*replicateQueue).Start() <autogenerated>:95 +0x5f github.com/cockroachdb/cockroach/storage.(*replicaScanner).Start() /go/src/github.com/cockroachdb/cockroach/storage/scanner.go:106 +0xd3 -- /go/src/github.com/cockroachdb/cockroach/storage/store.go:605 +0x160 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:98 +0x5f ================== ================== WARNING: DATA RACE Read by goroutine 85: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:238 +0xb0 math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() -- github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance() 
/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1237 +0xf8c testing.tRunner() /tmp/workdir/go/src/testing/testing.go:456 +0xdc Previous write by goroutine 177: math/rand.(*rngSource).Int63() /tmp/workdir/go/src/math/rand/rng.go:238 +0xcc math/rand.(*Rand).Int63() /tmp/workdir/go/src/math/rand/rand.go:46 +0x54 math/rand.(*Rand).Int31() /tmp/workdir/go/src/math/rand/rand.go:52 +0x2e math/rand.(*Rand).Int31n() /tmp/workdir/go/src/math/rand/rand.go:84 +0x9a math/rand.(*Rand).Intn() /tmp/workdir/go/src/math/rand/rand.go:101 +0x9f -- github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop.func1() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:375 +0x2f4 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:98 +0x5f Goroutine 85 (running) created at: testing.RunTests() /tmp/workdir/go/src/testing/testing.go:561 +0xaa3 testing.(*M).Run() /tmp/workdir/go/src/testing/testing.go:494 +0xe4 github.com/cockroachdb/cockroach/util/leaktest.TestMainWithLeakCheck() /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:37 +0x2e github.com/cockroachdb/cockroach/storage_test.TestMain() /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:38 +0x33 main.main() github.com/cockroachdb/cockroach/storage/_test/_testmain.go:516 +0x209 Goroutine 177 (running) created at: github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker() /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:99 +0x6f github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:388 +0x105 github.com/cockroachdb/cockroach/storage.(*baseQueue).Start() /go/src/github.com/cockroachdb/cockroach/storage/queue.go:234 +0x42 github.com/cockroachdb/cockroach/storage.(*replicateQueue).Start() <autogenerated>:95 +0x5f github.com/cockroachdb/cockroach/storage.(*replicaScanner).Start() 
/go/src/github.com/cockroachdb/cockroach/storage/scanner.go:106 +0xd3 -- --- PASS: TestLogRebalances (0.95s) === RUN Example_rebalancing --- PASS: Example_rebalancing (0.72s) PASS Found 2 data race(s) FAIL github.com/cockroachdb/cockroach/storage 35.728s === RUN TestBatchBasics I0215 03:08:47.466476 695 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchBasics (0.01s) === RUN TestBatchGet I0215 03:08:47.471659 695 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchGet (0.01s) === RUN TestBatchMerge I0215 03:08:47.479885 695 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchMerge (0.01s) === RUN TestBatchProto ``` Please assign, take a look and update the issue accordingly.
test
failed tests the following test appears to have failed kv dist sender go range new cached leader store old kv dist sender go range replica node id store id replica id not leader leader is node id store id replica id kv range cache go lookup range descriptor key min kv range cache go adding descriptor key max desc range id start key end key replicas replicas replicas next replica id warning data race read by goroutine math rand rngsource tmp workdir go src math rand rng go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand intn github com cockroachdb cockroach storage test teststorerangerebalance go src github com cockroachdb cockroach storage client raft test go testing trunner tmp workdir go src testing testing go previous write by goroutine math rand rngsource tmp workdir go src math rand rng go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand intn tmp workdir go src math rand rand go github com cockroachdb cockroach storage basequeue processloop go src github com cockroachdb cockroach storage queue go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine running created at testing runtests tmp workdir go src testing testing go testing m run tmp workdir go src testing testing go github com cockroachdb cockroach util leaktest testmainwithleakcheck go src github com cockroachdb cockroach util leaktest leaktest go github com cockroachdb cockroach storage test testmain go src github com cockroachdb cockroach storage main test go main main github com cockroachdb cockroach storage test testmain go goroutine running created at github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go github 
com cockroachdb cockroach storage basequeue processloop go src github com cockroachdb cockroach storage queue go github com cockroachdb cockroach storage basequeue start go src github com cockroachdb cockroach storage queue go github com cockroachdb cockroach storage replicatequeue start github com cockroachdb cockroach storage replicascanner start go src github com cockroachdb cockroach storage scanner go go src github com cockroachdb cockroach storage store go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go warning data race read by goroutine math rand rngsource tmp workdir go src math rand rng go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand intn github com cockroachdb cockroach storage test teststorerangerebalance go src github com cockroachdb cockroach storage client raft test go testing trunner tmp workdir go src testing testing go previous write by goroutine math rand rngsource tmp workdir go src math rand rng go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand tmp workdir go src math rand rand go math rand rand intn tmp workdir go src math rand rand go github com cockroachdb cockroach storage basequeue processloop go src github com cockroachdb cockroach storage queue go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine running created at testing runtests tmp workdir go src testing testing go testing m run tmp workdir go src testing testing go github com cockroachdb cockroach util leaktest testmainwithleakcheck go src github com cockroachdb cockroach util leaktest leaktest go github com cockroachdb cockroach storage test testmain go src github com cockroachdb cockroach storage main test go main main github com 
cockroachdb cockroach storage test testmain go goroutine running created at github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go github com cockroachdb cockroach storage basequeue processloop go src github com cockroachdb cockroach storage queue go github com cockroachdb cockroach storage basequeue start go src github com cockroachdb cockroach storage queue go github com cockroachdb cockroach storage replicatequeue start github com cockroachdb cockroach storage replicascanner start go src github com cockroachdb cockroach storage scanner go pass testlogrebalances run example rebalancing pass example rebalancing pass found data race s fail github com cockroachdb cockroach storage run testbatchbasics storage engine rocksdb go closing in memory rocksdb instance pass testbatchbasics run testbatchget storage engine rocksdb go closing in memory rocksdb instance pass testbatchget run testbatchmerge storage engine rocksdb go closing in memory rocksdb instance pass testbatchmerge run testbatchproto please assign take a look and update the issue accordingly
1
3,624
2,774,000,050
IssuesEvent
2015-05-04 03:00:47
SitecorePowerShell/Console
https://api.github.com/repos/SitecorePowerShell/Console
closed
Getting Started Guide
area-documentation
Create a guide to help users get up to speed on SPE. Lessons should align with scripts found in the Getting Started module. Include shortcuts for both console and ISE.
1.0
Getting Started Guide - Create a guide to help users get up to speed on SPE. Lessons should align with scripts found in the Getting Started module. Include shortcuts for both console and ISE.
non_test
getting started guide create a guide to help users get up to speed on spe lessons should align with scripts found in the getting started module include shortcuts for both console and ise
0
292,835
25,243,933,668
IssuesEvent
2022-11-15 09:42:40
0xf4lc0n/gun-lender
https://api.github.com/repos/0xf4lc0n/gun-lender
closed
Typos in method names
invalid tests
https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/UserRepositoryTest.java#L60 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/AmmoRepositoryTest.java#L58 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/GunRepositoryTest.java#L59 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/LendingRepositoryTest.java#L67 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/LendingRepositoryTest.java#L98 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/LendingRepositoryTest.java#L130 Instead of NoneExisting there should be NonExisting.
1.0
Typos in method names - https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/UserRepositoryTest.java#L60 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/AmmoRepositoryTest.java#L58 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/GunRepositoryTest.java#L59 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/LendingRepositoryTest.java#L67 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/LendingRepositoryTest.java#L98 https://github.com/0xf4lc0n/gun-lender/blob/c69d780ed8dedfcc058fc08608eb0477c080c389/src/test/java/LendingRepositoryTest.java#L130 Instead of NoneExisting there should be NonExisting.
test
typos in method names instead of noneexisting there should be nonexisting
1
207,267
7,126,761,209
IssuesEvent
2018-01-20 14:18:50
Jumpscale/go-raml
https://api.github.com/repos/Jumpscale/go-raml
opened
js9 not used
priority_critical
the idea & goal was that people could use easy tool from js93, not usable today .../code/github/jumpscale/lib9/JumpScale9Lib/tools/raml/RamlTools.py
1.0
js9 not used - the idea & goal was that people could use easy tool from js93, not usable today .../code/github/jumpscale/lib9/JumpScale9Lib/tools/raml/RamlTools.py
non_test
not used the idea goal was that people could use easy tool from not usable today code github jumpscale tools raml ramltools py
0
628,159
19,976,964,859
IssuesEvent
2022-01-29 08:29:40
Hukino/Sky-traveller
https://api.github.com/repos/Hukino/Sky-traveller
opened
追加[エネミー]:グラスプバット
😵help wanted 🗒️ priority: low ✨ feature ⚔item 📦need modeling
### 概要 - 後ろ足が発達したコウモリ - プレイヤーを掴んで噛みつく攻撃 ### タスク - [ ] モデルを制作 - [ ] ステータスを制作 - [ ] AIを制作
1.0
追加[エネミー]:グラスプバット - ### 概要 - 後ろ足が発達したコウモリ - プレイヤーを掴んで噛みつく攻撃 ### タスク - [ ] モデルを制作 - [ ] ステータスを制作 - [ ] AIを制作
non_test
追加[エネミー]:グラスプバット 概要 後ろ足が発達したコウモリ プレイヤーを掴んで噛みつく攻撃 タスク モデルを制作 ステータスを制作 aiを制作
0
81,893
7,806,851,640
IssuesEvent
2018-06-11 15:09:15
mapbox/mapbox-gl-native
https://api.github.com/repos/mapbox/mapbox-gl-native
opened
Some iOS integration tests fail on iOS 9.3
iOS tests
<!-- Hello and thanks for contributing! To help us diagnose your problem quickly, please: - Include a minimal demonstration of the bug, including code, logs, and screenshots. - Ensure you can reproduce the bug using the latest release. - Only post to report a bug or request a feature; direct all other questions to: https://stackoverflow.com/questions/tagged/mapbox --> **Platform:** iOS **Mapbox SDK version:** 4.0.0 When targeting an iOS 9.3 simulator, roughly half of the current integration tests fail. Circle currently only tests the latest deployment version, so this hasn't been detected before.
1.0
Some iOS integration tests fail on iOS 9.3 - <!-- Hello and thanks for contributing! To help us diagnose your problem quickly, please: - Include a minimal demonstration of the bug, including code, logs, and screenshots. - Ensure you can reproduce the bug using the latest release. - Only post to report a bug or request a feature; direct all other questions to: https://stackoverflow.com/questions/tagged/mapbox --> **Platform:** iOS **Mapbox SDK version:** 4.0.0 When targeting an iOS 9.3 simulator, roughly half of the current integration tests fail. Circle currently only tests the latest deployment version, so this hasn't been detected before.
test
some ios integration tests fail on ios hello and thanks for contributing to help us diagnose your problem quickly please include a minimal demonstration of the bug including code logs and screenshots ensure you can reproduce the bug using the latest release only post to report a bug or request a feature direct all other questions to platform ios mapbox sdk version when targeting an ios simulator roughly half of the current integration tests fail circle currently only tests the latest deployment version so this hasn t been detected before
1
10,641
3,412,042,017
IssuesEvent
2015-12-05 15:35:30
pinax/pinax-stripe
https://api.github.com/repos/pinax/pinax-stripe
closed
Docs needed for testing/using Webhooks
documentation
I've added docs in the [README](https://github.com/epicserve/dsp-example/blob/master/README.md) for the example site I created on how to setup webhooks, if you want to copy or borrow them for the DSP docs.
1.0
Docs needed for testing/using Webhooks - I've added docs in the [README](https://github.com/epicserve/dsp-example/blob/master/README.md) for the example site I created on how to setup webhooks, if you want to copy or borrow them for the DSP docs.
non_test
docs needed for testing using webhooks i ve added docs in the for the example site i created on how to setup webhooks if you want to copy or borrow them for the dsp docs
0
302,529
22,828,940,379
IssuesEvent
2022-07-12 11:09:26
PlumyGame/mgpp
https://api.github.com/repos/PlumyGame/mgpp
closed
Build a documentation page
documentation
As the more and more features are introduced in mgpp, the guideline is essential for users. ## What we need 1. A dynamic documentation page for switchable DSL. 2. Detailed descriptive document for each task and configuration.
1.0
Build a documentation page - As the more and more features are introduced in mgpp, the guideline is essential for users. ## What we need 1. A dynamic documentation page for switchable DSL. 2. Detailed descriptive document for each task and configuration.
non_test
build a documentation page as the more and more features are introduced in mgpp the guideline is essential for users what we need a dynamic documentation page for switchable dsl detailed descriptive document for each task and configuration
0
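Each record in this listing pairs a string `label` (`test` / `non_test`) with a numeric `binary_label` (1 / 0). A quick consistency check over a couple of rows can be sketched in Python; the field names come from the column headers above, while the row-loading shape and helper function are assumptions for illustration, not part of the dataset.

```python
# Sanity-check that the string label and the binary label agree,
# using two rows transcribed from the listing above.
rows = [
    {"label": "test", "binary_label": 1},      # e.g. the zonemaster DNSSEC08 record
    {"label": "non_test", "binary_label": 0},  # e.g. the mgpp documentation record
]

def labels_consistent(row):
    # "test" must map to 1 and "non_test" to 0.
    return (row["label"] == "test") == (row["binary_label"] == 1)

assert all(labels_consistent(r) for r in rows)
```

The same predicate could be applied over the full dump to catch rows where the two label columns drifted apart.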
56,411
32,000,426,181
IssuesEvent
2023-09-21 11:56:49
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Ordering of `genquery` output does not match the output of `bazel query --order_output=full`
type: bug team-Performance
### Description of the bug: According to docs on the `genquery` rule [here](https://bazel.build/reference/be/general#genquery), output from the rule is supposed to match the order dictated by the flag value `--order_output=full`, and it doesn't seem to be. ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. https://github.com/Faqa/bazel_query_ordering Example repo here. You can compare the output of `//:b_query` to the command `bazel query --notool_deps --order_output=full 'deps(//:b)'` - in the former, the first thing seen is `//:a`, and in the latter, the first target is `//:b`. ### Which operating system are you running Bazel on? OSX ### What is the output of `bazel info release`? release 5.2.0 ### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel. _No response_ ### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ? _No response_ ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
True
Ordering of `genquery` output does not match the output of `bazel query --order_output=full` - ### Description of the bug: According to docs on the `genquery` rule [here](https://bazel.build/reference/be/general#genquery), output from the rule is supposed to match the order dictated by the flag value `--order_output=full`, and it doesn't seem to be. ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. https://github.com/Faqa/bazel_query_ordering Example repo here. You can compare the output of `//:b_query` to the command `bazel query --notool_deps --order_output=full 'deps(//:b)'` - in the former, the first thing seen is `//:a`, and in the latter, the first target is `//:b`. ### Which operating system are you running Bazel on? OSX ### What is the output of `bazel info release`? release 5.2.0 ### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel. _No response_ ### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ? _No response_ ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
non_test
ordering of genquery output does not match the output of bazel query order output full description of the bug according to docs on the genquery rule output from the rule is supposed to match the order dictated by the flag value order output full and it doesn t seem to be what s the simplest easiest way to reproduce this bug please provide a minimal example if possible example repo here you can compare the output of b query to the command bazel query notool deps order output full deps b in the former the first thing seen is a and in the latter the first target is b which operating system are you running bazel on osx what is the output of bazel info release release if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
0
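For reference, the `genquery` repro described in the record above can be sketched as a minimal `BUILD` file. This is a hypothetical reconstruction — the linked `Faqa/bazel_query_ordering` repo may differ in details such as target names and rule kinds — but it shows the shape of the comparison: a `genquery` target whose output ordering is expected to match `--order_output=full`.

```starlark
# BUILD — hypothetical sketch of the ordering repro (assumed target names).

# //:a is a leaf dependency of //:b.
genrule(
    name = "a",
    outs = ["a.out"],
    cmd = "touch $@",
)

genrule(
    name = "b",
    srcs = [":a"],
    outs = ["b.out"],
    cmd = "cp $(location :a) $@",
)

# Per the genquery docs, this output should be ordered as if
# --order_output=full were passed, i.e. //:b before //:a.
genquery(
    name = "b_query",
    expression = "deps(//:b)",
    opts = ["--notool_deps"],
    scope = [":b"],
)
```

To reproduce the mismatch, one would compare `bazel build //:b_query` (then inspect `bazel-bin/b_query`) against `bazel query --notool_deps --order_output=full 'deps(//:b)'` and check which of `//:a` and `//:b` appears first in each.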