Dataset schema (GitHub `IssuesEvent` records, 15 columns):

| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 4 to 112 |
| repo_url | string | lengths 33 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 1.02k |
| labels | string | lengths 4 to 1.54k |
| body | string | lengths 1 to 262k |
| index | string | 17 classes |
| text_combine | string | lengths 95 to 262k |
| label | string | 2 classes |
| text | string | lengths 96 to 252k |
| binary_label | int64 | 0 to 1 |

Sample rows:
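The schema above can be mirrored in pandas to see how the columns appear to relate. This is a minimal sketch with two hypothetical rows, assuming (based on the values shown in the sample rows) that `binary_label` is derived from the two-class `label` column (`test` → 1, `non_test` → 0):

```python
import pandas as pd

# Two hypothetical rows mirroring a subset of the schema above.
rows = [
    {"type": "IssuesEvent", "repo": "nicorithner/Sweater_weather",
     "action": "closed", "label": "non_test"},
    {"type": "IssuesEvent", "repo": "FredHutch/Oncoscape",
     "action": "closed", "label": "test"},
]
df = pd.DataFrame(rows)

# Recompute binary_label from label, matching the int64 dtype in the schema.
df["binary_label"] = (df["label"] == "test").astype("int64")
print(df["binary_label"].tolist())  # [0, 1]
```

The mapping direction (which class maps to 1) is an inference from the sample rows below, where `label: test` rows carry `binary_label: 1`.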
Unnamed: 0: 168,486
id: 14,149,887,828
type: IssuesEvent
created_at: 2020-11-11 02:00:22
repo: nicorithner/Sweater_weather
repo_url: https://api.github.com/repos/nicorithner/Sweater_weather
action: closed
title: Installation instructions
labels: documentation
body:
Explain how to fork, clone, basic installation including gems and how to run the tests.
- [x] Explanation on how to fork
- [x] Explanation on how to clone
- [x] Explanation on how to do the basic installation
- [x] rails db:{create,migrate}
- [x] Gemfile: Rspec -> rails generate rspec:install (config brief explanation? ref to docs?)
- [x] Gemfile: Figaro -> bundle exec figaro install (config brief explanation? ref to docs?)
- [x] Gemfile: All other gems
- [x] Explanation on how to run the tests
index: 1.0
text_combine:
Installation instructions - Explain how to fork, clone, basic installation including gems and how to run the tests.
- [x] Explanation on how to fork
- [x] Explanation on how to clone
- [x] Explanation on how to do the basic installation
- [x] rails db:{create,migrate}
- [x] Gemfile: Rspec -> rails generate rspec:install (config brief explanation? ref to docs?)
- [x] Gemfile: Figaro -> bundle exec figaro install (config brief explanation? ref to docs?)
- [x] Gemfile: All other gems
- [x] Explanation on how to run the tests
label: non_test
text:
installation instructions explain how to fork clone basic installation including gems and how to run the tests explanation on how to fork explanation on how to clone explanation on how to do the basic installation rails db create migrate gemfile rspec rails generate rspec install config brief explanation ref to docs gemfile figaro bundle exec figaro install config brief explanation ref to docs gemfile all other gems explanation on how to run the tests
binary_label: 0

---
Unnamed: 0: 14,459
id: 3,402,554,723
type: IssuesEvent
created_at: 2015-12-03 00:38:44
repo: FredHutch/Oncoscape
repo_url: https://api.github.com/repos/FredHutch/Oncoscape
action: closed
title: try/catch error for age & survival ranges handler
labels: bug needs test PLSR
body:
handleAgeAtDxAndSurvivalRanges when testing TCGAbrca data:
"OncoDev14 (version 1.4.88) exception! Error in (function (classes, fdef, mtable) : unable to find an inherited method for function ‘summarizeNumericPatientAttributes’ for signature ‘"NULL"’ . incoming msg: summarizePLSRPatientAttributes; handleAgeAtDxAndSurvivalRanges; request; c("AgeDx", "Survival")"
possibly due to missing age at Dx?
index: 1.0
text_combine:
try/catch error for age & survival ranges handler - handleAgeAtDxAndSurvivalRanges when testing TCGAbrca data:
"OncoDev14 (version 1.4.88) exception! Error in (function (classes, fdef, mtable) : unable to find an inherited method for function ‘summarizeNumericPatientAttributes’ for signature ‘"NULL"’ . incoming msg: summarizePLSRPatientAttributes; handleAgeAtDxAndSurvivalRanges; request; c("AgeDx", "Survival")"
possibly due to missing age at Dx?
label: test
text:
try catch error for age survival ranges handler handleageatdxandsurvivalranges when testing tcgabrca data version exception error in function classes fdef mtable unable to find an inherited method for function ‘summarizenumericpatientattributes’ for signature ‘ null ’ incoming msg summarizeplsrpatientattributes handleageatdxandsurvivalranges request c agedx survival possibly due to missing age at dx
binary_label: 1

---
Unnamed: 0: 267,366
id: 23,295,240,999
type: IssuesEvent
created_at: 2022-08-06 13:08:34
repo: wso2/product-is
repo_url: https://api.github.com/repos/wso2/product-is
action: opened
title: Internal everyone role not shown for admin in fresh tenant.
labels: bug Affected-6.0.0 IS-6.0.0-Test-Hackathon 6.0.0-rc-testing
body:
When we create a new tenant and login to it from the console app (don't login from management console) view the admin user and check the roles you will only see the admin role and internal everyone role is not visible. Once we go to the role section and come back or log into the management console and come back the role will be visible in the user.
index: 2.0
text_combine:
Internal everyone role not shown for admin in fresh tenant. - When we create a new tenant and login to it from the console app (don't login from management console) view the admin user and check the roles you will only see the admin role and internal everyone role is not visible. Once we go to the role section and come back or log into the management console and come back the role will be visible in the user.
label: test
text:
internal everyone role not shown for admin in fresh tenant when we create a new tenant and login to it from the console app don t login from management console view the admin user and check the roles you will only see the admin role and internal everyone role is not visible once we go to the role section and come back or log into the management console and come back the role will be visible in the user
binary_label: 1

---
Unnamed: 0: 2,121
id: 2,882,062,331
type: IssuesEvent
created_at: 2015-06-11 00:53:35
repo: quicklisp/quicklisp-projects
repo_url: https://api.github.com/repos/quicklisp/quicklisp-projects
action: closed
title: Remove asdf-contrib
labels: canbuild
body:
This package has been content-free for years, and isn't anticipated to become useful, ever. It fails the latest quicklisp metadata requirements. I'll delete it shortly.
index: 1.0
text_combine:
Remove asdf-contrib - This package has been content-free for years, and isn't anticipated to become useful, ever. It fails the latest quicklisp metadata requirements. I'll delete it shortly.
label: non_test
text:
remove asdf contrib this package has been content free for years and isn t anticipated to become useful ever it fails the latest quicklisp metadata requirements i ll delete it shortly
binary_label: 0

---
Unnamed: 0: 17,732
id: 24,479,125,376
type: IssuesEvent
created_at: 2022-10-08 15:29:31
repo: jstedfast/MailKit
repo_url: https://api.github.com/repos/jstedfast/MailKit
action: closed
title: ImapProtocolException: Syntax error in BODYSTRUCTURE. Unexpected token: NIL" w/ Office365
labels: bug compatibility server-bug
body:
**Describe the bug**
I have an MVC5 application which is using IMAP to load messages from Office365. The code lists the `UniqueId` and `InternalDate` of the messages in the inbox, then uses `FetchAsync` to fetch the following properties for the loaded message IDs:
* `UniqueId`;
* `Flags`;
* `Envelope`;
* `PreviewText`;
* `BodyStructure`;
* The `X-Priority` header;
We have received a message which causes this method to throw a `TaskCancelledException`. This happens even if we pass in `CancellationToken.None`, so it's not our code that's cancelling the operation.
**Platform (please complete the following information):**
- OS: Windows (reproduced on 11 and Server 2019)
- .NET Runtime: .NET Framework
- .NET Framework: .NET 4.8
- MailKit Version: 3.3.0
**Exception**
```
System.Threading.Tasks.TaskCanceledException: A task was canceled.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MailKit.Net.Imap.ImapEngine.<ProcessUntaggedResponseAsync>d__189.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapEngine.cs:line 2234
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapCommand.<StepAsync>d__84.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapCommand.cs:line 915
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapEngine.<IterateAsync>d__190.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapEngine.cs:line 2345
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapEngine.<RunAsync>d__191.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapEngine.cs:line 2366
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapFolder.<FetchAsync>d__193.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapFolderFetch.cs:line 1048
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at (my code...)
```
**Protocol Logs**
> Connected to imaps://outlook.office365.com:993/
> S: * OK The Microsoft Exchange IMAP4 service is ready. [TABPADYAUAAxADIAMwBDAEEAMAAwADMANwAuAEcAQgBSAFAAMQAyADMALgBQAFIATwBEAC4ATwBVAFQATABPAE8ASwAuAEMATwBNAA==]
> C: B00000000 CAPABILITY
> S: * CAPABILITY IMAP4 IMAP4rev1 AUTH=PLAIN AUTH=XOAUTH2 SASL-IR UIDPLUS ID UNSELECT CHILDREN IDLE NAMESPACE LITERAL+
> S: B00000000 OK CAPABILITY completed.
> C: B00000001 AUTHENTICATE PLAIN ********
> S: B00000001 NO AUTHENTICATE failed.
> C: B00000002 LOGIN "********" "********"
> S: B00000002 OK LOGIN completed.
> C: B00000003 CAPABILITY
> S: * CAPABILITY IMAP4 IMAP4rev1 AUTH=PLAIN AUTH=XOAUTH2 SASL-IR UIDPLUS MOVE ID UNSELECT CLIENTACCESSRULES CLIENTNETWORKPRESENCELOCATION BACKENDAUTHENTICATE CHILDREN IDLE NAMESPACE LITERAL+
> S: B00000003 OK CAPABILITY completed.
> C: B00000004 NAMESPACE
> S: * NAMESPACE (("" "/")) NIL NIL
> S: B00000004 OK NAMESPACE completed.
> C: B00000005 LIST "" "INBOX"
> S: * LIST (\Marked \HasChildren) "/" INBOX
> S: B00000005 OK LIST completed.
> C: B00000006 LIST "" INBOX/Ignore
> S: * LIST (\Marked \HasNoChildren) "/" INBOX/Ignore
> S: B00000006 OK LIST completed.
> C: B00000007 EXAMINE INBOX/Ignore
> S: * 1 EXISTS
> S: * 1 RECENT
> S: * FLAGS (\Seen \Answered \Flagged \Deleted \Draft $MDNSent)
> S: * OK [PERMANENTFLAGS ()] Permanent flags
> S: * OK [UIDVALIDITY 4340] UIDVALIDITY value
> S: * OK [UIDNEXT 3] The next unique identifier value
> S: B00000007 OK [READ-ONLY] EXAMINE completed.
> C: B00000008 UID FETCH 2 (UID FLAGS ENVELOPE BODYSTRUCTURE BODY.PEEK[HEADER.FIELDS (X-PRIORITY)])
> S: * 1 FETCH (UID 2 FLAGS (\Seen \Recent) ENVELOPE ("Tue, 2 Aug 2022 15:01:06 +0000" "Returned mail: see transcript for details" (("Mail Delivery Subsystem" NIL "MAILER-DAEMON" "hermes.gatewaynet.com")) NIL NIL ((NIL NIL "_MAILBOX_" "_OUR-DOMAIN_")) NIL NIL NIL "<202208021501.272F16D4031920@_HERMES-GATEWAYNET-COM_>") BODYSTRUCTURE ((NIL NIL NIL NIL NIL "7BIT" 563 NIL NIL NIL NIL)("message" "delivery-status" NIL NIL NIL "7BIT" 658 NIL NIL NIL NIL)("message" "rfc822" NIL NIL NIL "8bit" 0 ("Tue, 2 Aug 2022 15:00:47 +0000" "[POSSIBLE SPAM 11.4] Invoices now overdue - 115365#" ((NIL NIL "_MAILBOX_" "_OUR-DOMAIN_")) NIL NIL ((NIL NIL "accounts" "_OTHER-DOMAIN_") (NIL NIL "safety" "_OTHER-DOMAIN_") (NIL NIL "_USER_" "_OUR-DOMAIN_")) NIL NIL NIL "<1IOGPFNLIHU4.377MHPZYJQ6E3@_OUR-SERVER_>") ((("text" "plain" ("charset" "utf-8") NIL NIL "8bit" 597 16 NIL NIL NIL NIL)(("text" "html" ("charset" "utf-8") NIL NIL "7BIT" 1611 26 NIL NIL NIL NIL)("image" "png" ("name" "0.dat") "<1KWGPFNLIHU4.4RR7HCVM8MQQ1@_OUR-SERVER_>" NIL "base64" 14172 NIL ("inline" ("filename" "0.dat")) NIL "0.dat")("image" "png" ("name" "1.dat") "<1KWGPFNLIHU4.UWJ8R86RE2KA2@_OUR-SERVER_>" NIL "base64" 486 NIL ("inline" ("filename" "1.dat")) NIL "1.dat")("image" "png" ("name" "2.dat") "<1KWGPFNLIHU4.EC7HN124OJC32@_OUR-SERVER_>" NIL "base64" 506 NIL ("inline" ("filename" "2.dat")) NIL "2.dat")("image" "png" ("name" "3.dat") "<1KWGPFNLIHU4.WM1ALJTG745F1@_OUR-SERVER_>" NIL "base64" 616 NIL ("inline" ("filename" "3.dat")) NIL "3.dat")("image" "png" ("name" "4.dat") "<1KWGPFNLIHU4.1B42S5EVSF4B2@_OUR-SERVER_>" NIL "base64" 22470 NIL ("inline" ("filename" "4.dat")) NIL "4.dat") "related" ("boundary" "=-5nEE2FIlRoeXkJyZAHV8UA==" "type" "text/html") NIL NIL) "alternative" ("boundary" "=-1sRjeMizXVbc5nGIFXbARA==") NIL NIL)("application" "pdf" ("name" "Reminder.pdf") "<RJ2DSFNLIHU4.UUVSNNY5Z3ER@_OUR-SERVER_>" NIL "base64" 359650 NIL ("attachment" ("filename" "Reminder.pdf" "size" "262820")) NIL NIL) "mixed" ("boundary" 
"=-EJwVTfPtacyNnTqY4DPQ0A==") NIL NIL) 0 NIL NIL NIL NIL) "report" ("report-type" "delivery-status" "boundary" "272F16D4031920.1659452466/hermes.gatewaynet.com") NIL NIL) BODY[HEADER.FIELDS (X-PRIORITY)] {2}
> S:
> S: )
**Additional context**
After moving the affected message out of the folder, the `FetchAsync` command works as expected.
Using `GetMessageAsync`, I am able to load the affected message without any problems.
index: True
text_combine:
ImapProtocolException: Syntax error in BODYSTRUCTURE. Unexpected token: NIL" w/ Office365 - **Describe the bug**
I have an MVC5 application which is using IMAP to load messages from Office365. The code lists the `UniqueId` and `InternalDate` of the messages in the inbox, then uses `FetchAsync` to fetch the following properties for the loaded message IDs:
* `UniqueId`;
* `Flags`;
* `Envelope`;
* `PreviewText`;
* `BodyStructure`;
* The `X-Priority` header;
We have received a message which causes this method to throw a `TaskCancelledException`. This happens even if we pass in `CancellationToken.None`, so it's not our code that's cancelling the operation.
**Platform (please complete the following information):**
- OS: Windows (reproduced on 11 and Server 2019)
- .NET Runtime: .NET Framework
- .NET Framework: .NET 4.8
- MailKit Version: 3.3.0
**Exception**
```
System.Threading.Tasks.TaskCanceledException: A task was canceled.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MailKit.Net.Imap.ImapEngine.<ProcessUntaggedResponseAsync>d__189.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapEngine.cs:line 2234
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapCommand.<StepAsync>d__84.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapCommand.cs:line 915
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapEngine.<IterateAsync>d__190.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapEngine.cs:line 2345
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapEngine.<RunAsync>d__191.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapEngine.cs:line 2366
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at MailKit.Net.Imap.ImapFolder.<FetchAsync>d__193.MoveNext() in D:\src\MailKit\MailKit\Net\Imap\ImapFolderFetch.cs:line 1048
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at (my code...)
```
**Protocol Logs**
> Connected to imaps://outlook.office365.com:993/
> S: * OK The Microsoft Exchange IMAP4 service is ready. [TABPADYAUAAxADIAMwBDAEEAMAAwADMANwAuAEcAQgBSAFAAMQAyADMALgBQAFIATwBEAC4ATwBVAFQATABPAE8ASwAuAEMATwBNAA==]
> C: B00000000 CAPABILITY
> S: * CAPABILITY IMAP4 IMAP4rev1 AUTH=PLAIN AUTH=XOAUTH2 SASL-IR UIDPLUS ID UNSELECT CHILDREN IDLE NAMESPACE LITERAL+
> S: B00000000 OK CAPABILITY completed.
> C: B00000001 AUTHENTICATE PLAIN ********
> S: B00000001 NO AUTHENTICATE failed.
> C: B00000002 LOGIN "********" "********"
> S: B00000002 OK LOGIN completed.
> C: B00000003 CAPABILITY
> S: * CAPABILITY IMAP4 IMAP4rev1 AUTH=PLAIN AUTH=XOAUTH2 SASL-IR UIDPLUS MOVE ID UNSELECT CLIENTACCESSRULES CLIENTNETWORKPRESENCELOCATION BACKENDAUTHENTICATE CHILDREN IDLE NAMESPACE LITERAL+
> S: B00000003 OK CAPABILITY completed.
> C: B00000004 NAMESPACE
> S: * NAMESPACE (("" "/")) NIL NIL
> S: B00000004 OK NAMESPACE completed.
> C: B00000005 LIST "" "INBOX"
> S: * LIST (\Marked \HasChildren) "/" INBOX
> S: B00000005 OK LIST completed.
> C: B00000006 LIST "" INBOX/Ignore
> S: * LIST (\Marked \HasNoChildren) "/" INBOX/Ignore
> S: B00000006 OK LIST completed.
> C: B00000007 EXAMINE INBOX/Ignore
> S: * 1 EXISTS
> S: * 1 RECENT
> S: * FLAGS (\Seen \Answered \Flagged \Deleted \Draft $MDNSent)
> S: * OK [PERMANENTFLAGS ()] Permanent flags
> S: * OK [UIDVALIDITY 4340] UIDVALIDITY value
> S: * OK [UIDNEXT 3] The next unique identifier value
> S: B00000007 OK [READ-ONLY] EXAMINE completed.
> C: B00000008 UID FETCH 2 (UID FLAGS ENVELOPE BODYSTRUCTURE BODY.PEEK[HEADER.FIELDS (X-PRIORITY)])
> S: * 1 FETCH (UID 2 FLAGS (\Seen \Recent) ENVELOPE ("Tue, 2 Aug 2022 15:01:06 +0000" "Returned mail: see transcript for details" (("Mail Delivery Subsystem" NIL "MAILER-DAEMON" "hermes.gatewaynet.com")) NIL NIL ((NIL NIL "_MAILBOX_" "_OUR-DOMAIN_")) NIL NIL NIL "<202208021501.272F16D4031920@_HERMES-GATEWAYNET-COM_>") BODYSTRUCTURE ((NIL NIL NIL NIL NIL "7BIT" 563 NIL NIL NIL NIL)("message" "delivery-status" NIL NIL NIL "7BIT" 658 NIL NIL NIL NIL)("message" "rfc822" NIL NIL NIL "8bit" 0 ("Tue, 2 Aug 2022 15:00:47 +0000" "[POSSIBLE SPAM 11.4] Invoices now overdue - 115365#" ((NIL NIL "_MAILBOX_" "_OUR-DOMAIN_")) NIL NIL ((NIL NIL "accounts" "_OTHER-DOMAIN_") (NIL NIL "safety" "_OTHER-DOMAIN_") (NIL NIL "_USER_" "_OUR-DOMAIN_")) NIL NIL NIL "<1IOGPFNLIHU4.377MHPZYJQ6E3@_OUR-SERVER_>") ((("text" "plain" ("charset" "utf-8") NIL NIL "8bit" 597 16 NIL NIL NIL NIL)(("text" "html" ("charset" "utf-8") NIL NIL "7BIT" 1611 26 NIL NIL NIL NIL)("image" "png" ("name" "0.dat") "<1KWGPFNLIHU4.4RR7HCVM8MQQ1@_OUR-SERVER_>" NIL "base64" 14172 NIL ("inline" ("filename" "0.dat")) NIL "0.dat")("image" "png" ("name" "1.dat") "<1KWGPFNLIHU4.UWJ8R86RE2KA2@_OUR-SERVER_>" NIL "base64" 486 NIL ("inline" ("filename" "1.dat")) NIL "1.dat")("image" "png" ("name" "2.dat") "<1KWGPFNLIHU4.EC7HN124OJC32@_OUR-SERVER_>" NIL "base64" 506 NIL ("inline" ("filename" "2.dat")) NIL "2.dat")("image" "png" ("name" "3.dat") "<1KWGPFNLIHU4.WM1ALJTG745F1@_OUR-SERVER_>" NIL "base64" 616 NIL ("inline" ("filename" "3.dat")) NIL "3.dat")("image" "png" ("name" "4.dat") "<1KWGPFNLIHU4.1B42S5EVSF4B2@_OUR-SERVER_>" NIL "base64" 22470 NIL ("inline" ("filename" "4.dat")) NIL "4.dat") "related" ("boundary" "=-5nEE2FIlRoeXkJyZAHV8UA==" "type" "text/html") NIL NIL) "alternative" ("boundary" "=-1sRjeMizXVbc5nGIFXbARA==") NIL NIL)("application" "pdf" ("name" "Reminder.pdf") "<RJ2DSFNLIHU4.UUVSNNY5Z3ER@_OUR-SERVER_>" NIL "base64" 359650 NIL ("attachment" ("filename" "Reminder.pdf" "size" "262820")) NIL NIL) "mixed" ("boundary" 
"=-EJwVTfPtacyNnTqY4DPQ0A==") NIL NIL) 0 NIL NIL NIL NIL) "report" ("report-type" "delivery-status" "boundary" "272F16D4031920.1659452466/hermes.gatewaynet.com") NIL NIL) BODY[HEADER.FIELDS (X-PRIORITY)] {2}
> S:
> S: )
**Additional context**
After moving the affected message out of the folder, the `FetchAsync` command works as expected.
Using `GetMessageAsync`, I am able to load the affected message without any problems.
label: non_test
text:
imapprotocolexception syntax error in bodystructure unexpected token nil w describe the bug i have an application which is using imap to load messages from the code lists the uniqueid and internaldate of the messages in the inbox then uses fetchasync to fetch the following properties for the loaded message ids uniqueid flags envelope previewtext bodystructure the x priority header we have received a message which causes this method to throw a taskcancelledexception this happens even if we pass in cancellationtoken none so it s not our code that s cancelling the operation platform please complete the following information os windows reproduced on and server net runtime net framework net framework net mailkit version exception system threading tasks taskcanceledexception a task was canceled at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at mailkit net imap imapengine d movenext in d src mailkit mailkit net imap imapengine cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at mailkit net imap imapcommand d movenext in d src mailkit mailkit net imap imapcommand cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at mailkit net imap imapengine d movenext in d src mailkit mailkit net imap imapengine cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at mailkit net imap imapengine d movenext in d src mailkit mailkit net imap imapengine cs line end of stack 
trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at mailkit net imap imapfolder d movenext in d src mailkit mailkit net imap imapfolderfetch cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices taskawaiter getresult at my code protocol logs connected to imaps outlook com s ok the microsoft exchange service is ready c capability s capability auth plain auth sasl ir uidplus id unselect children idle namespace literal s ok capability completed c authenticate plain s no authenticate failed c login s ok login completed c capability s capability auth plain auth sasl ir uidplus move id unselect clientaccessrules clientnetworkpresencelocation backendauthenticate children idle namespace literal s ok capability completed c namespace s namespace nil nil s ok namespace completed c list inbox s list marked haschildren inbox s ok list completed c list inbox ignore s list marked hasnochildren inbox ignore s ok list completed c examine inbox ignore s exists s recent s flags seen answered flagged deleted draft mdnsent s ok permanent flags s ok uidvalidity value s ok the next unique identifier value s ok examine completed c uid fetch uid flags envelope bodystructure body peek s fetch uid flags seen recent envelope tue aug returned mail see transcript for details mail delivery subsystem nil mailer daemon hermes gatewaynet com nil nil nil nil mailbox our domain nil nil nil bodystructure nil nil nil nil nil nil nil nil nil message delivery status nil nil nil nil nil nil nil message nil nil nil tue aug invoices now overdue nil nil mailbox our domain 
nil nil nil nil accounts other domain nil nil safety other domain nil nil user our domain nil nil nil text plain charset utf nil nil nil nil nil nil text html charset utf nil nil nil nil nil nil image png name dat nil nil inline filename dat nil dat image png name dat nil nil inline filename dat nil dat image png name dat nil nil inline filename dat nil dat image png name dat nil nil inline filename dat nil dat image png name dat nil nil inline filename dat nil dat related boundary type text html nil nil alternative boundary nil nil application pdf name reminder pdf nil nil attachment filename reminder pdf size nil nil mixed boundary nil nil nil nil nil nil report report type delivery status boundary hermes gatewaynet com nil nil body s s additional context after moving the affected message out of the folder the fetchasync command works as expected using getmessageasync i am able to load the affected message without any problems
binary_label: 0

---
Unnamed: 0: 99,112
id: 8,690,888,709
type: IssuesEvent
created_at: 2018-12-03 23:00:38
repo: GoogleChromeLabs/ProjectVisBug
repo_url: https://api.github.com/repos/GoogleChromeLabs/ProjectVisBug
action: opened
title: Visual inheritance and layer inspect, aka show dom with 3D perspective
labels: ⚡️ feature 🔎 needs tested
body:

there is a rad feature of the devtools called layers, and i want it for designers and i want in right there with the live dom! i want tab navigation to keep working and all the tools. i want to change properties of a DOM that's rotated in 3D space and spread out so i can understand which layer is in control of what.
i suspect LOE on this to be interesting minimal. some UX considerations are:
1. does the feature do the whole page or just the selected element(s)?
2. should we animate going into and out from that state?
3. put the whole feature behind a hotkey? it sounds utilitarian enough that i may want to quick flip it on/off
inheritance inspection
layer inspection
index: 1.0
text_combine:
Visual inheritance and layer inspect, aka show dom with 3D perspective - 
there is a rad feature of the devtools called layers, and i want it for designers and i want in right there with the live dom! i want tab navigation to keep working and all the tools. i want to change properties of a DOM that's rotated in 3D space and spread out so i can understand which layer is in control of what.
i suspect LOE on this to be interesting minimal. some UX considerations are:
1. does the feature do the whole page or just the selected element(s)?
2. should we animate going into and out from that state?
3. put the whole feature behind a hotkey? it sounds utilitarian enough that i may want to quick flip it on/off
inheritance inspection
layer inspection
label: test
text:
visual inheritance and layer inspect aka show dom with perspective there is a rad feature of the devtools called layers and i want it for designers and i want in right there with the live dom i want tab navigation to keep working and all the tools i want to change properties of a dom that s rotated in space and spread out so i can understand which layer is in control of what i suspect loe on this to be interesting minimal some ux considerations are does the feature do the whole page or just the selected element s should we animate going into and out from that state put the whole feature behind a hotkey it sounds utilitarian enough that i may want to quick flip it on off inheritance inspection layer inspection
binary_label: 1

---
Unnamed: 0: 134,130
id: 29,833,498,568
type: IssuesEvent
created_at: 2023-06-18 14:57:41
repo: WordPress/openverse
repo_url: https://api.github.com/repos/WordPress/openverse
action: closed
title: Add timeout to image type `head` requests in `photon.check_image_type`
labels: 🟧 priority: high 🛠 goal: fix 💻 aspect: code 🧱 stack: api
body:
## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
`photon.check_image_type` runs in the same request lifecycle as the already long-running thumbnails endpoint. Without a timeout, this head request, for slower providers, could be the source of the thumbnail endpoint causing workers to timeout.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
As of right now there is no way to reproduce this behaviour, as far as I know. cc @zackkrida who originally identified the need for the timeout.
## Fix
Add `timeout=10` to the call to `request.head`. This is long, but gunicorn times workers out after 30 seconds. Combined with the timeout in `photon.get` at 15 seconds, this leaves 5 seconds for our own Django request handling. That's a lot of time, but I'm not sure how close we can get to the 30 seconds max (which we do not want to increase for our entire API) when we'd rather the request just terminate if it's taking that long anyway.
index: 1.0
text_combine:
Add timeout to image type `head` requests in `photon.check_image_type` - ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
`photon.check_image_type` runs in the same request lifecycle as the already long-running thumbnails endpoint. Without a timeout, this head request, for slower providers, could be the source of the thumbnail endpoint causing workers to timeout.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
As of right now there is no way to reproduce this behaviour, as far as I know. cc @zackkrida who originally identified the need for the timeout.
## Fix
Add `timeout=10` to the call to `request.head`. This is long, but gunicorn times workers out after 30 seconds. Combined with the timeout in `photon.get` at 15 seconds, this leaves 5 seconds for our own Django request handling. That's a lot of time, but I'm not sure how close we can get to the 30 seconds max (which we do not want to increase for our entire API) when we'd rather the request just terminate if it's taking that long anyway.
label: non_test
text:
add timeout to image type head requests in photon check image type description photon check image type runs in the same request lifecycle as the already long running thumbnails endpoint without a timeout this head request for slower providers could be the source of the thumbnail endpoint causing workers to timeout reproduction as of right now there is no way to reproduce this behaviour as far as i know cc zackkrida who originally identified the need for the timeout fix add timeout to the call to request head this is long but gunicorn times workers out after seconds combined with the timeout in photon get at seconds this leaves seconds for our own django request handling that s a lot of time but i m not sure how close we can get to the seconds max which we do not want to increase for our entire api when we d rather the request just terminate if it s taking that long anyway
binary_label: 0

---
Unnamed: 0: 432,062
id: 12,488,710,829
type: IssuesEvent
created_at: 2020-05-31 15:28:23
repo: joehot200/AntiAura
repo_url: https://api.github.com/repos/joehot200/AntiAura
action: closed
title: Improve config.yml
labels: high-priority issue
body:
Remove the conflicts in Criticals and FastEat between config.yml and config_advanced.yml
index: 1.0
text_combine:
Improve config.yml - Remove the conflicts in Criticals and FastEat between config.yml and config_advanced.yml
label: non_test
text:
improve config yml remove the conflicts in criticals and fasteat between config yml and config advanced yml
binary_label: 0

---
Unnamed: 0: 36,799
id: 5,087,869,339
type: IssuesEvent
created_at: 2016-12-31 10:59:57
repo: TechnionYP5777/SmartCity-Market
repo_url: https://api.github.com/repos/TechnionYP5777/SmartCity-Market
action: opened
title: Develop new Worker unit test
labels: UnitTest Worker
body:
Due to Worker class expansion, need to write some new unit tests for methods:
- [ ] addProductToWarehouse
- [ ] placeProductPackageOnShelves
- [ ] removeProductPackageFromStore
- [ ] getProductPackageAmount
- [ ] getWorkerLoginDetails
* open new issue for each one of them.
|
1.0
|
Develop new Worker unit test - Due to Worker class expansion, need to write some new unit tests for methods:
- [ ] addProductToWarehouse
- [ ] placeProductPackageOnShelves
- [ ] removeProductPackageFromStore
- [ ] getProductPackageAmount
- [ ] getWorkerLoginDetails
* open new issue for each one of them.
|
test
|
develop new worker unit test due to worker class expansion need to write some new unit tests for methods addproducttowarehouse placeproductpackageonshelves removeproductpackagefromstore getproductpackageamount getworkerlogindetails open new issue for each one of them
| 1
|
464,787
| 13,340,094,033
|
IssuesEvent
|
2020-08-28 13:55:51
|
ChainSafe/forest
|
https://api.github.com/repos/ChainSafe/forest
|
closed
|
Add benchmarks to AMT/HAMT
|
IPLD Priority: 4 - Low Type: Maintenance good first issue
|
**Issue summary**
<!-- A clear and concise description of what the task is. -->
AMT implementation is coming in with #197 but I did not include benchmarks. I will most likely do this along with benchmarking HAMT, but anyone could pick this up if they would like.
Most important one to benchmark is the batch set function, or the new from slice as this is the primary use and need for it in the code as of now.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 -->
|
1.0
|
Add benchmarks to AMT/HAMT - **Issue summary**
<!-- A clear and concise description of what the task is. -->
AMT implementation is coming in with #197 but I did not include benchmarks. I will most likely do this along with benchmarking HAMT, but anyone could pick this up if they would like.
Most important one to benchmark is the batch set function, or the new from slice as this is the primary use and need for it in the code as of now.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 -->
|
non_test
|
add benchmarks to amt hamt issue summary amt implementation is coming in with but i did not include benchmarks i will most likely do this along with benchmarking hamt but anyone could pick this up if they would like most important one to benchmark is the batch set function or the new from slice as this is the primary use and need for it in the code as of now other information and links
| 0
|
319,277
| 27,361,858,565
|
IssuesEvent
|
2023-02-27 16:21:19
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] ReadinessClusterIT testReadyAfterCorrectFileSettings failing
|
:Core/Infra/Core >test-failure Team:Core/Infra
|
**Build scan:**
https://gradle-enterprise.elastic.co/s/pz33y5ym4lq44/tests/:server:internalClusterTest/org.elasticsearch.readiness.ReadinessClusterIT/testReadyAfterCorrectFileSettings
**Reproduction line:**
```
gradlew ':server:internalClusterTest' --tests "org.elasticsearch.readiness.ReadinessClusterIT.testReadyAfterCorrectFileSettings" -Dtests.seed=FFB71983A07E7720 -Dtests.locale=it-IT -Dtests.timezone=Indian/Maldives -Druntime.java=19
```
**Applicable branches:**
8.6
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.readiness.ReadinessClusterIT&tests.test=testReadyAfterCorrectFileSettings
**Failure excerpt:**
```
java.lang.NullPointerException: Cannot invoke "org.elasticsearch.common.transport.BoundTransportAddress.publishAddress()" because the return value of "org.elasticsearch.readiness.ReadinessService.boundAddress()" is null
at __randomizedtesting.SeedInfo.seed([FFB71983A07E7720:2E556980FF8889E8]:0)
at org.elasticsearch.test.readiness.ReadinessClientProbe.tcpReadinessProbeTrue(ReadinessClientProbe.java:37)
at org.elasticsearch.readiness.ReadinessClusterIT.testReadyAfterCorrectFileSettings(ReadinessClusterIT.java:327)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1589)
```
|
1.0
|
[CI] ReadinessClusterIT testReadyAfterCorrectFileSettings failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/pz33y5ym4lq44/tests/:server:internalClusterTest/org.elasticsearch.readiness.ReadinessClusterIT/testReadyAfterCorrectFileSettings
**Reproduction line:**
```
gradlew ':server:internalClusterTest' --tests "org.elasticsearch.readiness.ReadinessClusterIT.testReadyAfterCorrectFileSettings" -Dtests.seed=FFB71983A07E7720 -Dtests.locale=it-IT -Dtests.timezone=Indian/Maldives -Druntime.java=19
```
**Applicable branches:**
8.6
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.readiness.ReadinessClusterIT&tests.test=testReadyAfterCorrectFileSettings
**Failure excerpt:**
```
java.lang.NullPointerException: Cannot invoke "org.elasticsearch.common.transport.BoundTransportAddress.publishAddress()" because the return value of "org.elasticsearch.readiness.ReadinessService.boundAddress()" is null
at __randomizedtesting.SeedInfo.seed([FFB71983A07E7720:2E556980FF8889E8]:0)
at org.elasticsearch.test.readiness.ReadinessClientProbe.tcpReadinessProbeTrue(ReadinessClientProbe.java:37)
at org.elasticsearch.readiness.ReadinessClusterIT.testReadyAfterCorrectFileSettings(ReadinessClusterIT.java:327)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1589)
```
|
test
|
readinessclusterit testreadyaftercorrectfilesettings failing build scan reproduction line gradlew server internalclustertest tests org elasticsearch readiness readinessclusterit testreadyaftercorrectfilesettings dtests seed dtests locale it it dtests timezone indian maldives druntime java applicable branches reproduces locally didn t try failure history failure excerpt java lang nullpointerexception cannot invoke org elasticsearch common transport boundtransportaddress publishaddress because the return value of org elasticsearch readiness readinessservice boundaddress is null at randomizedtesting seedinfo seed at org elasticsearch test readiness readinessclientprobe tcpreadinessprobetrue readinessclientprobe java at org elasticsearch readiness readinessclusterit testreadyaftercorrectfilesettings readinessclusterit java at jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com 
carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner 
run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 1
|
21,288
| 3,881,945,866
|
IssuesEvent
|
2016-04-13 07:50:21
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Cluster fails to come up with login issues
|
kind/flake team/test-infra
|
Just an intermittent flake. Subsequent runs succeeded:
```
:03
09:51:03 to select an already authenticated account to use.
09:51:05 ERROR: (gcloud.compute.instances.list) You do not currently have an active account selected.
09:51:05 Please run:
09:51:05
09:51:05 $ gcloud auth login
09:51:05
09:51:05 to obtain new credentials, or if you have already logged in with a
09:51:05 different account:
09:51:05
09:51:05 $ gcloud config set account ACCOUNT
09:51:05
09:51:05 to select an already authenticated account to use.
09:51:06 ERROR: (gcloud.compute.routes.list) You do not currently have an active account selected.
09:51:06 Please run:
09:51:06
09:51:06 $ gcloud auth login
09:51:06
09:51:06 to obtain new credentials, or if you have already logged in with a
09:51:06 different account:
09:51:06
09:51:06 $ gcloud config set account ACCOUNT
09:51:06
```
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-ingress-release-1.2/203
Passed in http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-ingress-release-1.2/204
|
1.0
|
Cluster fails to come up with login issues - Just an intermittent flake. Subsequent runs succeeded:
```
:03
09:51:03 to select an already authenticated account to use.
09:51:05 ERROR: (gcloud.compute.instances.list) You do not currently have an active account selected.
09:51:05 Please run:
09:51:05
09:51:05 $ gcloud auth login
09:51:05
09:51:05 to obtain new credentials, or if you have already logged in with a
09:51:05 different account:
09:51:05
09:51:05 $ gcloud config set account ACCOUNT
09:51:05
09:51:05 to select an already authenticated account to use.
09:51:06 ERROR: (gcloud.compute.routes.list) You do not currently have an active account selected.
09:51:06 Please run:
09:51:06
09:51:06 $ gcloud auth login
09:51:06
09:51:06 to obtain new credentials, or if you have already logged in with a
09:51:06 different account:
09:51:06
09:51:06 $ gcloud config set account ACCOUNT
09:51:06
```
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-ingress-release-1.2/203
Passed in http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-ingress-release-1.2/204
|
test
|
cluster fails to come up with login issues just an intermittent flake subsequent runs succeeded to select an already authenticated account to use error gcloud compute instances list you do not currently have an active account selected please run gcloud auth login to obtain new credentials or if you have already logged in with a different account gcloud config set account account to select an already authenticated account to use error gcloud compute routes list you do not currently have an active account selected please run gcloud auth login to obtain new credentials or if you have already logged in with a different account gcloud config set account account passed in
| 1
|
160,448
| 25,165,356,632
|
IssuesEvent
|
2022-11-10 20:19:09
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Design] My VA Audit UX Improvements: Update LOA1 State Designs
|
design my-va-dashboard authenticated-experience my-va-audit
|
## Background
During Midpoint Review we received feedback around our design of the LOA1 state on My VA. Feedback is documented [here](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/identity-personalization/my-va/2022-audit/MidpointReview.md). This ticket is to update the designs for LOA1 based on content feedback.
Feedback on this topic can be found in these tickets:
* #48624 -- Must feedback
* #48655 -- Must feedback
## Tasks
- [x] Use em dash instead of plain dash in this sentence “We need to make sure you're you—and not someone pretending to be you—before we can give you access to your personal and health-related information.” (longer dash, this – vs - )
- [x] Update Sketch files and FE Documentation
- [ ] PM to check FE tickets and update accordingly
## Acceptance Criteria
- [ ] New designs for LOA1 State are complete
|
1.0
|
[Design] My VA Audit UX Improvements: Update LOA1 State Designs - ## Background
During Midpoint Review we received feedback around our design of the LOA1 state on My VA. Feedback is documented [here](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/identity-personalization/my-va/2022-audit/MidpointReview.md). This ticket is to update the designs for LOA1 based on content feedback.
Feedback on this topic can be found in these tickets:
* #48624 -- Must feedback
* #48655 -- Must feedback
## Tasks
- [x] Use em dash instead of plain dash in this sentence “We need to make sure you're you—and not someone pretending to be you—before we can give you access to your personal and health-related information.” (longer dash, this – vs - )
- [x] Update Sketch files and FE Documentation
- [ ] PM to check FE tickets and update accordingly
## Acceptance Criteria
- [ ] New designs for LOA1 State are complete
|
non_test
|
my va audit ux improvements update state designs background during midpoint review we received feedback around our design of the state on my va feedback is documented this ticket is to update the designs for based on content feedback feedback on this topic can be found in these tickets must feedback must feedback tasks use em dash instead of plain dash in this sentence “we need to make sure you re you—and not someone pretending to be you—before we can give you access to your personal and health related information ” longer dash this – vs update sketch files and fe documentation pm to check fe tickets and update accordingly acceptance criteria new designs for state are complete
| 0
|
110,193
| 9,438,824,858
|
IssuesEvent
|
2019-04-14 04:13:47
|
nearprotocol/nearcore
|
https://api.github.com/repos/nearprotocol/nearcore
|
closed
|
Merge Runtime User into
|
enhancement housekeeping testing
|
All cases that we test in `runtime/src/lib.rs` and `runtime/src/system.rs` should be also executed on DevNet, TestNet and other fixtures. We already have `Node` and `NodeUser` that allow us to run tests on different fixtures of TestNet, we should execute the above mentioned cases from `runtime` on other fixtures like TestNet too. This would require merging `User` from https://github.com/nearprotocol/nearcore/blob/4a20e8507b440cee591c0d3cbefa60f5417d292b/node/runtime/src/test_utils.rs#L131 into `NoderUser` and converting `runtime` tests to use it.
|
1.0
|
Merge Runtime User into - All cases that we test in `runtime/src/lib.rs` and `runtime/src/system.rs` should be also executed on DevNet, TestNet and other fixtures. We already have `Node` and `NodeUser` that allow us to run tests on different fixtures of TestNet, we should execute the above mentioned cases from `runtime` on other fixtures like TestNet too. This would require merging `User` from https://github.com/nearprotocol/nearcore/blob/4a20e8507b440cee591c0d3cbefa60f5417d292b/node/runtime/src/test_utils.rs#L131 into `NoderUser` and converting `runtime` tests to use it.
|
test
|
merge runtime user into all cases that we test in runtime src lib rs and runtime src system rs should be also executed on devnet testnet and other fixtures we already have node and nodeuser that allow us to run tests on different fixtures of testnet we should execute the above mentioned cases from runtime on other fixtures like testnet too this would require merging user from into noderuser and converting runtime tests to use it
| 1
|
118,371
| 4,738,257,315
|
IssuesEvent
|
2016-10-20 03:10:15
|
vidalborromeo/eecshelp-beta
|
https://api.github.com/repos/vidalborromeo/eecshelp-beta
|
opened
|
Entry Request Form: Refresh validators instead of closing form on remote update
|
Accepted Enhancement Priority: Low
|
v3.39
First reported by @eraz in #8.
> Updating the queue settings in any manner (pressing the save button) immediately closes any new request forms in other sessions (forcing the user to start over). If a user was editing a request, the form will disappear and leave the page in an non-interactive state, forcing the user to refresh the page.
|
1.0
|
Entry Request Form: Refresh validators instead of closing form on remote update - v3.39
First reported by @eraz in #8.
> Updating the queue settings in any manner (pressing the save button) immediately closes any new request forms in other sessions (forcing the user to start over). If a user was editing a request, the form will disappear and leave the page in an non-interactive state, forcing the user to refresh the page.
|
non_test
|
entry request form refresh validators instead of closing form on remote update first reported by eraz in updating the queue settings in any manner pressing the save button immediately closes any new request forms in other sessions forcing the user to start over if a user was editing a request the form will disappear and leave the page in an non interactive state forcing the user to refresh the page
| 0
|
549,319
| 16,090,453,913
|
IssuesEvent
|
2021-04-26 16:03:35
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.filmeonline.biz - see bug description
|
browser-firefox engine-gecko priority-normal
|
<!-- @browser: Firefox Nightly 90.0a1 (2021-04-21) (64-bit) -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36 -->
<!-- @reported_with: unknown -->
**URL**: https://www.filmeonline.biz/
**Browser / Version**: Firefox Nightly 90.0a1 (2021-04-21) (64-bit)
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Scroll to top button not present in Nightly
**Steps to Reproduce**:
1. Open https://www.filmeonline.biz/
2. Scroll down
Actual behavior:
The scroll to top button is not present in the lower left button.
Expected behavior:
The scroll to top button should be present in the lower left button.
Note: #SoftvisionTP3
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/4/44def3cf-2285-4d57-bc84-1dda9c1d75b6.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.filmeonline.biz - see bug description - <!-- @browser: Firefox Nightly 90.0a1 (2021-04-21) (64-bit) -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36 -->
<!-- @reported_with: unknown -->
**URL**: https://www.filmeonline.biz/
**Browser / Version**: Firefox Nightly 90.0a1 (2021-04-21) (64-bit)
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Scroll to top button not present in Nightly
**Steps to Reproduce**:
1. Open https://www.filmeonline.biz/
2. Scroll down
Actual behavior:
The scroll to top button is not present in the lower left button.
Expected behavior:
The scroll to top button should be present in the lower left button.
Note: #SoftvisionTP3
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/4/44def3cf-2285-4d57-bc84-1dda9c1d75b6.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
see bug description url browser version firefox nightly bit operating system windows tested another browser yes chrome problem type something else description scroll to top button not present in nightly steps to reproduce open scroll down actual behavior the scroll to top button is not present in the lower left button expected behavior the scroll to top button should be present in the lower left button note view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
33,443
| 7,124,394,718
|
IssuesEvent
|
2018-01-19 18:43:13
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Unable to configure the session, setting session.save_handler failed (PHP 7.2)
|
Defect On hold http
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.5.10
* Platform and Target: Ubuntu 16.04, Apache 2.4, PHP 7.2 (7.2.1-1+ubuntu16.04.1+deb.sury.org+1)
### What you did
Updated php from 7.0.x to 7.2
### What happened

Found a similar issue here: https://github.com/symphonycms/symphony-2/issues/2783
The check in https://github.com/cakephp/cakephp/blob/83815ac4912656eb9e724e179bbe1dc6440e3359/src/Network/Session.php#L113 evaluates to false, so I don't quite know why this is happening, but removing line 166 from that file solves the issue
|
1.0
|
Unable to configure the session, setting session.save_handler failed (PHP 7.2) - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.5.10
* Platform and Target: Ubuntu 16.04, Apache 2.4, PHP 7.2 (7.2.1-1+ubuntu16.04.1+deb.sury.org+1)
### What you did
Updated php from 7.0.x to 7.2
### What happened

Found a similar issue here: https://github.com/symphonycms/symphony-2/issues/2783
The check in https://github.com/cakephp/cakephp/blob/83815ac4912656eb9e724e179bbe1dc6440e3359/src/Network/Session.php#L113 evaluates to false, so I don't quite know why this is happening, but removing line 166 from that file solves the issue
|
non_test
|
unable to configure the session setting session save handler failed php this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target ubuntu apache php deb sury org what you did updated php from x to what happened found a similar issue here the check in evaluates to false so i don t quite know why this is happening but removing line from that file solves the issue
| 0
|
36,072
| 6,515,863,021
|
IssuesEvent
|
2017-08-26 21:46:15
|
Zimmi48/bugzilla-test
|
https://api.github.com/repos/Zimmi48/bugzilla-test
|
closed
|
building doc: Error: none ... mention theories/Compat/Coq84.v
|
kind: documentation
|
Note: the issue was created automatically with bugzilla2github tool
# Bugzilla Bug ID: 4353
Date: 2015-10-02 18:06:02 +0200
From: Jonathan Leivent <<jonikelee@gmail.com>>
To:
CC: coq-bugs-redist@lists.gforge.inria.fr, @_silene
Last updated: 2015-10-02 21:33:39 +0200
## Bugzilla Comment ID: 10974
Date: 2015-10-02 18:06:02 +0200
From: Jonathan Leivent <<jonikelee@gmail.com>>
Got this error building doc in 8860362 (very recent 8.5):
...
rm -rf doc/refman/html
install -d doc/refman/html
install -m 644 doc/refman/coqide.png doc/refman/coqide-queries.png doc/refman/html
(cd doc/refman/html; hacha -nolinks -tocbis -o toc.html ../styles.hva ../Reference-Manual.html)
install -m 644 doc/refman/cover.html doc/refman/html/index.html
install -m 644 doc/common/styles/html/simple/*.css doc/refman/html
./doc/stdlib/make-library-index doc/stdlib/index-list.html doc/stdlib/hidden-files
Building file index-list.prehtml... Error: none of doc/stdlib/index-list.html and doc/stdlib/hidden-files mention theories/Compat/Coq84.v
make[1]: *** [doc/stdlib/index-list.html] Error 1
...
## Bugzilla Comment ID: 10976
Date: 2015-10-02 21:33:39 +0200
From: @_silene
Thanks for the report. Fixed by commit f41de34.
|
1.0
|
building doc: Error: none ... mention theories/Compat/Coq84.v - Note: the issue was created automatically with bugzilla2github tool
# Bugzilla Bug ID: 4353
Date: 2015-10-02 18:06:02 +0200
From: Jonathan Leivent <<jonikelee@gmail.com>>
To:
CC: coq-bugs-redist@lists.gforge.inria.fr, @_silene
Last updated: 2015-10-02 21:33:39 +0200
## Bugzilla Comment ID: 10974
Date: 2015-10-02 18:06:02 +0200
From: Jonathan Leivent <<jonikelee@gmail.com>>
Got this error building doc in 8860362 (very recent 8.5):
...
rm -rf doc/refman/html
install -d doc/refman/html
install -m 644 doc/refman/coqide.png doc/refman/coqide-queries.png doc/refman/html
(cd doc/refman/html; hacha -nolinks -tocbis -o toc.html ../styles.hva ../Reference-Manual.html)
install -m 644 doc/refman/cover.html doc/refman/html/index.html
install -m 644 doc/common/styles/html/simple/*.css doc/refman/html
./doc/stdlib/make-library-index doc/stdlib/index-list.html doc/stdlib/hidden-files
Building file index-list.prehtml... Error: none of doc/stdlib/index-list.html and doc/stdlib/hidden-files mention theories/Compat/Coq84.v
make[1]: *** [doc/stdlib/index-list.html] Error 1
...
## Bugzilla Comment ID: 10976
Date: 2015-10-02 21:33:39 +0200
From: @_silene
Thanks for the report. Fixed by commit f41de34.
|
non_test
|
building doc error none mention theories compat v note the issue was created automatically with tool bugzilla bug id date from jonathan leivent lt gt to cc coq bugs redist lists gforge inria fr silene last updated bugzilla comment id date from jonathan leivent lt gt got this error building doc in very recent rm rf doc refman html install d doc refman html install m doc refman coqide png doc refman coqide queries png doc refman html cd doc refman html hacha nolinks tocbis o toc html styles hva reference manual html install m doc refman cover html doc refman html index html install m doc common styles html simple css doc refman html doc stdlib make library index doc stdlib index list html doc stdlib hidden files building file index list prehtml error none of doc stdlib index list html and doc stdlib hidden files mention theories compat v make error bugzilla comment id date from silene thanks for the report fixed by commit
| 0
|
593,506
| 18,009,706,655
|
IssuesEvent
|
2021-09-16 07:05:16
|
buddyboss/buddyboss-platform
|
https://api.github.com/repos/buddyboss/buddyboss-platform
|
closed
|
BP activity shortcode like and comment option are not working
|
bug priority: low component: activity integration: missing bug: validated
|
**Describe the bug**
When using BP activity shortcode on some page and when members "liking" or "commenting" on the activity it forces the whole page to refresh and commenting doesn't work, it just refreshes the page.
https://wordpress.org/plugins/bp-activity-shortcode/
**To Reproduce**
Steps to reproduce the behavior:
1. Install BuddyBoss platform and BuddyBoss Theme
2. Install the BP activity shortcode plugin
3. Create a new page and add BP activity shortcode into the page and then try click ono like or try adding thee comment it will reload the pages and would not add any comment or like
**Expected behavior**
Like and Comment all the option should work
**Screenshots**
https://www.loom.com/share/15f2f6bf49d04fca944e30d3e612a869
**Support ticket links**
https://secure.helpscout.net/conversation/1112177882/15853/
|
1.0
|
BP activity shortcode like and comment option are not working - **Describe the bug**
When using BP activity shortcode on some page and when members "liking" or "commenting" on the activity it forces the whole page to refresh and commenting doesn't work, it just refreshes the page.
https://wordpress.org/plugins/bp-activity-shortcode/
**To Reproduce**
Steps to reproduce the behavior:
1. Install BuddyBoss platform and BuddyBoss Theme
2. Install the BP activity shortcode plugin
3. Create a new page and add BP activity shortcode into the page and then try click ono like or try adding thee comment it will reload the pages and would not add any comment or like
**Expected behavior**
Like and Comment all the option should work
**Screenshots**
https://www.loom.com/share/15f2f6bf49d04fca944e30d3e612a869
**Support ticket links**
https://secure.helpscout.net/conversation/1112177882/15853/
|
non_test
|
bp activity shortcode like and comment option are not working describe the bug when using bp activity shortcode on some page and when members liking or commenting on the activity it forces the whole page to refresh and commenting doesn t work it just refreshes the page to reproduce steps to reproduce the behavior install buddyboss platform and buddyboss theme install the bp activity shortcode plugin create a new page and add bp activity shortcode into the page and then try click ono like or try adding thee comment it will reload the pages and would not add any comment or like expected behavior like and comment all the option should work screenshots support ticket links
| 0
|
816,939
| 30,618,122,728
|
IssuesEvent
|
2023-07-24 06:01:10
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Ruby GRPC client cannot retrieve streaming response
|
kind/bug lang/ruby priority/P2
|
<!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc version 1.26.0, ruby 2.5.5
### What operating system (Linux, Windows,...) and version?
Centos 7 Kernel 4.20.5-1.el7.elrepo.x86_64
### What runtime / compiler are you using (e.g. python version or version of gcc)
ruby version 2.5.5, grpc version 1.26.0
### What did you do?
If possible, provide a recipe for reproducing the error. Try being specific and include code snippets if helpful.
I use deadline TimeConsts::INFINITE_FUTURE on etcdv3-ruby client for watch some key,ruby client cannot get any events update after the connection have been idle for several hours, There is no error occur and etcd service is up and running. when using golang etcd client have not happen same case.
### What did you expect to see?
I expect client get new events after the connection have been idle for several hours.
### What did you see instead?
ruby etcdv3 client

golang etcdv3 client

### Anything else we should know about your project / environment?
my project run on Kubernetes version 1.13.
|
1.0
|
Ruby GRPC client cannot retrieve streaming response - <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc version 1.26.0, ruby 2.5.5
### What operating system (Linux, Windows,...) and version?
Centos 7 Kernel 4.20.5-1.el7.elrepo.x86_64
### What runtime / compiler are you using (e.g. python version or version of gcc)
ruby version 2.5.5, grpc version 1.26.0
### What did you do?
If possible, provide a recipe for reproducing the error. Try being specific and include code snippets if helpful.
I use deadline TimeConsts::INFINITE_FUTURE on etcdv3-ruby client for watch some key,ruby client cannot get any events update after the connection have been idle for several hours, There is no error occur and etcd service is up and running. when using golang etcd client have not happen same case.
### What did you expect to see?
I expect client get new events after the connection have been idle for several hours.
### What did you see instead?
ruby etcdv3 client

golang etcdv3 client

### Anything else we should know about your project / environment?
my project run on Kubernetes version 1.13.
|
non_test
|
ruby grpc client cannot retrieve streaming response this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers here grpc io mailing list stackoverflow with grpc tag issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using grpc version ruby what operating system linux windows and version centos kernel elrepo what runtime compiler are you using e g python version or version of gcc ruby version grpc version what did you do if possible provide a recipe for reproducing the error try being specific and include code snippets if helpful i use deadline timeconsts infinite future on ruby client for watch some key ruby client cannot get any events update after the connection have been idle for several hours there is no error occur and etcd service is up and running when using golang etcd client have not happen same case what did you expect to see i expect client get new events after the connection have been idle for several hours what did you see instead ruby client golang client anything else we should know about your project environment my project run on kubernetes version
| 0
|
306,195
| 26,446,871,063
|
IssuesEvent
|
2023-01-16 08:10:45
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix raw_ops.test_tensorflow_Softplus
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3915721669/jobs/6694156421" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py::test_tensorflow_Softplus[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-01-13T23:53:37.5008638Z E TypeError: softplus only takes keyword args (possible keys: ['features', 'name']). Please pass these args as kwargs instead.
2023-01-13T23:53:37.5009037Z E Falsifying example: test_tensorflow_Softplus(
2023-01-13T23:53:37.5009614Z E dtype_and_x=(['float16'], [array([-1.], dtype=float16)]),
2023-01-13T23:53:37.5009891Z E native_array=[False],
2023-01-13T23:53:37.5010124Z E num_positional_args=1,
2023-01-13T23:53:37.5010352Z E as_variable=[False],
2023-01-13T23:53:37.5010745Z E fn_tree='ivy.functional.frontends.tensorflow.raw_ops.Softplus',
2023-01-13T23:53:37.5011110Z E frontend='tensorflow',
2023-01-13T23:53:37.5011364Z E on_device='cpu',
2023-01-13T23:53:37.5011565Z E )
2023-01-13T23:53:37.5011737Z E
2023-01-13T23:53:37.5012247Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkAAMoBaIZGQAAPQAF') as a decorator on your test case
</details>
|
1.0
|
Fix raw_ops.test_tensorflow_Softplus - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3915721669/jobs/6694156421" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py::test_tensorflow_Softplus[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-01-13T23:53:37.5008638Z E TypeError: softplus only takes keyword args (possible keys: ['features', 'name']). Please pass these args as kwargs instead.
2023-01-13T23:53:37.5009037Z E Falsifying example: test_tensorflow_Softplus(
2023-01-13T23:53:37.5009614Z E dtype_and_x=(['float16'], [array([-1.], dtype=float16)]),
2023-01-13T23:53:37.5009891Z E native_array=[False],
2023-01-13T23:53:37.5010124Z E num_positional_args=1,
2023-01-13T23:53:37.5010352Z E as_variable=[False],
2023-01-13T23:53:37.5010745Z E fn_tree='ivy.functional.frontends.tensorflow.raw_ops.Softplus',
2023-01-13T23:53:37.5011110Z E frontend='tensorflow',
2023-01-13T23:53:37.5011364Z E on_device='cpu',
2023-01-13T23:53:37.5011565Z E )
2023-01-13T23:53:37.5011737Z E
2023-01-13T23:53:37.5012247Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkAAMoBaIZGQAAPQAF') as a decorator on your test case
</details>
|
test
|
fix raw ops test tensorflow softplus tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test tensorflow test raw ops py test tensorflow softplus e typeerror softplus only takes keyword args possible keys please pass these args as kwargs instead e falsifying example test tensorflow softplus e dtype and x dtype e native array e num positional args e as variable e fn tree ivy functional frontends tensorflow raw ops softplus e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
| 1
|
125,596
| 10,347,637,214
|
IssuesEvent
|
2019-09-04 17:52:37
|
cmu-db/terrier
|
https://api.github.com/repos/cmu-db/terrier
|
closed
|
DataTable consistency checker
|
beginner feature tests
|
We enforce a lot of storage invariants with TERRIER_ASSERTs at runtime throughout the codebase, but it would be helpful to have a function that we can invoke at any time in tests that, given a DataTable pointer, can verify the consistency of the table and its version chains (if they exist).
Some example invariants regarding ordering of version chain:
- Uncommitted UndoRecords should not appear after Committed UndoRecords
- If an Insert UndoRecord is present, it should be the last element in the version chain
- If a Delete UndoRecord is present, it should be the first element in the version chain
I think we're reaching a point of stability on the DataTable API and internal BlockLayout (after #174 goes in) that we could start implementing this. When it's done, you'll have a good understanding of how the storage layer is organized. We can discuss more invariants as we think of them.
If this class is written in test/include/util/storage_test_util.h it should be made a friend of the DataTable class so it can access the private fields.
|
1.0
|
DataTable consistency checker - We enforce a lot of storage invariants with TERRIER_ASSERTs at runtime throughout the codebase, but it would be helpful to have a function that we can invoke at any time in tests that, given a DataTable pointer, can verify the consistency of the table and its version chains (if they exist).
Some example invariants regarding ordering of version chain:
- Uncommitted UndoRecords should not appear after Committed UndoRecords
- If an Insert UndoRecord is present, it should be the last element in the version chain
- If a Delete UndoRecord is present, it should be the first element in the version chain
I think we're reaching a point of stability on the DataTable API and internal BlockLayout (after #174 goes in) that we could start implementing this. When it's done, you'll have a good understanding of how the storage layer is organized. We can discuss more invariants as we think of them.
If this class is written in test/include/util/storage_test_util.h it should be made a friend of the DataTable class so it can access the private fields.
|
test
|
datatable consistency checker we enforce a lot of storage invariants with terrier asserts at runtime throughout the codebase but it would be helpful to have a function that we can invoke at any time in tests that given a datatable pointer can verify the consistency of the table and its version chains if they exist some example invariants regarding ordering of version chain uncommitted undorecords should not appear after committed undorecords if an insert undorecord is present it should be the last element in the version chain if a delete undorecord is present it should be the first element in the version chain i think we re reaching a point of stability on the datatable api and internal blocklayout after goes in that we could start implementing this when it s done you ll have a good understanding of how the storage layer is organized we can discuss more invariants as we think of them if this class is written in test include util storage test util h it should be made a friend of the datatable class so it can access the private fields
| 1
|
40,824
| 10,583,043,295
|
IssuesEvent
|
2019-10-08 12:57:07
|
ocaml/opam
|
https://api.github.com/repos/ocaml/opam
|
closed
|
Solaris 10 patch command doesn't get file to patch
|
AREA: BUILD AREA: PORTABILITY
|
After editing
opam-full-1.2.2-rc2/src_ext/Makefile
to remove suppression of recipe echoing:
...
if [ -d patches/cmdliner ]; then \
cd cmdliner && \
for p in ../patches/cmdliner/*.patch; do \
patch -p1 < $p; \
done; \
fi
Looks like a unified context diff.
File to patch:
That is, the patch command prompts the user.
opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch
diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml
--- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100
+++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100
...
See the man page for the Solaris 10 patch command.
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html
In particular, we are interested in the "File Name Determination" section of that document.
If no file operand is specified, patch performs the following steps to obtain a path name:
If the patch contains the strings **\* and - - -, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ...
src/cmdliner.ml
src/cmdliner.ml
"Both" files exist.
If both files exist, patch assumes that no path name can be obtained from this step ...
If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input.
One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly.
Alan Feldstein
Cosmic Horizon
http://www.alanfeldstein.com
|
1.0
|
Solaris 10 patch command doesn't get file to patch - After editing
opam-full-1.2.2-rc2/src_ext/Makefile
to remove suppression of recipe echoing:
...
if [ -d patches/cmdliner ]; then \
cd cmdliner && \
for p in ../patches/cmdliner/*.patch; do \
patch -p1 < $p; \
done; \
fi
Looks like a unified context diff.
File to patch:
That is, the patch command prompts the user.
opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch
diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml
--- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100
+++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100
...
See the man page for the Solaris 10 patch command.
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html
In particular, we are interested in the "File Name Determination" section of that document.
If no file operand is specified, patch performs the following steps to obtain a path name:
If the patch contains the strings **\* and - - -, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ...
src/cmdliner.ml
src/cmdliner.ml
"Both" files exist.
If both files exist, patch assumes that no path name can be obtained from this step ...
If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input.
One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly.
Alan Feldstein
Cosmic Horizon
http://www.alanfeldstein.com
|
non_test
|
solaris patch command doesn t get file to patch after editing opam full src ext makefile to remove suppression of recipe echoing if then cd cmdliner for p in patches cmdliner patch do patch p done fi looks like a unified context diff file to patch that is the patch command prompts the user opam full src ext patches cmdliner backport pre patch diff naur cmdliner src cmdliner ml cmdliner patched src cmdliner ml cmdliner src cmdliner ml cmdliner patched src cmdliner ml see the man page for the solaris patch command in particular we are interested in the file name determination section of that document if no file operand is specified patch performs the following steps to obtain a path name if the patch contains the strings and patch strips components from the beginning of each path name depending on the presence or value of the p option then tests for the existence of both files in the current directory src cmdliner ml src cmdliner ml both files exist if both files exist patch assumes that no path name can be obtained from this step if no path name can be obtained by applying the previous steps patch will write a prompt to standard output and request a file name interactively from standard input one possible solution is for the makefile to read the patch file extracting the path name using the linux patch command algorithm then feed that path name to the patch command explicitly alan feldstein cosmic horizon
| 0
|
795,857
| 28,089,399,389
|
IssuesEvent
|
2023-03-30 12:05:30
|
AY2223S2-CS2103-F10-2/tp
|
https://api.github.com/repos/AY2223S2-CS2103-F10-2/tp
|
closed
|
Clicking "Open URL" button in help menu causes unhandled exception on Linux
|
priority.High type.Bug severity.Medium
|
### Description
When on Linux, the "Open URL" button causes an unhandled exception to be thrown.
### Steps for reproducing
1. Click Help > Help
2. Click "Open URL"
### Expected result
Open link to user guide in browser.
### Actual result
Exception thrown.
### Other details
None
### Error output
```
Exception in thread "JavaFX Application Thread" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at javafx.fxml.FXMLLoader$MethodHandler.invoke(FXMLLoader.java:1787)
at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1670)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:86)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:49)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Node.fireEvent(Node.java:8879)
at javafx.scene.control.Button.fire(Button.java:200)
at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:206)
at com.sun.javafx.scene.control.inputmap.InputMap.handle(InputMap.java:274)
at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:218)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Scene$MouseHandler.process(Scene.java:3851)
at javafx.scene.Scene$MouseHandler.access$1200(Scene.java:3579)
at javafx.scene.Scene.processMouseEvent(Scene.java:1849)
at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2588)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:397)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:295)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleMouseEvent$2(GlassViewEventHandler.java:434)
at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:390)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:433)
at com.sun.glass.ui.View.handleMouseEvent(View.java:556)
at com.sun.glass.ui.View.notifyMouse(View.java:942)
at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method)
at com.sun.glass.ui.gtk.GtkApplication.lambda$runLoop$11(GtkApplication.java:277)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.javafx.reflect.Trampoline.invoke(MethodUtil.java:76)
at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.javafx.reflect.MethodUtil.invoke(MethodUtil.java:273)
at com.sun.javafx.fxml.MethodHelper.invoke(MethodHelper.java:83)
at javafx.fxml.FXMLLoader$MethodHandler.invoke(FXMLLoader.java:1784)
... 47 more
Caused by: java.lang.UnsupportedOperationException: The BROWSE action is not supported on the current platform!
at java.desktop/java.awt.Desktop.checkActionSupport(Desktop.java:380)
at java.desktop/java.awt.Desktop.browse(Desktop.java:524)
at seedu.address.ui.HelpWindow.openUrl(HelpWindow.java:101)
... 58 more
```
|
1.0
|
Clicking "Open URL" button in help menu causes unhandled exception on Linux - ### Description
When on Linux, the "Open URL" button causes an unhandled exception to be thrown.
### Steps for reproducing
1. Click Help > Help
2. Click "Open URL"
### Expected result
Open link to user guide in browser.
### Actual result
Exception thrown.
### Other details
None
### Error output
```
Exception in thread "JavaFX Application Thread" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at javafx.fxml.FXMLLoader$MethodHandler.invoke(FXMLLoader.java:1787)
at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1670)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:86)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:49)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Node.fireEvent(Node.java:8879)
at javafx.scene.control.Button.fire(Button.java:200)
at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:206)
at com.sun.javafx.scene.control.inputmap.InputMap.handle(InputMap.java:274)
at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:218)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Scene$MouseHandler.process(Scene.java:3851)
at javafx.scene.Scene$MouseHandler.access$1200(Scene.java:3579)
at javafx.scene.Scene.processMouseEvent(Scene.java:1849)
at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2588)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:397)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:295)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleMouseEvent$2(GlassViewEventHandler.java:434)
at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:390)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:433)
at com.sun.glass.ui.View.handleMouseEvent(View.java:556)
at com.sun.glass.ui.View.notifyMouse(View.java:942)
at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method)
at com.sun.glass.ui.gtk.GtkApplication.lambda$runLoop$11(GtkApplication.java:277)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.javafx.reflect.Trampoline.invoke(MethodUtil.java:76)
at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.sun.javafx.reflect.MethodUtil.invoke(MethodUtil.java:273)
at com.sun.javafx.fxml.MethodHelper.invoke(MethodHelper.java:83)
at javafx.fxml.FXMLLoader$MethodHandler.invoke(FXMLLoader.java:1784)
... 47 more
Caused by: java.lang.UnsupportedOperationException: The BROWSE action is not supported on the current platform!
at java.desktop/java.awt.Desktop.checkActionSupport(Desktop.java:380)
at java.desktop/java.awt.Desktop.browse(Desktop.java:524)
at seedu.address.ui.HelpWindow.openUrl(HelpWindow.java:101)
... 58 more
```
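The root cause in the trace above is that `java.awt.Desktop.browse` throws `UnsupportedOperationException` on platforms where the `BROWSE` action is unavailable (common on Linux desktops and headless JVMs). A hedged sketch of a guard that `HelpWindow.openUrl` could apply — the class name, fallback message, and the headless property set for determinism are all assumptions, not the project's actual code:

```java
import java.awt.Desktop;
import java.net.URI;

// Hypothetical guard for an openUrl-style handler: check that the BROWSE
// action is supported before calling Desktop.browse(), instead of letting
// UnsupportedOperationException propagate up through the FXML event handler.
public class BrowseGuard {
    static boolean canBrowse() {
        // Both checks are needed: the desktop may be absent entirely,
        // or present but missing the BROWSE action.
        return Desktop.isDesktopSupported()
                && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE);
    }

    public static void main(String[] args) throws Exception {
        // Forced headless here only so this sketch behaves the same
        // everywhere; a real app would omit this line.
        System.setProperty("java.awt.headless", "true");

        URI url = URI.create("https://example.com");
        if (canBrowse()) {
            Desktop.getDesktop().browse(url);
            System.out.println("opened");
        } else {
            // Fallback: surface the link to the user instead of crashing.
            System.out.println("copy this link: " + url);
        }
    }
}
```

In the headless configuration above, `isDesktopSupported()` returns false, so the fallback branch runs and the link is printed rather than browsed.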
|
non_test
|
clicking open url button in help menu causes unhandled exception on linux description when on linux the open url button causes an unhandled exception to be thrown steps for reproducing click help help click open url expected result open link to user guide in browser actual result exception thrown other details none error output exception in thread javafx application thread java lang runtimeexception java lang reflect invocationtargetexception at javafx fxml fxmlloader methodhandler invoke fxmlloader java at javafx fxml fxmlloader controllermethodeventhandler handle fxmlloader java at com sun javafx event compositeeventhandler dispatchbubblingevent compositeeventhandler java at com sun javafx event eventhandlermanager dispatchbubblingevent eventhandlermanager java at com sun javafx event eventhandlermanager dispatchbubblingevent eventhandlermanager java at com sun javafx event compositeeventdispatcher dispatchbubblingevent compositeeventdispatcher java at com sun javafx event basiceventdispatcher dispatchevent basiceventdispatcher java at com sun javafx event eventdispatchchainimpl dispatchevent eventdispatchchainimpl java at com sun javafx event basiceventdispatcher dispatchevent basiceventdispatcher java at com sun javafx event eventdispatchchainimpl dispatchevent eventdispatchchainimpl java at com sun javafx event basiceventdispatcher dispatchevent basiceventdispatcher java at com sun javafx event eventdispatchchainimpl dispatchevent eventdispatchchainimpl java at com sun javafx event eventutil fireeventimpl eventutil java at com sun javafx event eventutil fireevent eventutil java at javafx event event fireevent event java at javafx scene node fireevent node java at javafx scene control button fire button java at com sun javafx scene control behavior buttonbehavior mousereleased buttonbehavior java at com sun javafx scene control inputmap inputmap handle inputmap java at com sun javafx event compositeeventhandler normaleventhandlerrecord handlebubblingevent 
compositeeventhandler java at com sun javafx event compositeeventhandler dispatchbubblingevent compositeeventhandler java at com sun javafx event eventhandlermanager dispatchbubblingevent eventhandlermanager java at com sun javafx event eventhandlermanager dispatchbubblingevent eventhandlermanager java at com sun javafx event compositeeventdispatcher dispatchbubblingevent compositeeventdispatcher java at com sun javafx event basiceventdispatcher dispatchevent basiceventdispatcher java at com sun javafx event eventdispatchchainimpl dispatchevent eventdispatchchainimpl java at com sun javafx event basiceventdispatcher dispatchevent basiceventdispatcher java at com sun javafx event eventdispatchchainimpl dispatchevent eventdispatchchainimpl java at com sun javafx event basiceventdispatcher dispatchevent basiceventdispatcher java at com sun javafx event eventdispatchchainimpl dispatchevent eventdispatchchainimpl java at com sun javafx event eventutil fireeventimpl eventutil java at com sun javafx event eventutil fireevent eventutil java at javafx event event fireevent event java at javafx scene scene mousehandler process scene java at javafx scene scene mousehandler access scene java at javafx scene scene processmouseevent scene java at javafx scene scene scenepeerlistener mouseevent scene java at com sun javafx tk quantum glassvieweventhandler mouseeventnotification run glassvieweventhandler java at com sun javafx tk quantum glassvieweventhandler mouseeventnotification run glassvieweventhandler java at java base java security accesscontroller doprivileged native method at com sun javafx tk quantum glassvieweventhandler lambda handlemouseevent glassvieweventhandler java at com sun javafx tk quantum quantumtoolkit runwithoutrenderlock quantumtoolkit java at com sun javafx tk quantum glassvieweventhandler handlemouseevent glassvieweventhandler java at com sun glass ui view handlemouseevent view java at com sun glass ui view notifymouse view java at com sun glass ui gtk 
gtkapplication runloop native method at com sun glass ui gtk gtkapplication lambda runloop gtkapplication java at java base java lang thread run thread java caused by java lang reflect invocationtargetexception at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javafx reflect trampoline invoke methodutil java at jdk internal reflect invoke unknown source at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javafx reflect methodutil invoke methodutil java at com sun javafx fxml methodhelper invoke methodhelper java at javafx fxml fxmlloader methodhandler invoke fxmlloader java more caused by java lang unsupportedoperationexception the browse action is not supported on the current platform at java desktop java awt desktop checkactionsupport desktop java at java desktop java awt desktop browse desktop java at seedu address ui helpwindow openurl helpwindow java more
| 0
|
50,586
| 6,103,195,652
|
IssuesEvent
|
2017-06-20 18:10:46
|
eclipse/omr
|
https://api.github.com/repos/eclipse/omr
|
opened
|
Segfault when passing `--gtest_filter="*"` to omrgctest
|
bug test
|
When running omrgctest with a test filter, the test segfaults while parsing command line arguments. I would have expected it to run only the tests which match the filter.
```
./omrgctest --gtest_filter="*" -configListFile=fvtest/gctest/configuration/fvConfigListFile.txt
[==========] Running 8 tests from 1 test case.
[----------] Global test environment set-up.
fish: “./omrgctest --gtest_filter="*"…” terminated by signal SIGSEGV (Address boundary error)
```
```
(gdb) bt
#0 0x00007ffff6c95597 in __strncmp_sse42 () from /usr/lib/libc.so.6
#1 0x000000000041689c in BaseEnvironment::SetUp (this=0x860000) at ../util/testEnvironment.hpp:78
#2 0x0000000000426473 in testing::internal::SetUpEnvironment (env=0x860000) at ../../third_party/gtest-1.7.0/src/gtest.cc:4212
#3 0x000000000043da98 in std::for_each<__gnu_cxx::__normal_iterator<testing::Environment* const*, std::vector<testing::Environment*, std::allocator<testing::Environment*> > >, void (*)(testing::Environment*)> (__first=0x860000, __last=0x545345545f485441, __f=0x426450 <testing::internal::SetUpEnvironment(testing::Environment*)>) at /usr/include/c++/7.1.1/bits/stl_algo.h:3884
#4 0x0000000000438e6a in testing::internal::ForEach<std::vector<testing::Environment*, std::allocator<testing::Environment*> >, void (*)(testing::Environment*)> (c=std::vector of length 1, capacity 1 = {...}, functor=0x426450 <testing::internal::SetUpEnvironment(testing::Environment*)>) at ../../third_party/gtest-1.7.0/src/gtest-internal-inl.h:296
#5 0x00000000004266fb in testing::internal::UnitTestImpl::RunAllTests (this=0x85fc20) at ../../third_party/gtest-1.7.0/src/gtest.cc:4309
#6 0x000000000043d339 in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x85fc20, method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0x42649c <testing::internal::UnitTestImpl::RunAllTests()>, location=0x5c4ad8 "auxiliary test code (environments or event listeners)") at ../../third_party/gtest-1.7.0/src/gtest.cc:2078
#7 0x000000000043891a in testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x85fc20, method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0x42649c <testing::internal::UnitTestImpl::RunAllTests()>, location=0x5c4ad8 "auxiliary test code (environments or event listeners)") at ../../third_party/gtest-1.7.0/src/gtest.cc:2114
#8 0x0000000000425438 in testing::UnitTest::Run (this=0x849d00 <testing::UnitTest::GetInstance()::instance>) at ../../third_party/gtest-1.7.0/src/gtest.cc:3928
#9 0x0000000000416849 in RUN_ALL_TESTS () at ../../third_party/gtest-1.7.0/include/gtest/gtest.h:2288
#10 0x0000000000416724 in testMain (argc=2, argv=0x7fffffffe838, envp=0x7fffffffe858) at main.cpp:44
#11 0x0000000000419666 in main (argc=3, argv=0x7fffffffe838, envp=0x7fffffffe858) at ../../fvtest/omrGtestGlue/argmain.cpp:62
```
It looks like there is a mismatch between argc and the actual number of arguments in `BaseEnvironment::SetUp`
```
(gdb) p _argc
$1 = 3
(gdb) p _argv[0]
$4 = 0x7fffffffeb09 "/home/aryoung/wsp/gsoc/omr/omrgctest"
(gdb) p _argv[1]
$5 = 0x7fffffffeb3f "-configListFile=fvtest/gctest/configuration/fvConfigListFile.txt"
(gdb) p _argv[2]
$6 = 0x0
```
|
1.0
|
Segfault when passing `--gtest_filter="*"` to omrgctest - When running omrgctest with a test filter, the test segfaults while parsing command line arguments. I would have expected it to run only the tests which match the filter.
```
./omrgctest --gtest_filter="*" -configListFile=fvtest/gctest/configuration/fvConfigListFile.txt
[==========] Running 8 tests from 1 test case.
[----------] Global test environment set-up.
fish: “./omrgctest --gtest_filter="*"…” terminated by signal SIGSEGV (Address boundary error)
```
```
(gdb) bt
#0 0x00007ffff6c95597 in __strncmp_sse42 () from /usr/lib/libc.so.6
#1 0x000000000041689c in BaseEnvironment::SetUp (this=0x860000) at ../util/testEnvironment.hpp:78
#2 0x0000000000426473 in testing::internal::SetUpEnvironment (env=0x860000) at ../../third_party/gtest-1.7.0/src/gtest.cc:4212
#3 0x000000000043da98 in std::for_each<__gnu_cxx::__normal_iterator<testing::Environment* const*, std::vector<testing::Environment*, std::allocator<testing::Environment*> > >, void (*)(testing::Environment*)> (__first=0x860000, __last=0x545345545f485441, __f=0x426450 <testing::internal::SetUpEnvironment(testing::Environment*)>) at /usr/include/c++/7.1.1/bits/stl_algo.h:3884
#4 0x0000000000438e6a in testing::internal::ForEach<std::vector<testing::Environment*, std::allocator<testing::Environment*> >, void (*)(testing::Environment*)> (c=std::vector of length 1, capacity 1 = {...}, functor=0x426450 <testing::internal::SetUpEnvironment(testing::Environment*)>) at ../../third_party/gtest-1.7.0/src/gtest-internal-inl.h:296
#5 0x00000000004266fb in testing::internal::UnitTestImpl::RunAllTests (this=0x85fc20) at ../../third_party/gtest-1.7.0/src/gtest.cc:4309
#6 0x000000000043d339 in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x85fc20, method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0x42649c <testing::internal::UnitTestImpl::RunAllTests()>, location=0x5c4ad8 "auxiliary test code (environments or event listeners)") at ../../third_party/gtest-1.7.0/src/gtest.cc:2078
#7 0x000000000043891a in testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (object=0x85fc20, method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0x42649c <testing::internal::UnitTestImpl::RunAllTests()>, location=0x5c4ad8 "auxiliary test code (environments or event listeners)") at ../../third_party/gtest-1.7.0/src/gtest.cc:2114
#8 0x0000000000425438 in testing::UnitTest::Run (this=0x849d00 <testing::UnitTest::GetInstance()::instance>) at ../../third_party/gtest-1.7.0/src/gtest.cc:3928
#9 0x0000000000416849 in RUN_ALL_TESTS () at ../../third_party/gtest-1.7.0/include/gtest/gtest.h:2288
#10 0x0000000000416724 in testMain (argc=2, argv=0x7fffffffe838, envp=0x7fffffffe858) at main.cpp:44
#11 0x0000000000419666 in main (argc=3, argv=0x7fffffffe838, envp=0x7fffffffe858) at ../../fvtest/omrGtestGlue/argmain.cpp:62
```
It looks like there is a mismatch between argc and the actual number of arguments in `BaseEnvironment::SetUp`
```
(gdb) p _argc
$1 = 3
(gdb) p _argv[0]
$4 = 0x7fffffffeb09 "/home/aryoung/wsp/gsoc/omr/omrgctest"
(gdb) p _argv[1]
$5 = 0x7fffffffeb3f "-configListFile=fvtest/gctest/configuration/fvConfigListFile.txt"
(gdb) p _argv[2]
$6 = 0x0
```
|
test
|
segfault when passing gtest filter to omrgctest when running omrgctest with a test filter the test segfaults while parsing command line arguments i would have expected it to run only the tests which match the filter omrgctest gtest filter configlistfile fvtest gctest configuration fvconfiglistfile txt running tests from test case global test environment set up fish “ omrgctest gtest filter …” terminated by signal sigsegv address boundary error gdb bt in strncmp from usr lib libc so in baseenvironment setup this at util testenvironment hpp in testing internal setupenvironment env at third party gtest src gtest cc in std for each void testing environment first last f at usr include c bits stl algo h in testing internal foreach void testing environment c std vector of length capacity functor at third party gtest src gtest internal inl h in testing internal unittestimpl runalltests this at third party gtest src gtest cc in testing internal handlesehexceptionsinmethodifsupported object method bool testing internal unittestimpl testing internal unittestimpl const location auxiliary test code environments or event listeners at third party gtest src gtest cc in testing internal handleexceptionsinmethodifsupported object method bool testing internal unittestimpl testing internal unittestimpl const location auxiliary test code environments or event listeners at third party gtest src gtest cc in testing unittest run this at third party gtest src gtest cc in run all tests at third party gtest include gtest gtest h in testmain argc argv envp at main cpp in main argc argv envp at fvtest omrgtestglue argmain cpp it looks like there is a mismatch between argc and the actual number of arguments in baseenvironment setup gdb p argc gdb p argv home aryoung wsp gsoc omr omrgctest gdb p argv configlistfile fvtest gctest configuration fvconfiglistfile txt gdb p argv
| 1
|
219,037
| 17,046,765,579
|
IssuesEvent
|
2021-07-06 00:50:08
|
ProfessorAmanda/econsimulations
|
https://api.github.com/repos/ProfessorAmanda/econsimulations
|
opened
|
Two-sample test: population distribution "reveal"
|
Hypothesis Testing enhancement
|
What do you guys all think about what we should "reveal" for the two-sample case? I think the best option is: Show the distributions of population 1 and 2, overlaid on the same graph, where dots of each population have different colors. I also think we could show the location of mean 1 and the location of mean 2 as vertical lines.
Below is what we currently show (which does not make sense for this case)
<img width="630" alt="Screen Shot 2021-07-05 at 8 46 31 PM" src="https://user-images.githubusercontent.com/53241468/124526948-352dd980-ddd2-11eb-8152-b0c90bdf93aa.png">
|
1.0
|
Two-sample test: population distribution "reveal" - What do you guys all think about what we should "reveal" for the two-sample case? I think the best option is: Show the distributions of population 1 and 2, overlaid on the same graph, where dots of each population have different colors. I also think we could show the location of mean 1 and the location of mean 2 as vertical lines.
Below is what we currently show (which does not make sense for this case)
<img width="630" alt="Screen Shot 2021-07-05 at 8 46 31 PM" src="https://user-images.githubusercontent.com/53241468/124526948-352dd980-ddd2-11eb-8152-b0c90bdf93aa.png">
|
test
|
two sample test population distribution reveal what do you guys all think about what we should reveal for the two sample case i think the best option is show the distributions of population and overlaid on the same graph where dots of each population have different colors i also think we could show the location of mean and the location of mean as vertical lines below is what we currently show which does not make sense for this case img width alt screen shot at pm src
| 1
|
193,662
| 14,659,310,374
|
IssuesEvent
|
2020-12-28 20:08:53
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
sql/logictest: TestSqlLiteLogic failed
|
C-test-failure O-robot branch-master
|
[(sql/logictest).TestSqlLiteLogic failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2541938&tab=buildLog) on [master@735da601263e1d0731cefd6ef3e6709d0ab479a1](https://github.com/cockroachdb/cockroach/commits/735da601263e1d0731cefd6ef3e6709d0ab479a1):
```
/go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23097: SELECT pk FROM tab3 WHERE ((col0 < 458 OR (((((((((((col3 IN (SELECT col0 FROM tab3 WHERE (col3 < 362) AND col0 > 823 AND col1 > 549.55))) OR col0 >= 812 AND col0 > 501 OR col1 IS NULL AND (col1 IS NULL OR col0 <= 99) AND ((col0 <= 225 AND col0 >= 661 AND col4 > 610.90 AND (col3 > 472) OR col3 < 907 AND ((col1 >= 998.48)) AND (col3 < 815) OR col0 > 478 OR col3 = 253) AND col3 > 107) OR col0 <= 515 OR col0 > 151 OR col4 IS NULL AND (col3 < 820 AND col0 < 524 OR col0 < 905 OR col0 >= 375))))) AND (col3 > 172) OR (col1 < 8.57)) AND col4 >= 402.98 AND col3 >= 773 OR col4 >= 500.63 AND col1 >= 506.35 AND col0 IN (126,773,997,514,831,480) AND col0 >= 453)) OR (col3 > 713)))) AND ((col1 <= 837.43)) OR (col0 < 103) AND col1 BETWEEN 250.56 AND 894.46 AND (col3 <= 326)) OR col4 < 55.9 OR ((col1 <= 562.43)) AND ((col1 >= 130.97))
expected success, but found
(XX000) internal error: expected FD determinant and dependants to be disjoint: key(14); ()-->(11), (12)-->(14), (14)-->(12,14) (2)
func_dep.go:1513: in Verify()
DETAIL: stack trace:
github.com/cockroachdb/cockroach/pkg/sql/opt/props/func_dep.go:1513: Verify()
github.com/cockroachdb/cockroach/pkg/sql/opt/props/verify.go:50: Verify()
github.com/cockroachdb/cockroach/pkg/sql/opt/memo/check_expr.go:34: CheckExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/memo/expr.og.go:18119: MemoizeSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:1078: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:589: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:1005: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:441: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/explorer.og.go:307: exploreSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/explorer.og.go:20: exploreGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/explorer.go:178: exploreGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:463: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
NOTE: internal errors may have more details in logs. Use -show-logs.
E201224 11:17:25.142618 192230419 sql/logictest/logic.go:3394
pq: internal error: expected FD determinant and dependants to be disjoint: key(14); ()-->(11), (12)-->(14), (14)-->(12,14) (2)
logic.go:2165:
pq: internal error: expected FD determinant and dependants to be disjoint: key(14); ()-->(11), (12)-->(14), (14)-->(12,14) (2)
--- done: /go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test with config fakedist-spec-planning: 4918 tests, 2 failures
E201224 11:17:25.142900 192230419 sql/logictest/logic.go:3419 /go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23101: too many errors encountered, skipping the rest of the input
logic.go:2977:
/go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23101: error while processing
logic.go:2977: /go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23101: too many errors encountered, skipping the rest of the input
--- FAIL: TestSqlLiteLogic/fakedist-spec-planning/slt_good_0.test#13 (99.73s)
```
<details><summary>More</summary><p>
```
make stressrace TESTS=TestSqlLiteLogic PKG=./pkg/sql/logictest TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestSqlLiteLogic.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
1.0
|
sql/logictest: TestSqlLiteLogic failed - [(sql/logictest).TestSqlLiteLogic failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2541938&tab=buildLog) on [master@735da601263e1d0731cefd6ef3e6709d0ab479a1](https://github.com/cockroachdb/cockroach/commits/735da601263e1d0731cefd6ef3e6709d0ab479a1):
```
/go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23097: SELECT pk FROM tab3 WHERE ((col0 < 458 OR (((((((((((col3 IN (SELECT col0 FROM tab3 WHERE (col3 < 362) AND col0 > 823 AND col1 > 549.55))) OR col0 >= 812 AND col0 > 501 OR col1 IS NULL AND (col1 IS NULL OR col0 <= 99) AND ((col0 <= 225 AND col0 >= 661 AND col4 > 610.90 AND (col3 > 472) OR col3 < 907 AND ((col1 >= 998.48)) AND (col3 < 815) OR col0 > 478 OR col3 = 253) AND col3 > 107) OR col0 <= 515 OR col0 > 151 OR col4 IS NULL AND (col3 < 820 AND col0 < 524 OR col0 < 905 OR col0 >= 375))))) AND (col3 > 172) OR (col1 < 8.57)) AND col4 >= 402.98 AND col3 >= 773 OR col4 >= 500.63 AND col1 >= 506.35 AND col0 IN (126,773,997,514,831,480) AND col0 >= 453)) OR (col3 > 713)))) AND ((col1 <= 837.43)) OR (col0 < 103) AND col1 BETWEEN 250.56 AND 894.46 AND (col3 <= 326)) OR col4 < 55.9 OR ((col1 <= 562.43)) AND ((col1 >= 130.97))
expected success, but found
(XX000) internal error: expected FD determinant and dependants to be disjoint: key(14); ()-->(11), (12)-->(14), (14)-->(12,14) (2)
func_dep.go:1513: in Verify()
DETAIL: stack trace:
github.com/cockroachdb/cockroach/pkg/sql/opt/props/func_dep.go:1513: Verify()
github.com/cockroachdb/cockroach/pkg/sql/opt/props/verify.go:50: Verify()
github.com/cockroachdb/cockroach/pkg/sql/opt/memo/check_expr.go:34: CheckExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/memo/expr.og.go:18119: MemoizeSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:1078: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:589: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:1005: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/norm/factory.og.go:441: ConstructSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/explorer.og.go:307: exploreSelect()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/explorer.og.go:20: exploreGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/explorer.go:178: exploreGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:463: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:450: optimizeGroup()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:250: optimizeExpr()
github.com/cockroachdb/cockroach/pkg/sql/opt/xform/optimizer.go:505: optimizeGroupMember()
NOTE: internal errors may have more details in logs. Use -show-logs.
E201224 11:17:25.142618 192230419 sql/logictest/logic.go:3394
pq: internal error: expected FD determinant and dependants to be disjoint: key(14); ()-->(11), (12)-->(14), (14)-->(12,14) (2)
logic.go:2165:
pq: internal error: expected FD determinant and dependants to be disjoint: key(14); ()-->(11), (12)-->(14), (14)-->(12,14) (2)
--- done: /go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test with config fakedist-spec-planning: 4918 tests, 2 failures
E201224 11:17:25.142900 192230419 sql/logictest/logic.go:3419 /go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23101: too many errors encountered, skipping the rest of the input
logic.go:2977:
/go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23101: error while processing
logic.go:2977: /go/src/github.com/cockroachdb/sqllogictest/test/index/in/100/slt_good_0.test:23101: too many errors encountered, skipping the rest of the input
--- FAIL: TestSqlLiteLogic/fakedist-spec-planning/slt_good_0.test#13 (99.73s)
```
<details><summary>More</summary><p>
```
make stressrace TESTS=TestSqlLiteLogic PKG=./pkg/sql/logictest TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestSqlLiteLogic.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
sql logictest testsqllitelogic failed on go src github com cockroachdb sqllogictest test index in slt good test select pk from where and or and or is null and is null or and and or and or and or or is null and and or and or and and in and or and expected success but found internal error expected fd determinant and dependants to be disjoint key func dep go in verify detail stack trace github com cockroachdb cockroach pkg sql opt props func dep go verify github com cockroachdb cockroach pkg sql opt props verify go verify github com cockroachdb cockroach pkg sql opt memo check expr go checkexpr github com cockroachdb cockroach pkg sql opt memo expr og go memoizeselect github com cockroachdb cockroach pkg sql opt norm factory og go constructselect github com cockroachdb cockroach pkg sql opt norm factory og go constructselect github com cockroachdb cockroach pkg sql opt norm factory og go constructselect github com cockroachdb cockroach pkg sql opt norm factory og go constructselect github com cockroachdb cockroach pkg sql opt xform explorer og go exploreselect github com cockroachdb cockroach pkg sql opt xform explorer og go exploregroupmember github com cockroachdb cockroach pkg sql opt xform explorer go exploregroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember github com cockroachdb cockroach pkg sql opt 
xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroup github com cockroachdb cockroach pkg sql opt xform optimizer go optimizeexpr github com cockroachdb cockroach pkg sql opt xform optimizer go optimizegroupmember note internal errors may have more details in logs use show logs sql logictest logic go pq internal error expected fd determinant and dependants to be disjoint key logic go pq internal error expected fd determinant and dependants to be disjoint key done go src github com cockroachdb sqllogictest test index in slt good test with config fakedist spec planning tests failures sql logictest logic go go src github com cockroachdb sqllogictest test index in slt good test too many errors encountered skipping the rest of the input logic go go src github com cockroachdb sqllogictest test index in slt good test error while processing logic go go src github com cockroachdb sqllogictest test index in slt good test too many errors encountered skipping the rest of the input fail testsqllitelogic fakedist spec planning slt good test more make stressrace tests testsqllitelogic pkg pkg sql logictest testtimeout stressflags timeout powered by
| 1
|
200,644
| 15,116,932,749
|
IssuesEvent
|
2021-02-09 07:38:58
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
An error pops up after attaching one ADLS Gen2 folder which name contains spaces
|
:gear: adls gen2 :gear: attach 🧪 testing
|
**Storage Explorer Version**: 1.17.0
**Build Number**: 20210116.6
**Platform/OS**: Windows 10/ CentOS 7.6.1810 (Core)/ MacOS Catalina
**Architecture**: ia32/x64
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Expand one ADLS Gen2 storage account -> Blob Containers.
2. Create a new blob container -> Create a new folder named 'New Folder'.
3. Right click the folder -> Click 'Get Shared Access Signature...' -> Create and copy the 'URI' in the Shared Access Signature dialog.
4. Attach it via SAS -> Check there is no error.
## Expected Experience ##
No error pops up and the folder name shows well.
## Actual Experience ##
An error pops up.

|
1.0
|
An error pops up after attaching one ADLS Gen2 folder which name contains spaces - **Storage Explorer Version**: 1.17.0
**Build Number**: 20210116.6
**Platform/OS**: Windows 10/ CentOS 7.6.1810 (Core)/ MacOS Catalina
**Architecture**: ia32/x64
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Expand one ADLS Gen2 storage account -> Blob Containers.
2. Create a new blob container -> Create a new folder named 'New Folder'.
3. Right click the folder -> Click 'Get Shared Access Signature...' -> Create and copy the 'URI' in the Shared Access Signature dialog.
4. Attach it via SAS -> Check there is no error.
## Expected Experience ##
No error pops up and the folder name shows well.
## Actual Experience ##
An error pops up.

|
test
|
an error pops up after attaching one adls folder which name contains spaces storage explorer version build number platform os windows centos core macos catalina architecture regression from not a regression steps to reproduce expand one adls storage account blob containers create a new blob container create a new folder named new folder right click the folder click get shared access signature create and copy the uri in the shared access signature dialog attach it via sas check there is no error expected experience no error pops up and the folder name shows well actual experience an error pops up
| 1
|
48,146
| 13,301,502,090
|
IssuesEvent
|
2020-08-25 13:02:57
|
rammatzkvosky/jdb
|
https://api.github.com/repos/rammatzkvosky/jdb
|
opened
|
CVE-2017-7658 (High) detected in jetty-server-9.4.8.v20171121.jar, jetty-http-9.4.8.v20171121.jar
|
security vulnerability
|
## CVE-2017-7658 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jetty-server-9.4.8.v20171121.jar</b>, <b>jetty-http-9.4.8.v20171121.jar</b></p></summary>
<p>
<details><summary><b>jetty-server-9.4.8.v20171121.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /tmp/ws-scm/jdb/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.8.v20171121/jetty-server-9.4.8.v20171121.jar</p>
<p>
Dependency Hierarchy:
- spark-core-2.7.2.jar (Root Library)
- :x: **jetty-server-9.4.8.v20171121.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-http-9.4.8.v20171121.jar</b></p></summary>
<p>The Eclipse Jetty Project</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /tmp/ws-scm/jdb/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-http/9.4.8.v20171121/jetty-http-9.4.8.v20171121.jar</p>
<p>
Dependency Hierarchy:
- spark-core-2.7.2.jar (Root Library)
- jetty-server-9.4.8.v20171121.jar
- :x: **jetty-http-9.4.8.v20171121.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/jdb/commit/9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c">9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty Server, versions 9.2.x and older, 9.3.x (all non HTTP/1.x configurations), and 9.4.x (all HTTP/1.x configurations), when presented with two content-lengths headers, Jetty ignored the second. When presented with a content-length and a chunked encoding header, the content-length was ignored (as per RFC 2616). If an intermediary decided on the shorter length, but still passed on the longer body, then body content could be interpreted by Jetty as a pipelined request. If the intermediary was imposing authorization, the fake pipelined request would bypass that authorization.
<p>Publish Date: 2018-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7658>CVE-2017-7658</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7658">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7658</a></p>
<p>Release Date: 2018-06-26</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty.aggregate:jetty-client:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty:jetty-http:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.eclipse.jetty","packageName":"jetty-server","packageVersion":"9.4.8.v20171121","isTransitiveDependency":true,"dependencyTree":"com.sparkjava:spark-core:2.7.2;org.eclipse.jetty:jetty-server:9.4.8.v20171121","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.eclipse.jetty:jetty-server:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty.aggregate:jetty-client:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty:jetty-http:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606"},{"packageType":"Java","groupId":"org.eclipse.jetty","packageName":"jetty-http","packageVersion":"9.4.8.v20171121","isTransitiveDependency":true,"dependencyTree":"com.sparkjava:spark-core:2.7.2;org.eclipse.jetty:jetty-server:9.4.8.v20171121;org.eclipse.jetty:jetty-http:9.4.8.v20171121","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.eclipse.jetty:jetty-server:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty.aggregate:jetty-client:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty:jetty-http:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606"}],"vulnerabilityIdentifier":"CVE-2017-7658","vulnerabilityDetails":"In Eclipse Jetty Server, versions 9.2.x and older, 9.3.x (all non HTTP/1.x configurations), and 9.4.x (all HTTP/1.x configurations), when presented with two content-lengths headers, Jetty ignored the second. When presented with a content-length and a chunked encoding header, the content-length was ignored (as per RFC 2616). If an intermediary decided on the shorter length, but still passed on the longer body, then body content could be interpreted by Jetty as a pipelined request. 
If the intermediary was imposing authorization, the fake pipelined request would bypass that authorization.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7658","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-7658 (High) detected in jetty-server-9.4.8.v20171121.jar, jetty-http-9.4.8.v20171121.jar - ## CVE-2017-7658 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jetty-server-9.4.8.v20171121.jar</b>, <b>jetty-http-9.4.8.v20171121.jar</b></p></summary>
<p>
<details><summary><b>jetty-server-9.4.8.v20171121.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /tmp/ws-scm/jdb/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.8.v20171121/jetty-server-9.4.8.v20171121.jar</p>
<p>
Dependency Hierarchy:
- spark-core-2.7.2.jar (Root Library)
- :x: **jetty-server-9.4.8.v20171121.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-http-9.4.8.v20171121.jar</b></p></summary>
<p>The Eclipse Jetty Project</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /tmp/ws-scm/jdb/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-http/9.4.8.v20171121/jetty-http-9.4.8.v20171121.jar</p>
<p>
Dependency Hierarchy:
- spark-core-2.7.2.jar (Root Library)
- jetty-server-9.4.8.v20171121.jar
- :x: **jetty-http-9.4.8.v20171121.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/jdb/commit/9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c">9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty Server, versions 9.2.x and older, 9.3.x (all non HTTP/1.x configurations), and 9.4.x (all HTTP/1.x configurations), when presented with two content-lengths headers, Jetty ignored the second. When presented with a content-length and a chunked encoding header, the content-length was ignored (as per RFC 2616). If an intermediary decided on the shorter length, but still passed on the longer body, then body content could be interpreted by Jetty as a pipelined request. If the intermediary was imposing authorization, the fake pipelined request would bypass that authorization.
<p>Publish Date: 2018-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7658>CVE-2017-7658</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7658">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7658</a></p>
<p>Release Date: 2018-06-26</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty.aggregate:jetty-client:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty:jetty-http:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.eclipse.jetty","packageName":"jetty-server","packageVersion":"9.4.8.v20171121","isTransitiveDependency":true,"dependencyTree":"com.sparkjava:spark-core:2.7.2;org.eclipse.jetty:jetty-server:9.4.8.v20171121","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.eclipse.jetty:jetty-server:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty.aggregate:jetty-client:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty:jetty-http:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606"},{"packageType":"Java","groupId":"org.eclipse.jetty","packageName":"jetty-http","packageVersion":"9.4.8.v20171121","isTransitiveDependency":true,"dependencyTree":"com.sparkjava:spark-core:2.7.2;org.eclipse.jetty:jetty-server:9.4.8.v20171121;org.eclipse.jetty:jetty-http:9.4.8.v20171121","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.eclipse.jetty:jetty-server:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty.aggregate:jetty-client:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606;org.eclipse.jetty:jetty-http:9.4.11.v20180605,9.3.24.v20180605,9.2.25.v20180606"}],"vulnerabilityIdentifier":"CVE-2017-7658","vulnerabilityDetails":"In Eclipse Jetty Server, versions 9.2.x and older, 9.3.x (all non HTTP/1.x configurations), and 9.4.x (all HTTP/1.x configurations), when presented with two content-lengths headers, Jetty ignored the second. When presented with a content-length and a chunked encoding header, the content-length was ignored (as per RFC 2616). If an intermediary decided on the shorter length, but still passed on the longer body, then body content could be interpreted by Jetty as a pipelined request. 
If the intermediary was imposing authorization, the fake pipelined request would bypass that authorization.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7658","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in jetty server jar jetty http jar cve high severity vulnerability vulnerable libraries jetty server jar jetty http jar jetty server jar the core jetty server artifact library home page a href path to dependency file tmp ws scm jdb pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy spark core jar root library x jetty server jar vulnerable library jetty http jar the eclipse jetty project library home page a href path to dependency file tmp ws scm jdb pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty http jetty http jar dependency hierarchy spark core jar root library jetty server jar x jetty http jar vulnerable library found in head commit a href vulnerability details in eclipse jetty server versions x and older x all non http x configurations and x all http x configurations when presented with two content lengths headers jetty ignored the second when presented with a content length and a chunked encoding header the content length was ignored as per rfc if an intermediary decided on the shorter length but still passed on the longer body then body content could be interpreted by jetty as a pipelined request if the intermediary was imposing authorization the fake pipelined request would bypass that authorization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org eclipse jetty jetty server org eclipse jetty aggregate jetty client org eclipse jetty jetty http isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in 
eclipse jetty server versions x and older x all non http x configurations and x all http x configurations when presented with two content lengths headers jetty ignored the second when presented with a content length and a chunked encoding header the content length was ignored as per rfc if an intermediary decided on the shorter length but still passed on the longer body then body content could be interpreted by jetty as a pipelined request if the intermediary was imposing authorization the fake pipelined request would bypass that authorization vulnerabilityurl
| 0
|
249,497
| 21,163,557,983
|
IssuesEvent
|
2022-04-07 11:38:19
|
ita-social-projects/EventsExpress
|
https://api.github.com/repos/ita-social-projects/EventsExpress
|
opened
|
Verify that any user is able to filter events by 'Category'
|
test case
|
**Date Time**
04-07-2022 14:00
**Priority**
Medium
**Description**
Any user can go to the website, choose 'Category' filters, apply them and have corresponding events displayed.
**Test Cases**
| S# | Action | Test Data | Expected Result | Actual Result | P/F | Automated |
|:-----------:|:-----------:|:-----------:|:---------------:|:-------------:|:-----------:|:-----------:|
| 1 | Go to https://eventsexpress-test.azurewebsites.net/landing | <br> | <br> | <br> | <br> | <br> |
| 2 | Press 'Find Event' button | <br> | User is redirected to Events search page | <br> | <br> | <br> |
| 3 | Press on 'Filters' button in the top right corner | <br> | Modal window with filters is open | <br> | <br> | <br> |
| 4 | Open 'Category' filter | <br> | The categories, by which filtering can be performed, are displayed | <br> | <br> | <br> |
| 5 | Select 'Art&Craft' category | <br> | Hobbies, that represent the category, are displayed | <br> | <br> | <br> |
| 6 | Select 'Drawing' option from the hobby list | <br> | The option is checked by green stick | <br> | <br> | <br> |
| 7 | Press 'Apply' button | <br> | The events, that meet the filter criteria, are displayed | <br> | <br> | <br> |
**Environment:**
- OS Windows 10
- Browser Chrome
- Version 100.0.4896.75 (Official Build) (64-bit)
User story #983
|
1.0
|
Verify that any user is able to filter events by 'Category' - **Date Time**
04-07-2022 14:00
**Priority**
Medium
**Description**
Any user can go to the website, choose 'Category' filters, apply them and have corresponding events displayed.
**Test Cases**
| S# | Action | Test Data | Expected Result | Actual Result | P/F | Automated |
|:-----------:|:-----------:|:-----------:|:---------------:|:-------------:|:-----------:|:-----------:|
| 1 | Go to https://eventsexpress-test.azurewebsites.net/landing | <br> | <br> | <br> | <br> | <br> |
| 2 | Press 'Find Event' button | <br> | User is redirected to Events search page | <br> | <br> | <br> |
| 3 | Press on 'Filters' button in the top right corner | <br> | Modal window with filters is open | <br> | <br> | <br> |
| 4 | Open 'Category' filter | <br> | The categories, by which filtering can be performed, are displayed | <br> | <br> | <br> |
| 5 | Select 'Art&Craft' category | <br> | Hobbies, that represent the category, are displayed | <br> | <br> | <br> |
| 6 | Select 'Drawing' option from the hobby list | <br> | The option is checked by green stick | <br> | <br> | <br> |
| 7 | Press 'Apply' button | <br> | The events, that meet the filter criteria, are displayed | <br> | <br> | <br> |
**Environment:**
- OS Windows 10
- Browser Chrome
- Version 100.0.4896.75 (Official Build) (64-bit)
User story #983
|
test
|
verify that any user is able to filter events by category date time priority medium description any user can go to the website choose category filters apply them and have corresponding events displayed test cases s action test data expected result actual result p f automated go to press find event button user is redirected to events search page press on filters button in the top right corner modal window with filters is open open category filter the categories by which filtering can be performed are displayed select art craft category hobbies that represent the category are displayed select drawing option from the hobby list the option is checked by green stick press apply button the events that meet the filter criteria are displayed environment os windows browser chrome version official build bit user story
| 1
|
331,855
| 29,144,612,677
|
IssuesEvent
|
2023-05-18 00:57:41
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_call_parent_non_class_methods_from_child_dynamic_shapes (__main__.DynamicShapesMiscTests)
|
triaged module: flaky-tests skipped module: dynamo
|
Platforms: asan, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_call_parent_non_class_methods_from_child_dynamic_shapes&suite=DynamicShapesMiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_call_parent_non_class_methods_from_child_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py` or `dynamo/test_dynamic_shapes.py`
|
1.0
|
DISABLED test_call_parent_non_class_methods_from_child_dynamic_shapes (__main__.DynamicShapesMiscTests) - Platforms: asan, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_call_parent_non_class_methods_from_child_dynamic_shapes&suite=DynamicShapesMiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_call_parent_non_class_methods_from_child_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py` or `dynamo/test_dynamic_shapes.py`
|
test
|
disabled test call parent non class methods from child dynamic shapes main dynamicshapesmisctests platforms asan linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test call parent non class methods from child dynamic shapes there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path dynamo test dynamic shapes py or dynamo test dynamic shapes py
| 1
|
115,268
| 14,707,528,520
|
IssuesEvent
|
2021-01-04 21:46:46
|
nextcloud/files_pdfviewer
|
https://api.github.com/repos/nextcloud/files_pdfviewer
|
closed
|
Very strange empty space at the bottom of the page in 20.0.2
|
0. Needs triage 20-feedback bug design needs info
|
<!--
Thanks for reporting issues back to Nextcloud! This is the issue tracker of the PDF viewer app, if you have any support question please check out https://help.nextcloud.com/
Find other components at https://github.com/nextcloud/core/blob/master/CONTRIBUTING.md#guidelines
To make it possible for us to help you please fill out below information carefully.
-->
https://github.com/nextcloud/files_pdfviewer/issues/268 Got closed because someone complained I did no use a template.
There is a very unprofessionally looking glitch in the PDF viewer. I'm quite overwhelmed by work right now. So this is the last report I'm able to write in a while. If it doesn't meet your standards, just feel free to let your product looked half broken.
### Steps to reproduce
1. click on a PDF
### Expected behaviour
There is no strange uncovered space at the bottom
### Actual behaviour
There is strange uncovered space at the bottom
### Server configuration
<!--
You can use the Issue Template application to prefill most of the required information: https://apps.nextcloud.com/apps/issuetemplate
-->
**Operating system**:
Ubuntu 20.04
**Web server:**
Apache
**Database:**
MariaDB
**PHP version:**
**Nextcloud version:** (see Nextcloud admin page)
20.02
**Where did you install Nextcloud from:**
Have been upgrading since 15.0
**List of activated apps:**
Breeze Dark
|
1.0
|
Very strange empty space at the bottom of the page in 20.0.2 - <!--
Thanks for reporting issues back to Nextcloud! This is the issue tracker of the PDF viewer app, if you have any support question please check out https://help.nextcloud.com/
Find other components at https://github.com/nextcloud/core/blob/master/CONTRIBUTING.md#guidelines
To make it possible for us to help you please fill out below information carefully.
-->
https://github.com/nextcloud/files_pdfviewer/issues/268 Got closed because someone complained I did no use a template.
There is a very unprofessionally looking glitch in the PDF viewer. I'm quite overwhelmed by work right now. So this is the last report I'm able to write in a while. If it doesn't meet your standards, just feel free to let your product looked half broken.
### Steps to reproduce
1. click on a PDF
### Expected behaviour
There is no strange uncovered space at the bottom
### Actual behaviour
There is strange uncovered space at the bottom
### Server configuration
<!--
You can use the Issue Template application to prefill most of the required information: https://apps.nextcloud.com/apps/issuetemplate
-->
**Operating system**:
Ubuntu 20.04
**Web server:**
Apache
**Database:**
MariaDB
**PHP version:**
**Nextcloud version:** (see Nextcloud admin page)
20.02
**Where did you install Nextcloud from:**
Have been upgrading since 15.0
**List of activated apps:**
Breeze Dark
|
non_test
|
very strange empty space at the bottom of the page in thanks for reporting issues back to nextcloud this is the issue tracker of the pdf viewer app if you have any support question please check out find other components at to make it possible for us to help you please fill out below information carefully got closed because someone complained i did no use a template there is a very unprofessionally looking glitch in the pdf viewer i m quite overwhelmed by work right now so this is the last report i m able to write in a while if it doesn t meet your standards just feel free to let your product looked half broken steps to reproduce click on a pdf expected behaviour there is no strange uncovered space at the bottom actual behaviour there is strange uncovered space at the bottom server configuration you can use the issue template application to prefill most of the required information operating system ubuntu web server apache database mariadb php version nextcloud version see nextcloud admin page where did you install nextcloud from have been upgrading since list of activated apps breeze dark
| 0
|
29,746
| 4,535,654,449
|
IssuesEvent
|
2016-09-08 18:01:03
|
umts/pvta-multiplatform
|
https://api.github.com/repos/umts/pvta-multiplatform
|
closed
|
Tests for Stop
|
testing-controller
|
We need to build out our testing coverage. The scope of this issue is unit tests for the functions defined within the Stop controller.
|
1.0
|
Tests for Stop - We need to build out our testing coverage. The scope of this issue is unit tests for the functions defined within the Stop controller.
|
test
|
tests for stop we need to build out our testing coverage the scope of this issue is unit tests for the functions defined within the stop controller
| 1
|
293,233
| 8,973,848,311
|
IssuesEvent
|
2019-01-29 22:12:22
|
nasa-jpl/LiveViewOpenSource
|
https://api.github.com/repos/nasa-jpl/LiveViewOpenSource
|
closed
|
unix domain socket listen error when running from the command line
|
bug high priority
|
LiveView fails to open issuing a "listen: invalid argument" error when attempting to run LiveView from a directory that is not the build directory.
|
1.0
|
unix domain socket listen error when running from the command line - LiveView fails to open issuing a "listen: invalid argument" error when attempting to run LiveView from a directory that is not the build directory.
|
non_test
|
unix domain socket listen error when running from the command line liveview fails to open issuing a listen invalid argument error when attempting to run liveview from a directory that is not the build directory
| 0
|
407,783
| 11,937,371,770
|
IssuesEvent
|
2020-04-02 12:04:00
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Endless Spinner when using Raw Data
|
.Need More Info Priority:P2 Querying/GUI Type:Bug
|
When using the Raw Data Option or when using "View these ..."-Feature, the query seems to be executed but there is no result shown, just the spinner.
In all browsers used, i get this:
```
reactProdInvariant.js:29 Uncaught (in promise) Error: Minified React error #31; visit http://facebook.github.io/react/docs/error-decoder.html?invariant=31&args[]=TypeError%3A%20Cannot%20read%20property%20'rows'%20of%20undefined&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
at reactProdInvariant (reactProdInvariant.js:29)
at traverseAllChildrenImpl (traverseAllChildren.js:142)
at traverseAllChildren (traverseAllChildren.js:170)
at Object.instantiateChildren (ReactChildReconciler.js:72)
at ReactDOMComponent._reconcilerInstantiateChildren (ReactMultiChild.js:189)
at ReactDOMComponent.mountChildren (ReactMultiChild.js:222)
at ReactDOMComponent._createInitialChildren (ReactDOMComponent.js:701)
at ReactDOMComponent.mountComponent (ReactDOMComponent.js:520)
at Object.mountComponent (ReactReconciler.js:43)
at ReactDOMComponent.mountChildren (ReactMultiChild.js:234)
```
Additional in Firefox i got this error:
```
TypeError: internalInstance is null[Weitere Informationen] ReactReconciler.js:60
getHostNode ReactReconciler.js:60
getHostNode ReactCompositeComponent.js:381
getHostNode ReactReconciler.js:60
updateChildren ReactChildReconciler.js:111
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
_updateRenderedComponent ReactCompositeComponent.js:751
_performComponentUpdate ReactCompositeComponent.js:721
updateComponent ReactCompositeComponent.js:642
receiveComponent ReactCompositeComponent.js:544
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
_updateRenderedComponent ReactCompositeComponent.js:751
_performComponentUpdate ReactCompositeComponent.js:721
updateComponent ReactCompositeComponent.js:642
receiveComponent ReactCompositeComponent.js:544
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
_updateRenderedComponent ReactCompositeComponent.js:751
_performComponentUpdate ReactCompositeComponent.js:721
updateComponent ReactCompositeComponent.js:642
performUpdateIfNecessary ReactCompositeComponent.js:558
performUpdateIfNecessary ReactReconciler.js:154
runBatchedUpdates ReactUpdates.js:148
perform Transaction.js:141
perform Transaction.js:141
perform ReactUpdates.js:87
flushBatchedUpdates ReactUpdates.js:170
flushBatchedUpdates self-hosted:991:17
closeAll Transaction.js:207
perform Transaction.js:154
batchedUpdates ReactDefaultBatchingStrategy.js:60
enqueueUpdate ReactUpdates.js:198
enqueueUpdate ReactUpdateQueue.js:22
enqueueForceUpdate ReactUpdateQueue.js:154
ReactComponent.prototype.forceUpdate ReactBaseClasses.js:83
<anonym> self-hosted:989:17
later underscore.js:828
```
- browser and the version: Chrome 65.0.3325.162, Firefox 59.0.3, IE 11
- operating system: macOS 10.13.4, Windows 7
- database: MySQL Aurora
- Metabase version: 0.29.3
- Metabase hosting environment: Elastic Beanstalk
- Metabase internal database: Postgres
|
1.0
|
Endless Spinner when using Raw Data - When using the Raw Data Option or when using "View these ..."-Feature, the query seems to be executed but there is no result shown, just the spinner.
In all browsers used, I get this:
```
reactProdInvariant.js:29 Uncaught (in promise) Error: Minified React error #31; visit http://facebook.github.io/react/docs/error-decoder.html?invariant=31&args[]=TypeError%3A%20Cannot%20read%20property%20'rows'%20of%20undefined&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
at reactProdInvariant (reactProdInvariant.js:29)
at traverseAllChildrenImpl (traverseAllChildren.js:142)
at traverseAllChildren (traverseAllChildren.js:170)
at Object.instantiateChildren (ReactChildReconciler.js:72)
at ReactDOMComponent._reconcilerInstantiateChildren (ReactMultiChild.js:189)
at ReactDOMComponent.mountChildren (ReactMultiChild.js:222)
at ReactDOMComponent._createInitialChildren (ReactDOMComponent.js:701)
at ReactDOMComponent.mountComponent (ReactDOMComponent.js:520)
at Object.mountComponent (ReactReconciler.js:43)
at ReactDOMComponent.mountChildren (ReactMultiChild.js:234)
```
Additionally, in Firefox I got this error:
```
TypeError: internalInstance is null[Weitere Informationen] ReactReconciler.js:60
getHostNode ReactReconciler.js:60
getHostNode ReactCompositeComponent.js:381
getHostNode ReactReconciler.js:60
updateChildren ReactChildReconciler.js:111
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
_updateRenderedComponent ReactCompositeComponent.js:751
_performComponentUpdate ReactCompositeComponent.js:721
updateComponent ReactCompositeComponent.js:642
receiveComponent ReactCompositeComponent.js:544
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
_updateRenderedComponent ReactCompositeComponent.js:751
_performComponentUpdate ReactCompositeComponent.js:721
updateComponent ReactCompositeComponent.js:642
receiveComponent ReactCompositeComponent.js:544
receiveComponent ReactReconciler.js:122
updateChildren ReactChildReconciler.js:107
_reconcilerUpdateChildren ReactMultiChild.js:209
_updateChildren ReactMultiChild.js:308
updateChildren ReactMultiChild.js:295
_updateDOMChildren ReactDOMComponent.js:944
updateComponent ReactDOMComponent.js:758
receiveComponent ReactDOMComponent.js:720
receiveComponent ReactReconciler.js:122
_updateRenderedComponent ReactCompositeComponent.js:751
_performComponentUpdate ReactCompositeComponent.js:721
updateComponent ReactCompositeComponent.js:642
performUpdateIfNecessary ReactCompositeComponent.js:558
performUpdateIfNecessary ReactReconciler.js:154
runBatchedUpdates ReactUpdates.js:148
perform Transaction.js:141
perform Transaction.js:141
perform ReactUpdates.js:87
flushBatchedUpdates ReactUpdates.js:170
flushBatchedUpdates self-hosted:991:17
closeAll Transaction.js:207
perform Transaction.js:154
batchedUpdates ReactDefaultBatchingStrategy.js:60
enqueueUpdate ReactUpdates.js:198
enqueueUpdate ReactUpdateQueue.js:22
enqueueForceUpdate ReactUpdateQueue.js:154
ReactComponent.prototype.forceUpdate ReactBaseClasses.js:83
<anonym> self-hosted:989:17
later underscore.js:828
```
- browser and the version: Chrome 65.0.3325.162, Firefox 59.0.3, IE 11
- operating system: macOS 10.13.4, Windows 7
- database: MySQL Aurora
- Metabase version: 0.29.3
- Metabase hosting environment: Elastic Beanstalk
- Metabase internal database: Postgres
|
non_test
|
endless spinner when using raw data when using the raw data option or when using view these feature the query seems to be executed but there is no result shown just the spinner in all browsers used i get this reactprodinvariant js uncaught in promise error minified react error visit typeerror rows args for the full message or use the non minified dev environment for full errors and additional helpful warnings at reactprodinvariant reactprodinvariant js at traverseallchildrenimpl traverseallchildren js at traverseallchildren traverseallchildren js at object instantiatechildren reactchildreconciler js at reactdomcomponent reconcilerinstantiatechildren reactmultichild js at reactdomcomponent mountchildren reactmultichild js at reactdomcomponent createinitialchildren reactdomcomponent js at reactdomcomponent mountcomponent reactdomcomponent js at object mountcomponent reactreconciler js at reactdomcomponent mountchildren reactmultichild js additional in firefox i got this error typeerror internalinstance is null reactreconciler js gethostnode reactreconciler js gethostnode reactcompositecomponent js gethostnode reactreconciler js updatechildren reactchildreconciler js reconcilerupdatechildren reactmultichild js updatechildren reactmultichild js updatechildren reactmultichild js updatedomchildren reactdomcomponent js updatecomponent reactdomcomponent js receivecomponent reactdomcomponent js receivecomponent reactreconciler js updatechildren reactchildreconciler js reconcilerupdatechildren reactmultichild js updatechildren reactmultichild js updatechildren reactmultichild js updatedomchildren reactdomcomponent js updatecomponent reactdomcomponent js receivecomponent reactdomcomponent js receivecomponent reactreconciler js updaterenderedcomponent reactcompositecomponent js performcomponentupdate reactcompositecomponent js updatecomponent reactcompositecomponent js receivecomponent reactcompositecomponent js receivecomponent reactreconciler js updatechildren 
reactchildreconciler js reconcilerupdatechildren reactmultichild js updatechildren reactmultichild js updatechildren reactmultichild js updatedomchildren reactdomcomponent js updatecomponent reactdomcomponent js receivecomponent reactdomcomponent js receivecomponent reactreconciler js updatechildren reactchildreconciler js reconcilerupdatechildren reactmultichild js updatechildren reactmultichild js updatechildren reactmultichild js updatedomchildren reactdomcomponent js updatecomponent reactdomcomponent js receivecomponent reactdomcomponent js receivecomponent reactreconciler js updatechildren reactchildreconciler js reconcilerupdatechildren reactmultichild js updatechildren reactmultichild js updatechildren reactmultichild js updatedomchildren reactdomcomponent js updatecomponent reactdomcomponent js receivecomponent reactdomcomponent js receivecomponent reactreconciler js updaterenderedcomponent reactcompositecomponent js performcomponentupdate reactcompositecomponent js updatecomponent reactcompositecomponent js receivecomponent reactcompositecomponent js receivecomponent reactreconciler js updatechildren reactchildreconciler js reconcilerupdatechildren reactmultichild js updatechildren reactmultichild js updatechildren reactmultichild js updatedomchildren reactdomcomponent js updatecomponent reactdomcomponent js receivecomponent reactdomcomponent js receivecomponent reactreconciler js updaterenderedcomponent reactcompositecomponent js performcomponentupdate reactcompositecomponent js updatecomponent reactcompositecomponent js performupdateifnecessary reactcompositecomponent js performupdateifnecessary reactreconciler js runbatchedupdates reactupdates js perform transaction js perform transaction js perform reactupdates js flushbatchedupdates reactupdates js flushbatchedupdates self hosted closeall transaction js perform transaction js batchedupdates reactdefaultbatchingstrategy js enqueueupdate reactupdates js enqueueupdate reactupdatequeue js 
enqueueforceupdate reactupdatequeue js reactcomponent prototype forceupdate reactbaseclasses js self hosted later underscore js browser and the version chrome firefox ie operating system macos windows database mysql aurora metabase version metabase hosting environment elastic beanstalk metabase internal database postgres
| 0
|
707,020
| 24,291,929,189
|
IssuesEvent
|
2022-09-29 06:57:01
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.npr.org - design is broken
|
browser-firefox priority-important type-tracking-protection-strict engine-gecko
|
<!-- @browser: Firefox 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111511 -->
**URL**: https://www.npr.org/2021/12/27/1068303629/covid-19-omicron-maps-data-states
**Browser / Version**: Firefox 105.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Hi Thanks for FireFox ... wish I was smart enough to be part of such a great project. In the last month or so, maybe after the latest version update (now on 105.0.1 but same issue on 105.0) but not exactly sure when it started, on the NPR website, at least on the indicated page, I do not see the graphs and charts (some of which are normally interactive). The text and headings all look fine ... just don't see the graphics. I don't normally use Bing but was able to use it to view the NPR page, see the graphics and interact with them. I have not noticed any issues on any other web pages outside of the NPR site, and I have not looked for any other NPR pages with similar features that I could try ... hmmm, I should have thought of doing that. This is on an HP ProBook G2 450 with generic Windows 10. Curious problem. I also visit the CDC covid tracker site that has some similar graphics and interaction capabilities and I haven't noticed any issues. Am I a one-off?? That would be scary but anything is possible! Thanks! Hope you can figure something out.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/25eff853-2ccf-4a3d-9289-c63efb5c4dda.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.npr.org - design is broken - <!-- @browser: Firefox 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111511 -->
**URL**: https://www.npr.org/2021/12/27/1068303629/covid-19-omicron-maps-data-states
**Browser / Version**: Firefox 105.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Hi Thanks for FireFox ... wish I was smart enough to be part of such a great project. In the last month or so, maybe after the latest version update (now on 105.0.1 but same issue on 105.0) but not exactly sure when it started, on the NPR website, at least on the indicated page, I do not see the graphs and charts (some of which are normally interactive). The text and headings all look fine ... just don't see the graphics. I don't normally use Bing but was able to use it to view the NPR page, see the graphics and interact with them. I have not noticed any issues on any other web pages outside of the NPR site, and I have not looked for any other NPR pages with similar features that I could try ... hmmm, I should have thought of doing that. This is on an HP ProBook G2 450 with generic Windows 10. Curious problem. I also visit the CDC covid tracker site that has some similar graphics and interaction capabilities and I haven't noticed any issues. Am I a one-off?? That would be scary but anything is possible! Thanks! Hope you can figure something out.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/25eff853-2ccf-4a3d-9289-c63efb5c4dda.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
design is broken url browser version firefox operating system windows tested another browser yes edge problem type design is broken description images not loaded steps to reproduce hi thanks for firefox wish i was smart enough to be part of such a great project in the last month or so maybe after the latest version update now on but same issue on but not exactly sure when it started on the npr website at least on the indicated page i do not see the graphs and charts some of which are normally interactive the text and headings all look fine just don t see the graphics i don t normally use bing but was able to use it to view the npr page see the graphics and interact with them i have not noticed any issues on any other web pages outside of the npr site and i have not looked for any other npr pages with similar features that i could try hmmm i should have thought of doing that this is on a hp probook with generic windows curious problem i also visit the cdc covid tracker site that has some similar graphics and interaction capabilities and i haven t noticed any issues am i a one of that would be scarery but anythings is possible thanks hope you can figure something out view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
304,579
| 26,288,921,244
|
IssuesEvent
|
2023-01-08 06:21:05
|
airbytehq/airbyte
|
https://api.github.com/repos/airbytehq/airbyte
|
closed
|
SAT: print a URL pointing to the test cases
|
type/enhancement area/connectors Acceptance Tests lang/python team/extensibility autoteam
|
## Tell us about the problem you're trying to solve
As a contributor, it's not clear to me why a particular SAT test case is failing.
## Describe the solution you’d like
I would like a URL pointing to an explanation of SAT test cases.
|
1.0
|
SAT: print a URL pointing to the test cases - ## Tell us about the problem you're trying to solve
As a contributor, it's not clear to me why a particular SAT test case is failing.
## Describe the solution you’d like
I would like a URL pointing to an explanation of SAT test cases.
|
test
|
sat print a url pointing to the test cases tell us about the problem you re trying to solve as a contributor it s not clear to me why a particular sat test case is failing describe the solution you’d like i would like a url pointing to an explanation of sat test cases
| 1
|
297,218
| 25,710,231,151
|
IssuesEvent
|
2022-12-07 05:47:47
|
dotnet/msbuild
|
https://api.github.com/repos/dotnet/msbuild
|
opened
|
DeleteFiles function doesn't delete first file directory when second file is in the subfolder of first file
|
bug needs-triage test
|
### Issue Description
There's a bug in the cleanup logic here. Specifically, it creates the source and dest files, and at the end of the test, it calls Helpers.DeleteFiles(sourceFile, destFile); That method loops through each file and deletes it if it exists, then deletes the directory containing it if it's empty...but when we delete the source file, the directory isn't empty; it has the destination folder/file. When we delete the destination file, its folder just contains the destination file, so we delete that. Afterwards, the source folder never gets deleted. That means we can't write to it.
https://github.com/dotnet/msbuild/blob/c5532da3a3c99817e70d95fe9e07302ba72ee523/src/Shared/UnitTests/ObjectModelHelpers.cs#L1818-L1833
|
1.0
|
DeleteFiles function doesn't delete first file directory when second file is in the subfolder of first file - ### Issue Description
There's a bug in the cleanup logic here. Specifically, it creates the source and dest files, and at the end of the test, it calls Helpers.DeleteFiles(sourceFile, destFile); That method loops through each file and deletes it if it exists, then deletes the directory containing it if it's empty...but when we delete the source file, the directory isn't empty; it has the destination folder/file. When we delete the destination file, its folder just contains the destination file, so we delete that. Afterwards, the source folder never gets deleted. That means we can't write to it.
https://github.com/dotnet/msbuild/blob/c5532da3a3c99817e70d95fe9e07302ba72ee523/src/Shared/UnitTests/ObjectModelHelpers.cs#L1818-L1833
|
test
|
deletefiles function doesn t delete first file directory when second file is in the subfolder of first file issue description there s a bug in the cleanup logic here specifically it creates the source and dest files and at the end of the test it calls helpers deletefiles sourcefile destfile that method loops through each file and deletes it if it exists then deletes the directory containing it if it s empty but when we delete the source file the directory isn t empty it has the destination folder file when we delete the destination file its folder just contains the destination file so we delete that afterwards the source folder never gets deleted that means we can t write to it
| 1
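The deletion-order bug described in the msbuild record above can be sketched in a short Python analogue. The helper below is a hypothetical stand-in for `Helpers.DeleteFiles` (names and directory layout are illustrative, not the actual msbuild code): it deletes each file, then removes the file's parent directory only if it is empty at that moment, so a source directory that still contains the destination subfolder is skipped and never revisited.

```python
import os
import tempfile

def delete_files(*paths):
    # Mimics the flawed cleanup: delete each file, then remove its
    # parent directory only if it happens to be empty right then.
    for p in paths:
        if os.path.exists(p):
            os.remove(p)
        parent = os.path.dirname(p)
        if os.path.isdir(parent) and not os.listdir(parent):
            os.rmdir(parent)

# Destination folder is a subfolder of the source folder.
root = tempfile.mkdtemp()
src_dir = os.path.join(root, "src")
dst_dir = os.path.join(src_dir, "dest")
os.makedirs(dst_dir)
src_file = os.path.join(src_dir, "a.txt")
dst_file = os.path.join(dst_dir, "b.txt")
open(src_file, "w").close()
open(dst_file, "w").close()

delete_files(src_file, dst_file)
# When src_file is deleted, src_dir still holds dst_dir, so it is kept.
# When dst_file is deleted, dst_dir is empty and gets removed — but
# src_dir is never re-checked, so the now-empty src_dir is left behind.
```

One fix would be a second pass over the parent directories (or deleting deepest paths first), so emptiness is re-evaluated after all files are gone.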
|
133,972
| 10,868,772,644
|
IssuesEvent
|
2019-11-15 05:11:56
|
owncloud/phoenix
|
https://api.github.com/repos/owncloud/phoenix
|
closed
|
more collaborators tests
|
Acceptance tests Effort: trivial QA-team feature:collaborators
|
Scenarios copied from the old webUI do not work because the workflow changed; see https://github.com/owncloud/enterprise/blob/master/product/oc10/index.md#collaborators
Some tests were implemented, but by far not enough.
missing e.g.
- resharing
- changing collaborators type - #2124
- allow / disallow resharing
- unshare as receiver/sender
- etc.
|
1.0
|
more collaborators tests - Scenarios copied from the old webUI do not work because the workflow changed; see https://github.com/owncloud/enterprise/blob/master/product/oc10/index.md#collaborators
Some tests were implemented, but by far not enough.
missing e.g.
- resharing
- changing collaborators type - #2124
- allow / disallow resharing
- unshare as receiver/sender
- etc.
|
test
|
more collaborators tests scenarios copied from old webui does not work because the workflow changed see some tests were implemented but by far not enough missing e g resharing changing collaborators type allow disallow resharing unshare as receiver sender etc
| 1
|
281,555
| 24,403,707,436
|
IssuesEvent
|
2022-10-05 05:25:24
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
sql/tests: TestRandomSyntaxGeneration failed
|
C-test-failure O-robot branch-release-22.2.0
|
sql/tests.TestRandomSyntaxGeneration [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/6784095?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/6784095?buildTab=artifacts#/) on release-22.2.0 @ [9bb8a7faf624dcd23ce60e2a8a805ef863b10f72](https://github.com/cockroachdb/cockroach/commits/9bb8a7faf624dcd23ce60e2a8a805ef863b10f72):
Random syntax error:
```
rsg_test.go:850: Crash detected: server panic: statement exec timeout
```
Query:
```
DROP OWNED BY DETACHED , ident , SETS , ident;
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #89054 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot T-sql-experience branch-master]
- #87572 sql/tests: TestRandomSyntaxGeneration failed [DROP OWNED BY timeout] [C-test-failure O-robot T-sql-schema branch-release-22.2]
- #77893 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot T-sql-experience branch-release-22.1]
- #74271 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot branch-release-21.2]
- #65210 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot branch-release-21.1]
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomSyntaxGeneration.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
sql/tests: TestRandomSyntaxGeneration failed - sql/tests.TestRandomSyntaxGeneration [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/6784095?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/6784095?buildTab=artifacts#/) on release-22.2.0 @ [9bb8a7faf624dcd23ce60e2a8a805ef863b10f72](https://github.com/cockroachdb/cockroach/commits/9bb8a7faf624dcd23ce60e2a8a805ef863b10f72):
Random syntax error:
```
rsg_test.go:850: Crash detected: server panic: statement exec timeout
```
Query:
```
DROP OWNED BY DETACHED , ident , SETS , ident;
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #89054 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot T-sql-experience branch-master]
- #87572 sql/tests: TestRandomSyntaxGeneration failed [DROP OWNED BY timeout] [C-test-failure O-robot T-sql-schema branch-release-22.2]
- #77893 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot T-sql-experience branch-release-22.1]
- #74271 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot branch-release-21.2]
- #65210 sql/tests: TestRandomSyntaxGeneration failed [C-test-failure O-robot branch-release-21.1]
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomSyntaxGeneration.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
sql tests testrandomsyntaxgeneration failed sql tests testrandomsyntaxgeneration with on release random syntax error rsg test go crash detected server panic statement exec timeout query drop owned by detached ident sets ident help see also same failure on other branches sql tests testrandomsyntaxgeneration failed sql tests testrandomsyntaxgeneration failed sql tests testrandomsyntaxgeneration failed sql tests testrandomsyntaxgeneration failed sql tests testrandomsyntaxgeneration failed cc cockroachdb sql experience
| 1
|
153,387
| 24,120,405,788
|
IssuesEvent
|
2022-09-20 18:10:14
|
bcgov/cloud-pathfinder
|
https://api.github.com/repos/bcgov/cloud-pathfinder
|
opened
|
Wordpress plug in for CPF website
|
UX Service Design
|
**Describe the issue**
The private team has gone through their website design; we would like to connect with the website team to ask them about the WordPress plug-in they used and why, so we can leverage the work they went through in building the CPF website.
**Additional context**
Add any other context, attachments or screenshots
**Definition of done**
- connect to Theressa and ask her about the plug in used
- share outcome to the CPF team and PO
|
1.0
|
Wordpress plug in for CPF website - **Describe the issue**
The private team has gone through their website design; we would like to connect with the website team to ask them about the WordPress plug-in they used and why, so we can leverage the work they went through in building the CPF website.
**Additional context**
Add any other context, attachments or screenshots
**Definition of done**
- connect to Theressa and ask her about the plug in used
- share outcome to the CPF team and PO
|
non_test
|
wordpress plug in for cpf website describe the issue the private team has gone through their website design we would like to connect with the website team to ask them about the wordpress plug in they used and why so we can leverage the work they went through in building the cpf website additional context add any other context attachments or screenshots definition of done connect to theressa and ask her about the plug in used share outcome to the cpf team and po
| 0
|
10,881
| 3,145,574,391
|
IssuesEvent
|
2015-09-14 18:40:01
|
nickdesaulniers/node-nanomsg
|
https://api.github.com/repos/nickdesaulniers/node-nanomsg
|
opened
|
bad form in tests/standalone
|
bug tests
|
`term()` test:
https://github.com/nickdesaulniers/node-nanomsg/blob/master/test/standalone/term.js
Not sure what error gets tested. We're just seding or awking there without calling bind or connect.
|
1.0
|
bad form in tests/standalone - `term()` test:
https://github.com/nickdesaulniers/node-nanomsg/blob/master/test/standalone/term.js
Not sure what error gets tested. We're just seding or awking there without calling bind or connect.
|
test
|
bad form in tests standalone term test not sure what error gets tested we re just seding or awking there without calling bind or connect
| 1
|
83,699
| 15,712,607,821
|
IssuesEvent
|
2021-03-27 12:51:53
|
Symbolk/IntelliMerge-UI
|
https://api.github.com/repos/Symbolk/IntelliMerge-UI
|
closed
|
CVE-2018-20822 (Medium) detected in opennmsopennms-source-26.0.0-1
|
security vulnerability
|
## CVE-2018-20822 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Symbolk/IntelliMerge-UI/commit/1a120f45942616d291864928a8d13e410fcc0ec6">1a120f45942616d291864928a8d13e410fcc0ec6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>IntelliMerge-UI/node_modules/node-sass/src/libsass/src/ast.hpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.5.4 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Complex_Selector::perform in ast.hpp and Sass::Inspect::operator in inspect.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20822>CVE-2018-20822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0;node-sass - 4.13.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-20822 (Medium) detected in opennmsopennms-source-26.0.0-1 - ## CVE-2018-20822 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Symbolk/IntelliMerge-UI/commit/1a120f45942616d291864928a8d13e410fcc0ec6">1a120f45942616d291864928a8d13e410fcc0ec6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>IntelliMerge-UI/node_modules/node-sass/src/libsass/src/ast.hpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.5.4 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Complex_Selector::perform in ast.hpp and Sass::Inspect::operator in inspect.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20822>CVE-2018-20822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0;node-sass - 4.13.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in opennmsopennms source cve medium severity vulnerability vulnerable library opennmsopennms source a java based fault and performance management system library home page a href found in head commit a href found in base branch master vulnerable source files intellimerge ui node modules node sass src libsass src ast hpp vulnerability details libsass allows attackers to cause a denial of service uncontrolled recursion in sass complex selector perform in ast hpp and sass inspect operator in inspect cpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass node sass step up your open source security game with whitesource
| 0
|
64,631
| 6,913,256,312
|
IssuesEvent
|
2017-11-28 14:46:35
|
F5Networks/f5-openstack-agent
|
https://api.github.com/repos/F5Networks/f5-openstack-agent
|
closed
|
Test failed to cleanup singlebigip/test_network.py:test_arp_delete_by_network
|
P3 S3 test-bug
|
* Title: Test failed to cleanup singlebigip/test_network.py:test_arp_delete_by_network
* Attachments:
* Details:
##### Traceback:
```
def teardown():
> system_helper.delete_folder(mgmt_root, default_partition)
test_network.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.tox/singlebigip/local/lib/python2.7/site-packages/f5_openstack_agent/lbaasv2/drivers/bigip/system_helper.py:41: in delete_folder
obj.delete()
../../../.tox/singlebigip/local/lib/python2.7/site-packages/f5/bigip/resource.py:1058: in delete
self._delete(**kwargs)
../../../.tox/singlebigip/local/lib/python2.7/site-packages/f5/bigip/resource.py:1040: in _delete
response = session.delete(delete_uri, **requests_params)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <icontrol.session.iControlRESTSession object at 0x7f518a5545d0>
RIC_base_uri = 'https://10.190.25.59:443/mgmt/tm/sys/folder/~test/', kwargs = {}
partition = '', sub_path = '', suffix = '', identifier = ''
uri_as_parts = False, transform_name = False
REST_uri = 'https://10.190.25.59:443/mgmt/tm/sys/folder/~test/'
@functools.wraps(method)
def wrapper(self, RIC_base_uri, **kwargs):
partition = kwargs.pop('partition', '')
sub_path = kwargs.pop('subPath', '')
suffix = kwargs.pop('suffix', '')
identifier, kwargs = _unique_resource_identifier_from_kwargs(**kwargs)
uri_as_parts = kwargs.pop('uri_as_parts', False)
transform_name = kwargs.pop('transform_name', False)
if uri_as_parts:
REST_uri = generate_bigip_uri(RIC_base_uri, partition, identifier,
sub_path, suffix,
transform_name=transform_name,
**kwargs)
else:
REST_uri = RIC_base_uri
pre_message = "%s WITH uri: %s AND suffix: %s AND kwargs: %s" %\
(method.__name__, REST_uri, suffix, kwargs)
logging.debug(pre_message)
response = method(self, REST_uri, **kwargs)
post_message =\
"RESPONSE::STATUS: %s Content-Type: %s Content-Encoding:"\
" %s\nText: %r" % (response.status_code,
response.headers.get('Content-Type', None),
response.headers.get('Content-Encoding', None),
response.text)
logging.debug(post_message)
if response.status_code not in range(200, 207):
error_message = '%s Unexpected Error: %s for uri: %s\nText: %r' %\
(response.status_code,
response.reason,
response.url,
response.text)
> raise iControlUnexpectedHTTPError(error_message, response=response)
E iControlUnexpectedHTTPError: 400 Unexpected Error: Bad Request for uri: https://10.190.25.59:443/mgmt/tm/sys/folder/~test/
E Text: u'{"code":400,"message":"0107082a:3: All objects must be removed from a partition (test) before the partition may be removed, type ID (6090)","errorStack":[],"apiError":3}'
/usr/local/lib/python2.7/dist-packages/icontrol/session.py:272: iControlUnexpectedHTTPError
```
##### Tests:
`openstack_agent_mitaka_12.1.1-overcloud.90.consoleText`
* Type: errors
* Test: `test_arp_delete_by_network`
* Sum: `0ab3974d6817174d1b73cce9e0826517`
#### OpenStack Release
Mitaka
#### Agent Version
mitaka - 323189b9db8e3298832a1c7b3623ce33adc1a632
#### Operating System
Nightly
#### Deployment
singlebigip agent functional tests
|
1.0
|
Test failed to cleanup singlebigip/test_network.py:test_arp_delete_by_network - * Title: Test failed to cleanup singlebigip/test_network.py:test_arp_delete_by_network
* Attachments:
* Details:
##### Traceback:
```
def teardown():
> system_helper.delete_folder(mgmt_root, default_partition)
test_network.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.tox/singlebigip/local/lib/python2.7/site-packages/f5_openstack_agent/lbaasv2/drivers/bigip/system_helper.py:41: in delete_folder
obj.delete()
../../../.tox/singlebigip/local/lib/python2.7/site-packages/f5/bigip/resource.py:1058: in delete
self._delete(**kwargs)
../../../.tox/singlebigip/local/lib/python2.7/site-packages/f5/bigip/resource.py:1040: in _delete
response = session.delete(delete_uri, **requests_params)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <icontrol.session.iControlRESTSession object at 0x7f518a5545d0>
RIC_base_uri = 'https://10.190.25.59:443/mgmt/tm/sys/folder/~test/', kwargs = {}
partition = '', sub_path = '', suffix = '', identifier = ''
uri_as_parts = False, transform_name = False
REST_uri = 'https://10.190.25.59:443/mgmt/tm/sys/folder/~test/'
@functools.wraps(method)
def wrapper(self, RIC_base_uri, **kwargs):
partition = kwargs.pop('partition', '')
sub_path = kwargs.pop('subPath', '')
suffix = kwargs.pop('suffix', '')
identifier, kwargs = _unique_resource_identifier_from_kwargs(**kwargs)
uri_as_parts = kwargs.pop('uri_as_parts', False)
transform_name = kwargs.pop('transform_name', False)
if uri_as_parts:
REST_uri = generate_bigip_uri(RIC_base_uri, partition, identifier,
sub_path, suffix,
transform_name=transform_name,
**kwargs)
else:
REST_uri = RIC_base_uri
pre_message = "%s WITH uri: %s AND suffix: %s AND kwargs: %s" %\
(method.__name__, REST_uri, suffix, kwargs)
logging.debug(pre_message)
response = method(self, REST_uri, **kwargs)
post_message =\
"RESPONSE::STATUS: %s Content-Type: %s Content-Encoding:"\
" %s\nText: %r" % (response.status_code,
response.headers.get('Content-Type', None),
response.headers.get('Content-Encoding', None),
response.text)
logging.debug(post_message)
if response.status_code not in range(200, 207):
error_message = '%s Unexpected Error: %s for uri: %s\nText: %r' %\
(response.status_code,
response.reason,
response.url,
response.text)
> raise iControlUnexpectedHTTPError(error_message, response=response)
E iControlUnexpectedHTTPError: 400 Unexpected Error: Bad Request for uri: https://10.190.25.59:443/mgmt/tm/sys/folder/~test/
E Text: u'{"code":400,"message":"0107082a:3: All objects must be removed from a partition (test) before the partition may be removed, type ID (6090)","errorStack":[],"apiError":3}'
/usr/local/lib/python2.7/dist-packages/icontrol/session.py:272: iControlUnexpectedHTTPError
```
##### Tests:
`openstack_agent_mitaka_12.1.1-overcloud.90.consoleText`
* Type: errors
* Test: `test_arp_delete_by_network`
* Sum: `0ab3974d6817174d1b73cce9e0826517`
#### OpenStack Release
Mitaka
#### Agent Version
mitaka - 323189b9db8e3298832a1c7b3623ce33adc1a632
#### Operating System
Nightly
#### Deployment
singlebigip agent functional tests
|
test
|
test failed to cleanup singlebigip test network py test arp delete by network title test failed to cleanup singlebigip test network py test arp delete by network attachments details traceback def teardown system helper delete folder mgmt root default partition test network py tox singlebigip local lib site packages openstack agent drivers bigip system helper py in delete folder obj delete tox singlebigip local lib site packages bigip resource py in delete self delete kwargs tox singlebigip local lib site packages bigip resource py in delete response session delete delete uri requests params self ric base uri kwargs partition sub path suffix identifier uri as parts false transform name false rest uri functools wraps method def wrapper self ric base uri kwargs partition kwargs pop partition sub path kwargs pop subpath suffix kwargs pop suffix identifier kwargs unique resource identifier from kwargs kwargs uri as parts kwargs pop uri as parts false transform name kwargs pop transform name false if uri as parts rest uri generate bigip uri ric base uri partition identifier sub path suffix transform name transform name kwargs else rest uri ric base uri pre message s with uri s and suffix s and kwargs s method name rest uri suffix kwargs logging debug pre message response method self rest uri kwargs post message response status s content type s content encoding s ntext r response status code response headers get content type none response headers get content encoding none response text logging debug post message if response status code not in range error message s unexpected error s for uri s ntext r response status code response reason response url response text raise icontrolunexpectedhttperror error message response response e icontrolunexpectedhttperror unexpected error bad request for uri e text u code message all objects must be removed from a partition test before the partition may be removed type id errorstack apierror usr local lib dist packages icontrol session 
py icontrolunexpectedhttperror tests openstack agent mitaka overcloud consoletext type errors test test arp delete by network sum openstack release mitaka agent version mitaka operating system nightly deployment singlebigip agent functional tests
| 1
|
96,633
| 3,971,352,173
|
IssuesEvent
|
2016-05-04 11:30:13
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
JSON Preview is ugly. Can we parse it more nicely
|
Priority-Low
|
Can we embed something for rendering JSON more nicely on the preview page? We could also consider XML at the same time, though this is lower priority.
|
1.0
|
JSON Preview is ugly. Can we parse it more nicely - Can we embed something for rendering JSON more nicely on the preview page? We could also consider XML at the same time, though this is lower priority.
|
non_test
|
json preview is ugly can we parse it more nicely can we embed something for rendering json more nicely on the preview page we could also consider xml at the same time though this is lower priority
| 0
|
284,323
| 24,591,487,232
|
IssuesEvent
|
2022-10-14 02:57:26
|
milvus-io/milvus
|
https://api.github.com/repos/milvus-io/milvus
|
closed
|
[Bug]: [Stability] Milvus crashes if running tests with query, insert and delete requests
|
kind/bug priority/critical-urgent test/stability triage/accepted test/benchmark
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: master-20220930-08ade66c
- Deployment mode(standalone or cluster): standalone
- SDK version(e.g. pymilvus v2.0.0rc2): 2.2.0-dev33
```
### Current Behavior
Milvus crashes during stability test with search, query, insert and delete concurrently
client config: client-random-locust-insert-delete-search-filter
```
{
"config.yaml": "locust_random_performance:
collections:
-
collection_name: sift_10w_128_l2
ni_per: 50000
other_fields: float1
build_index: true
index_type: hnsw
index_param:
M: 8
efConstruction: 200
task:
types:
-
type: query
weight: 10
params:
top_k: 10
nq: 10
search_param:
ef: 16
filters:
-
range: \"{'range': {'float1': {'GT': collection_size * 0.5, 'LT': collection_size * 1}}}\"
-
type: load
weight: 2
-
type: get
weight: 5
params:
ids_length: 10
-
type: insert
weight: 1
params:
ni_per: 1
-
type: delete
weight: 1
params:
ni_per: 1
connection_num: 1
clients_num: 20
spawn_rate: 2
# during_time: 84h
during_time: 72h
"
}
```
### Expected Behavior
no crash and all requests return successfully
### Steps To Reproduce
```markdown
https://argo-workflows.zilliz.cc/workflows/qa/fouram-g864x?tab=workflow&nodeId=fouram-g864x-3353266211&nodePanelView=inputs-outputs
```
### Milvus Log
go to loki for full logs
```
Common labels: {"cluster":"4am","component":"standalone","job":"qa-milvus/milvus","namespace":"qa-milvus","node_name":"4am-node12","pod":"fouram-g864x-1-milvus-standalone-6449cccbfd-cflbs","app":"milvus","container":"standalone"}
Line limit: 1000
Total bytes processed: "668 kB"
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:920 +0x28a
2022-10-03T00:41:40+08:00 created by google.golang.org/grpc.(*Server).serveStreams.func1
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:922 +0x98
2022-10-03T00:41:40+08:00 google.golang.org/grpc.(*Server).serveStreams.func1.2()
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1620 +0xa1b
2022-10-03T00:41:40+08:00 google.golang.org/grpc.(*Server).handleStream(0xc0007d96c0, {0x3e64820, 0xc001ffbba0}, 0xc006e9eb40, 0x0)
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1283 +0xcfd
2022-10-03T00:41:40+08:00 google.golang.org/grpc.(*Server).processUnaryRPC(0xc0007d96c0, {0x3e64820, 0xc001ffbba0}, 0xc006e9eb40, 0xc003825cb0, 0x525b080, 0x0)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/proto/datapb/data_coord.pb.go:6476 +0x138
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/proto/datapb._DataNode_SyncSegments_Handler({0x39bc320?, 0xc000d25220}, {0x3e55720, 0xc006e95620}, 0xc00f696c60, 0xc003825b30)
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:34 +0xbf
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1({0x3e55720, 0xc006e95620}, {0x395cc20, 0xc000c7c0e0}, 0xc000624af0?, 0x37283e0?)
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:25 +0x3a
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x3e55720?, 0xc006e95620?}, {0x395cc20?, 0xc000c7c0e0?})
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/tracing/opentracing/server_interceptors.go:38 +0x16a
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing.UnaryServerInterceptor.func1({0x3e55720, 0xc006e95620}, {0x395cc20, 0xc000c7c0e0}, 0xc006be2f20?, 0xc006be2f40)
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:25 +0x3a
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x3e55720?, 0xc006e956b0?}, {0x395cc20?, 0xc000c7c0e0?})
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/util/logutil/grpc_interceptor.go:22 +0x49
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/util/logutil.UnaryTraceLoggerInterceptor({0x3e55720?, 0xc006e956b0?}, {0x395cc20, 0xc000c7c0e0}, 0x3e42480?, 0xc0074fd0e0)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/proto/datapb/data_coord.pb.go:6474 +0x78
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/proto/datapb._DataNode_SyncSegments_Handler.func1({0x3e55720, 0xc006e95740}, {0x395cc20?, 0xc000c7c0e0})
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/distributed/datanode/service.go:386 +0x2e
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/distributed/datanode.(*Server).SyncSegments(0xf?, {0x3e55720?, 0xc006e95740?}, 0x10?)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/datanode/data_node.go:933 +0x9b6
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/datanode.(*DataNode).SyncSegments(0xc0007a6d00, {0x3e55720, 0xc006e95740}, 0xc000c7c0e0)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/datanode/segment_replica.go:782 +0x9ce
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/datanode.(*SegmentReplica).mergeFlushedSegments(0xc002114180, 0xc006b99a00, 0x60e63d6160101af, {0xc002a17040?, 0x2, 0x1?})
2022-10-03T00:41:40+08:00 goroutine 73423623 [running]:
2022-10-03T00:41:40+08:00
2022-10-03T00:41:40+08:00 [signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x269eece]
2022-10-03T00:41:40+08:00 panic: runtime error: invalid memory address or nil pointer dereference
2022-10-03T00:41:40+08:00 [2022/10/02 16:41:40.963 +00:00] [DEBUG] [proxy/impl.go:676] ["DescribeCollection done"] [traceID=7507708cb906c586] [role=proxy] [MsgID=436395985044768838] [BeginTS=436398693011161091] [EndTS=436398693011161091] [db=] [collection=sift_10w_128_l2]
2022-10-03T00:41:40+08:00 [2022/10/02 16:41:40.963 +00:00] [INFO] [datanode/segment_replica.go:777] ["merge flushed segments"] ["segment ID"=436395985044832688] ["collection ID"=436348556669553987] ["partition ID"=436348556669553988] ["compacted from"="[436348556857559969,436395509919679665]"] [planID=436395985044832687] ["channel name"=by-dev-rootcoord-dml_0_436348556669553987v0]
```
### Anything else?
_No response_
|
2.0
|
[Bug]: [Stability] Milvus crashes if running tests with query, insert and delete requests - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: master-20220930-08ade66c
- Deployment mode(standalone or cluster): standalone
- SDK version(e.g. pymilvus v2.0.0rc2): 2.2.0-dev33
```
### Current Behavior
Milvus crashes during stability test with search, query, insert and delete concurrently
client config: client-random-locust-insert-delete-search-filter
```
{
"config.yaml": "locust_random_performance:
collections:
-
collection_name: sift_10w_128_l2
ni_per: 50000
other_fields: float1
build_index: true
index_type: hnsw
index_param:
M: 8
efConstruction: 200
task:
types:
-
type: query
weight: 10
params:
top_k: 10
nq: 10
search_param:
ef: 16
filters:
-
range: \"{'range': {'float1': {'GT': collection_size * 0.5, 'LT': collection_size * 1}}}\"
-
type: load
weight: 2
-
type: get
weight: 5
params:
ids_length: 10
-
type: insert
weight: 1
params:
ni_per: 1
-
type: delete
weight: 1
params:
ni_per: 1
connection_num: 1
clients_num: 20
spawn_rate: 2
# during_time: 84h
during_time: 72h
"
}
```
### Expected Behavior
no crash and all requests return successfully
### Steps To Reproduce
```markdown
https://argo-workflows.zilliz.cc/workflows/qa/fouram-g864x?tab=workflow&nodeId=fouram-g864x-3353266211&nodePanelView=inputs-outputs
```
### Milvus Log
go to loki for full logs
```
Common labels: {"cluster":"4am","component":"standalone","job":"qa-milvus/milvus","namespace":"qa-milvus","node_name":"4am-node12","pod":"fouram-g864x-1-milvus-standalone-6449cccbfd-cflbs","app":"milvus","container":"standalone"}
Line limit: 1000
Total bytes processed: "668 kB"
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:920 +0x28a
2022-10-03T00:41:40+08:00 created by google.golang.org/grpc.(*Server).serveStreams.func1
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:922 +0x98
2022-10-03T00:41:40+08:00 google.golang.org/grpc.(*Server).serveStreams.func1.2()
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1620 +0xa1b
2022-10-03T00:41:40+08:00 google.golang.org/grpc.(*Server).handleStream(0xc0007d96c0, {0x3e64820, 0xc001ffbba0}, 0xc006e9eb40, 0x0)
2022-10-03T00:41:40+08:00 /go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1283 +0xcfd
2022-10-03T00:41:40+08:00 google.golang.org/grpc.(*Server).processUnaryRPC(0xc0007d96c0, {0x3e64820, 0xc001ffbba0}, 0xc006e9eb40, 0xc003825cb0, 0x525b080, 0x0)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/proto/datapb/data_coord.pb.go:6476 +0x138
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/proto/datapb._DataNode_SyncSegments_Handler({0x39bc320?, 0xc000d25220}, {0x3e55720, 0xc006e95620}, 0xc00f696c60, 0xc003825b30)
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:34 +0xbf
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1({0x3e55720, 0xc006e95620}, {0x395cc20, 0xc000c7c0e0}, 0xc000624af0?, 0x37283e0?)
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:25 +0x3a
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x3e55720?, 0xc006e95620?}, {0x395cc20?, 0xc000c7c0e0?})
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/tracing/opentracing/server_interceptors.go:38 +0x16a
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing.UnaryServerInterceptor.func1({0x3e55720, 0xc006e95620}, {0x395cc20, 0xc000c7c0e0}, 0xc006be2f20?, 0xc006be2f40)
2022-10-03T00:41:40+08:00 /go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:25 +0x3a
2022-10-03T00:41:40+08:00 github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1({0x3e55720?, 0xc006e956b0?}, {0x395cc20?, 0xc000c7c0e0?})
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/util/logutil/grpc_interceptor.go:22 +0x49
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/util/logutil.UnaryTraceLoggerInterceptor({0x3e55720?, 0xc006e956b0?}, {0x395cc20, 0xc000c7c0e0}, 0x3e42480?, 0xc0074fd0e0)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/proto/datapb/data_coord.pb.go:6474 +0x78
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/proto/datapb._DataNode_SyncSegments_Handler.func1({0x3e55720, 0xc006e95740}, {0x395cc20?, 0xc000c7c0e0})
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/distributed/datanode/service.go:386 +0x2e
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/distributed/datanode.(*Server).SyncSegments(0xf?, {0x3e55720?, 0xc006e95740?}, 0x10?)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/datanode/data_node.go:933 +0x9b6
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/datanode.(*DataNode).SyncSegments(0xc0007a6d00, {0x3e55720, 0xc006e95740}, 0xc000c7c0e0)
2022-10-03T00:41:40+08:00 /go/src/github.com/milvus-io/milvus/internal/datanode/segment_replica.go:782 +0x9ce
2022-10-03T00:41:40+08:00 github.com/milvus-io/milvus/internal/datanode.(*SegmentReplica).mergeFlushedSegments(0xc002114180, 0xc006b99a00, 0x60e63d6160101af, {0xc002a17040?, 0x2, 0x1?})
2022-10-03T00:41:40+08:00 goroutine 73423623 [running]:
2022-10-03T00:41:40+08:00
2022-10-03T00:41:40+08:00 [signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x269eece]
2022-10-03T00:41:40+08:00 panic: runtime error: invalid memory address or nil pointer dereference
2022-10-03T00:41:40+08:00 [2022/10/02 16:41:40.963 +00:00] [DEBUG] [proxy/impl.go:676] ["DescribeCollection done"] [traceID=7507708cb906c586] [role=proxy] [MsgID=436395985044768838] [BeginTS=436398693011161091] [EndTS=436398693011161091] [db=] [collection=sift_10w_128_l2]
2022-10-03T00:41:40+08:00 [2022/10/02 16:41:40.963 +00:00] [INFO] [datanode/segment_replica.go:777] ["merge flushed segments"] ["segment ID"=436395985044832688] ["collection ID"=436348556669553987] ["partition ID"=436348556669553988] ["compacted from"="[436348556857559969,436395509919679665]"] [planID=436395985044832687] ["channel name"=by-dev-rootcoord-dml_0_436348556669553987v0]
```
### Anything else?
_No response_
|
test
|
milvus crashes if running tests with query insert and delete requests is there an existing issue for this i have searched the existing issues environment markdown milvus version master deployment mode standalone or cluster standalone sdk version e g pymilvus current behavior milvus crashes during stability test with search query insert and delete concurrently client config client random locust insert delete search filter config yaml locust random performance collections collection name sift ni per other fields build index true index type hnsw index param m efconstruction task types type query weight params top k nq search param ef filters range range gt collection size lt collection size type load weight type get weight params ids length type insert weight params ni per type delete weight params ni per connection num clients num spawn rate during time during time expected behavior no crash and all requests return successfully steps to reproduce markdown milvus log go to loki for full logs common labels cluster component standalone job qa milvus milvus namespace qa milvus node name pod fouram milvus standalone cflbs app milvus container standalone line limit total bytes processed kb go pkg mod google golang org grpc server go created by google golang org grpc server servestreams go pkg mod google golang org grpc server go google golang org grpc server servestreams go pkg mod google golang org grpc server go google golang org grpc server handlestream go pkg mod google golang org grpc server go google golang org grpc server processunaryrpc go src github com milvus io milvus internal proto datapb data coord pb go github com milvus io milvus internal proto datapb datanode syncsegments handler go pkg mod github com grpc ecosystem go grpc middleware chain go github com grpc ecosystem go grpc middleware chainunaryserver go pkg mod github com grpc ecosystem go grpc middleware chain go github com grpc ecosystem go grpc middleware chainunaryserver go pkg mod github com grpc 
ecosystem go grpc middleware tracing opentracing server interceptors go github com grpc ecosystem go grpc middleware tracing opentracing unaryserverinterceptor go pkg mod github com grpc ecosystem go grpc middleware chain go github com grpc ecosystem go grpc middleware chainunaryserver go src github com milvus io milvus internal util logutil grpc interceptor go github com milvus io milvus internal util logutil unarytraceloggerinterceptor go src github com milvus io milvus internal proto datapb data coord pb go github com milvus io milvus internal proto datapb datanode syncsegments handler go src github com milvus io milvus internal distributed datanode service go github com milvus io milvus internal distributed datanode server syncsegments go src github com milvus io milvus internal datanode data node go github com milvus io milvus internal datanode datanode syncsegments go src github com milvus io milvus internal datanode segment replica go github com milvus io milvus internal datanode segmentreplica mergeflushedsegments goroutine panic runtime error invalid memory address or nil pointer dereference anything else no response
| 1
|
178,168
| 13,766,871,178
|
IssuesEvent
|
2020-10-07 15:03:40
|
ZupIT/beagle
|
https://api.github.com/repos/ZupIT/beagle
|
closed
|
[Integration Test] Shared folder for .feature files
|
Integration Test android iOS web
|
To have reusability between platforms and avoid differences in scenarios, as a suggestion we could have a shared folder to centralize the .feature files that can be implemented on all platforms
|
1.0
|
[Integration Test] Shared folder for .feature files - To have reusability between platforms and avoid differences in scenarios, as a suggestion we could have a shared folder to centralize the .feature files that can be implemented on all platforms
|
test
|
shared folder for feature files to have reusability between platforms and avoid differences in scenarios as a suggestion we could have a shared folder to centralize the feature files that can be implemented on all platforms
| 1
|
313,297
| 26,915,534,407
|
IssuesEvent
|
2023-02-07 05:58:44
|
sebastianbergmann/phpunit
|
https://api.github.com/repos/sebastianbergmann/phpunit
|
closed
|
Silenced E_DEPRECATED errors are reported
|
type/bug feature/test-runner version/10
|
| Q | A
| --------------------| ---------------
| PHPUnit version | 10.0.4
| PHP version | 8.2.2
| Installation Method | Composer
#### Summary
Silenced deprecated errors are reported.
#### Current behavior
In BetterReflection is this code:
```
// @ because access to deprecated constant throws deprecated warning
$constantValue = @constant($constantName);
```
Since PHPUnit 10 the deprecated errors are reported:
```
....................DDD.DDDD.DDD.DD.DDD.DD.DDDDDDDDDDDD.DDD 10030 / 13001 ( 77%)
.DDDDDDDDDDDD.DDDDDDDDDD..DD.DD.DDDDD.DDDDDD.DDDD.DDDDDDDDD 10089 / 13001 ( 77%)
DDDDDDDDDD..DDD.DDD.D..DDDDDDD.DDDDDDDDD..DDD.DD.D...DD.DDD 10148 / 13001 ( 78%)
DDD.DDDDDDDD.DD.DDDDDDDDDDD.DDDDDDD.DDDDD.DD.DDDD.DDDDDD..D 10207 / 13001 ( 78%)
```
```
Tests: 13001, Assertions: 73217, Deprecations: 398, Skipped: 5.
```
See https://github.com/Roave/BetterReflection/actions/runs/4107910137/jobs/7087976443
With `--display-deprecations`
```
304) Roave\BetterReflectionTest\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest::testConstantInPhpVersion with data set #4
* Constant FILTER_SANITIZE_STRING is deprecated
* Constant FILTER_SANITIZE_STRIPPED is deprecated
better-reflection\test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php:950
305) Roave\BetterReflectionTest\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest::testConstantInPhpVersion with data set #2
* Constant FILTER_SANITIZE_STRING is deprecated
* Constant FILTER_SANITIZE_STRIPPED is deprecated
better-reflection\test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php:950
306) Roave\BetterReflectionTest\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest::testConstantInPhpVersion with data set #3
* Constant FILTER_SANITIZE_STRING is deprecated
* Constant FILTER_SANITIZE_STRIPPED is deprecated
better-reflection\test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php:950
```
It also looks like the same error is reported many times.
#### How to reproduce
https://github.com/Roave/BetterReflection/pull/1326
Eg.
```
vendor\bin\phpunit test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php --display-deprecations
```
#### Expected behavior
No `D` in progressbar and no deprecations reported.
|
1.0
|
Silenced E_DEPRECATED errors are reported - | Q | A
| --------------------| ---------------
| PHPUnit version | 10.0.4
| PHP version | 8.2.2
| Installation Method | Composer
#### Summary
Silenced deprecated errors are reported.
#### Current behavior
In BetterReflection is this code:
```
// @ because access to deprecated constant throws deprecated warning
$constantValue = @constant($constantName);
```
Since PHPUnit 10 the deprecated errors are reported:
```
....................DDD.DDDD.DDD.DD.DDD.DD.DDDDDDDDDDDD.DDD 10030 / 13001 ( 77%)
.DDDDDDDDDDDD.DDDDDDDDDD..DD.DD.DDDDD.DDDDDD.DDDD.DDDDDDDDD 10089 / 13001 ( 77%)
DDDDDDDDDD..DDD.DDD.D..DDDDDDD.DDDDDDDDD..DDD.DD.D...DD.DDD 10148 / 13001 ( 78%)
DDD.DDDDDDDD.DD.DDDDDDDDDDD.DDDDDDD.DDDDD.DD.DDDD.DDDDDD..D 10207 / 13001 ( 78%)
```
```
Tests: 13001, Assertions: 73217, Deprecations: 398, Skipped: 5.
```
See https://github.com/Roave/BetterReflection/actions/runs/4107910137/jobs/7087976443
With `--display-deprecations`
```
304) Roave\BetterReflectionTest\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest::testConstantInPhpVersion with data set #4
* Constant FILTER_SANITIZE_STRING is deprecated
* Constant FILTER_SANITIZE_STRIPPED is deprecated
better-reflection\test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php:950
305) Roave\BetterReflectionTest\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest::testConstantInPhpVersion with data set #2
* Constant FILTER_SANITIZE_STRING is deprecated
* Constant FILTER_SANITIZE_STRIPPED is deprecated
better-reflection\test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php:950
306) Roave\BetterReflectionTest\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest::testConstantInPhpVersion with data set #3
* Constant FILTER_SANITIZE_STRING is deprecated
* Constant FILTER_SANITIZE_STRIPPED is deprecated
better-reflection\test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php:950
```
It also looks like the same error is reported many times.
#### How to reproduce
https://github.com/Roave/BetterReflection/pull/1326
Eg.
```
vendor\bin\phpunit test\unit\SourceLocator\SourceStubber\PhpStormStubsSourceStubberTest.php --display-deprecations
```
#### Expected behavior
No `D` in progressbar and no deprecations reported.
|
test
|
silenced e deprecated errors are reported q a phpunit version php version installation method composer summary silenced deprecated errors are reported current behavior in betterreflection is this code because access to deprecated constant throws deprecated warning constantvalue constant constantname since phpunit the deprecated errors are reported ddd dddd ddd dd ddd dd dddddddddddd ddd dddddddddddd dddddddddd dd dd ddddd dddddd dddd ddddddddd dddddddddd ddd ddd d ddddddd ddddddddd ddd dd d dd ddd ddd dddddddd dd ddddddddddd ddddddd ddddd dd dddd dddddd d tests assertions deprecations skipped see with display deprecations roave betterreflectiontest sourcelocator sourcestubber phpstormstubssourcestubbertest testconstantinphpversion with data set constant filter sanitize string is deprecated constant filter sanitize stripped is deprecated better reflection test unit sourcelocator sourcestubber phpstormstubssourcestubbertest php roave betterreflectiontest sourcelocator sourcestubber phpstormstubssourcestubbertest testconstantinphpversion with data set constant filter sanitize string is deprecated constant filter sanitize stripped is deprecated better reflection test unit sourcelocator sourcestubber phpstormstubssourcestubbertest php roave betterreflectiontest sourcelocator sourcestubber phpstormstubssourcestubbertest testconstantinphpversion with data set constant filter sanitize string is deprecated constant filter sanitize stripped is deprecated better reflection test unit sourcelocator sourcestubber phpstormstubssourcestubbertest php it also looks the same error is reported many times how to reproduce eg vendor bin phpunit test unit sourcelocator sourcestubber phpstormstubssourcestubbertest php display deprecations expected behavior no d in progressbar and no deprecations reported
| 1
|
174,004
| 13,452,448,292
|
IssuesEvent
|
2020-09-08 22:12:36
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
[Logging v2] - rancher-logging chart yaml error when enabling rke additional source
|
[zube]: To Test alpha-priority/0 area/logging kind/bug-qa status/blocker
|
**What kind of request is this:**
bug
**Steps to reproduce:**
```
helm install rancher-logging-crd https://charts.rancher.io/assets/rancher-logging/rancher-logging-crd-3.4.000.tgz --create-namespace=true --namespace=cattle-logging-system
helm install --create-namespace=true --namespace=cattle-logging-system rancher-logging -f values.yaml https://charts.rancher.io/assets/rancher-logging/rancher-logging-3.4.000.tgz
```
**Result:**
`Error: YAML parse error on rancher-logging/templates/loggings/rke1/logging-rke1.yaml: error converting YAML to JSON: yaml: line 6: mapping values are not allowed in this context`
**Other details that may be helpful:**
`values.yaml`
```yaml
disablePvc: true
additionalLoggingSources:
rke1:
enabled: true
rke2:
enabled: false
k3s:
enabled: false
container_engine: "systemd" # openrc or systemd
images:
config_reloader:
repository: rancher/jimmidyson-configmap-reload
tag: v0.2.2
fluentbit:
repository: rancher/fluent-fluent-bit
tag: 1.5.0
fluentd:
repository: rancher/banzaicloud-fluentd
tag: v1.10.4-alpine-2
syslog_forwarder:
repository: rancher/fluent-bit-out-syslog
tag: 0.1.0
global:
systemDefaultRegistry: ""
```
**Environment information**
- Rancher version: `master-head (09/03/2020)` _ef11fe3b0_
- Installation option: HA k8s
**Cluster information**
- Cluster type: Infrastructure Provider (Downstream DO Linode)
- Machine type: VM 8gb RAM all roles
- Kubernetes version (use `kubectl version`): v1.17.9
|
1.0
|
[Logging v2] - rancher-logging chart yaml error when enabling rke additional source - **What kind of request is this:**
bug
**Steps to reproduce:**
```
helm install rancher-logging-crd https://charts.rancher.io/assets/rancher-logging/rancher-logging-crd-3.4.000.tgz --create-namespace=true --namespace=cattle-logging-system
helm install --create-namespace=true --namespace=cattle-logging-system rancher-logging -f values.yaml https://charts.rancher.io/assets/rancher-logging/rancher-logging-3.4.000.tgz
```
**Result:**
`Error: YAML parse error on rancher-logging/templates/loggings/rke1/logging-rke1.yaml: error converting YAML to JSON: yaml: line 6: mapping values are not allowed in this context`
**Other details that may be helpful:**
`values.yaml`
```yaml
disablePvc: true
additionalLoggingSources:
rke1:
enabled: true
rke2:
enabled: false
k3s:
enabled: false
container_engine: "systemd" # openrc or systemd
images:
config_reloader:
repository: rancher/jimmidyson-configmap-reload
tag: v0.2.2
fluentbit:
repository: rancher/fluent-fluent-bit
tag: 1.5.0
fluentd:
repository: rancher/banzaicloud-fluentd
tag: v1.10.4-alpine-2
syslog_forwarder:
repository: rancher/fluent-bit-out-syslog
tag: 0.1.0
global:
systemDefaultRegistry: ""
```
**Environment information**
- Rancher version: `master-head (09/03/2020)` _ef11fe3b0_
- Installation option: HA k8s
**Cluster information**
- Cluster type: Infrastructure Provider (Downstream DO Linode)
- Machine type: VM 8gb RAM all roles
- Kubernetes version (use `kubectl version`): v1.17.9
|
test
|
rancher logging chart yaml error when enabling rke additional source what kind of request is this bug steps to reproduce helm install rancher logging crd create namespace true namespace cattle logging system helm install create namespace true namespace cattle logging system rancher logging f values yaml result error yaml parse error on rancher logging templates loggings logging yaml error converting yaml to json yaml line mapping values are not allowed in this context other details that may be helpful values yaml yaml disablepvc true additionalloggingsources enabled true enabled false enabled false container engine systemd openrc or systemd images config reloader repository rancher jimmidyson configmap reload tag fluentbit repository rancher fluent fluent bit tag fluentd repository rancher banzaicloud fluentd tag alpine syslog forwarder repository rancher fluent bit out syslog tag global systemdefaultregistry environment information rancher version master head installation option ha cluster information cluster type infrastructure provider downstream do linode machine type vm ram all roles kubernetes version use kubectl version
| 1
|
106,580
| 9,167,577,837
|
IssuesEvent
|
2019-03-02 14:58:19
|
FasterXML/jackson-core
|
https://api.github.com/repos/FasterXML/jackson-core
|
closed
|
ArrayIndexOutOfBoundsException: 1801
|
need-test-case
|
upgrade version?
org.springframework.data.redis.serializer.SerializationException: Could not read JSON: 1801; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1801
at org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer.deserialize(GenericJackson2JsonRedisSerializer.java:119)
at org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer.deserialize(GenericJackson2JsonRedisSerializer.java:98)
at org.springframework.data.redis.core.AbstractOperations.deserializeValue(AbstractOperations.java:315)
at org.springframework.data.redis.core.AbstractOperations$ValueDeserializingRedisCallback.doInRedis(AbstractOperations.java:55)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:202)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:164)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:88)
at org.springframework.data.redis.core.DefaultValueOperations.get(DefaultValueOperations.java:43)
at com.tuya.basic.service.sms.SmsService.getRandomCodeWithCount(SmsService.java:267)
at com.tuya.basic.service.sms.SmsService.sendConfirmCode(SmsService.java:400)
at com.alibaba.dubbo.common.bytecode.Wrapper6.invokeMethod(Wrapper6.java)
at com.alibaba.dubbo.rpc.proxy.javassist.JavassistProxyFactory$1.doInvoke(JavassistProxyFactory.java:46)
at com.alibaba.dubbo.rpc.proxy.AbstractProxyInvoker.invoke(AbstractProxyInvoker.java:72)
at com.alibaba.dubbo.rpc.protocol.InvokerWrapper.invoke(InvokerWrapper.java:53)
at com.alibaba.csp.sentinel.adapter.dubbo.SentinelDubboProviderFilter.invoke(SentinelDubboProviderFilter.java:66)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.TimeoutFilter.invoke(TimeoutFilter.java:42)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.protocol.dubbo.filter.TraceFilter.invoke(TraceFilter.java:78)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.tuya.basic.dubbo.ext.filter.InvocationTimeFilter.invoke(InvocationTimeFilter.java:31)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:75)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.tuya.basic.dubbo.ext.filter.TuyaExceptionFilter.invoke(TuyaExceptionFilter.java:67)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.ContextFilter.invoke(ContextFilter.java:60)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.GenericFilter.invoke(GenericFilter.java:132)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.ClassLoaderFilter.invoke(ClassLoaderFilter.java:38)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.EchoFilter.invoke(EchoFilter.java:38)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:108)
at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.handleRequest(HeaderExchangeHandler.java:84)
at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:170)
at com.alibaba.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:52)
at com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1801
at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:138)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._loadMore(ReaderBasedJsonParser.java:241)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:2345)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:644)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3834)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3783)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2929)
at org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer.deserialize(GenericJackson2JsonRedisSerializer.java:117)
... 41 common frames omitted
version
<!-- Jackson -->
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.8.7</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.8.7</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.8.7</version>
</dependency>
|
1.0
|
ArrayIndexOutOfBoundsException: 1801 - upgrade version?
org.springframework.data.redis.serializer.SerializationException: Could not read JSON: 1801; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1801
at org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer.deserialize(GenericJackson2JsonRedisSerializer.java:119)
at org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer.deserialize(GenericJackson2JsonRedisSerializer.java:98)
at org.springframework.data.redis.core.AbstractOperations.deserializeValue(AbstractOperations.java:315)
at org.springframework.data.redis.core.AbstractOperations$ValueDeserializingRedisCallback.doInRedis(AbstractOperations.java:55)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:202)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:164)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:88)
at org.springframework.data.redis.core.DefaultValueOperations.get(DefaultValueOperations.java:43)
at com.tuya.basic.service.sms.SmsService.getRandomCodeWithCount(SmsService.java:267)
at com.tuya.basic.service.sms.SmsService.sendConfirmCode(SmsService.java:400)
at com.alibaba.dubbo.common.bytecode.Wrapper6.invokeMethod(Wrapper6.java)
at com.alibaba.dubbo.rpc.proxy.javassist.JavassistProxyFactory$1.doInvoke(JavassistProxyFactory.java:46)
at com.alibaba.dubbo.rpc.proxy.AbstractProxyInvoker.invoke(AbstractProxyInvoker.java:72)
at com.alibaba.dubbo.rpc.protocol.InvokerWrapper.invoke(InvokerWrapper.java:53)
at com.alibaba.csp.sentinel.adapter.dubbo.SentinelDubboProviderFilter.invoke(SentinelDubboProviderFilter.java:66)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.TimeoutFilter.invoke(TimeoutFilter.java:42)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.protocol.dubbo.filter.TraceFilter.invoke(TraceFilter.java:78)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.tuya.basic.dubbo.ext.filter.InvocationTimeFilter.invoke(InvocationTimeFilter.java:31)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:75)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.tuya.basic.dubbo.ext.filter.TuyaExceptionFilter.invoke(TuyaExceptionFilter.java:67)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.ContextFilter.invoke(ContextFilter.java:60)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.GenericFilter.invoke(GenericFilter.java:132)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.ClassLoaderFilter.invoke(ClassLoaderFilter.java:38)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.filter.EchoFilter.invoke(EchoFilter.java:38)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
at com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:108)
at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.handleRequest(HeaderExchangeHandler.java:84)
at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:170)
at com.alibaba.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:52)
at com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1801
at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:138)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._loadMore(ReaderBasedJsonParser.java:241)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:2345)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:644)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3834)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3783)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2929)
at org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer.deserialize(GenericJackson2JsonRedisSerializer.java:117)
... 41 common frames omitted
version
<!-- Jackson -->
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.13</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.8.7</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.8.7</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.8.7</version>
</dependency>
|
test
|
arrayindexoutofboundsexception upgrade version org springframework data redis serializer serializationexception could not read json nested exception is java lang arrayindexoutofboundsexception at org springframework data redis serializer deserialize java at org springframework data redis serializer deserialize java at org springframework data redis core abstractoperations deserializevalue abstractoperations java at org springframework data redis core abstractoperations valuedeserializingrediscallback doinredis abstractoperations java at org springframework data redis core redistemplate execute redistemplate java at org springframework data redis core redistemplate execute redistemplate java at org springframework data redis core abstractoperations execute abstractoperations java at org springframework data redis core defaultvalueoperations get defaultvalueoperations java at com tuya basic service sms smsservice getrandomcodewithcount smsservice java at com tuya basic service sms smsservice sendconfirmcode smsservice java at com alibaba dubbo common bytecode invokemethod java at com alibaba dubbo rpc proxy javassist javassistproxyfactory doinvoke javassistproxyfactory java at com alibaba dubbo rpc proxy abstractproxyinvoker invoke abstractproxyinvoker java at com alibaba dubbo rpc protocol invokerwrapper invoke invokerwrapper java at com alibaba csp sentinel adapter dubbo sentineldubboproviderfilter invoke sentineldubboproviderfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc filter timeoutfilter invoke timeoutfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc protocol dubbo filter tracefilter invoke tracefilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com tuya basic dubbo ext filter invocationtimefilter invoke invocationtimefilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo monitor support monitorfilter invoke monitorfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com tuya basic dubbo ext filter tuyaexceptionfilter invoke tuyaexceptionfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc filter contextfilter invoke contextfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc filter genericfilter invoke genericfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc filter classloaderfilter invoke classloaderfilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc filter echofilter invoke echofilter java at com alibaba dubbo rpc protocol protocolfilterwrapper invoke protocolfilterwrapper java at com alibaba dubbo rpc protocol dubbo dubboprotocol reply dubboprotocol java at com alibaba dubbo remoting exchange support header headerexchangehandler handlerequest headerexchangehandler java at com alibaba dubbo remoting exchange support header headerexchangehandler received headerexchangehandler java at com alibaba dubbo remoting transport decodehandler received decodehandler java at com alibaba dubbo remoting transport dispatcher channeleventrunnable run channeleventrunnable java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang arrayindexoutofboundsexception at com fasterxml jackson core io read java at com fasterxml jackson core json readerbasedjsonparser loadmore readerbasedjsonparser java at com fasterxml jackson core json readerbasedjsonparser skipwsorend readerbasedjsonparser java at com fasterxml jackson core json readerbasedjsonparser nexttoken readerbasedjsonparser java at com fasterxml jackson databind objectmapper initforreading objectmapper java at com fasterxml jackson databind objectmapper readmapandclose objectmapper java at com fasterxml jackson databind objectmapper readvalue objectmapper java at org springframework data redis serializer deserialize java common frames omitted version org codehaus jackson jackson core asl org codehaus jackson jackson mapper asl com fasterxml jackson core jackson core com fasterxml jackson core jackson databind com fasterxml jackson core jackson annotations
| 1
|
51,581
| 6,180,615,558
|
IssuesEvent
|
2017-07-03 06:36:32
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
opened
|
System.Net.Http.Functional.Tests.ManagedHandler_HttpClientHandler_ClientCertificates_Test failed with "Xunit.Sdk.ThrowsException" in CI
|
area-System.Net.Http test-run-core
|
failed test: System.Net.Http.Functional.Tests.ManagedHandler_HttpClientHandler_ClientCertificates_Test.Manual_SSLBackendNotSupported_ThrowsPlatformNotSupportedException
detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netcoreapp_centos7.1_release/95/testReport/System.Net.Http.Functional.Tests/ManagedHandler_HttpClientHandler_ClientCertificates_Test/Manual_SSLBackendNotSupported_ThrowsPlatformNotSupportedException/
MESSAGE:
~~~
Assert.Throws() Failure
Expected: typeof(System.PlatformNotSupportedException)
Actual: (No exception was thrown)
~~~
STACK TRACE:
~~~
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.Functional.Tests.HttpClientHandler_ClientCertificates_Test.<Manual_SSLBackendNotSupported_ThrowsPlatformNotSupportedException>d__5.MoveNext() in /mnt/resource/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_centos7.1_release/src/System.Net.Http/tests/FunctionalTests/HttpClientHandlerTest.ClientCertificates.cs:line 80 --- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
~~~
|
1.0
|
System.Net.Http.Functional.Tests.ManagedHandler_HttpClientHandler_ClientCertificates_Test failed with "Xunit.Sdk.ThrowsException" in CI - failed test: System.Net.Http.Functional.Tests.ManagedHandler_HttpClientHandler_ClientCertificates_Test.Manual_SSLBackendNotSupported_ThrowsPlatformNotSupportedException
detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netcoreapp_centos7.1_release/95/testReport/System.Net.Http.Functional.Tests/ManagedHandler_HttpClientHandler_ClientCertificates_Test/Manual_SSLBackendNotSupported_ThrowsPlatformNotSupportedException/
MESSAGE:
~~~
Assert.Throws() Failure
Expected: typeof(System.PlatformNotSupportedException)
Actual: (No exception was thrown)
~~~
STACK TRACE:
~~~
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.Functional.Tests.HttpClientHandler_ClientCertificates_Test.<Manual_SSLBackendNotSupported_ThrowsPlatformNotSupportedException>d__5.MoveNext() in /mnt/resource/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_centos7.1_release/src/System.Net.Http/tests/FunctionalTests/HttpClientHandlerTest.ClientCertificates.cs:line 80 --- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
~~~
|
test
|
system net http functional tests managedhandler httpclienthandler clientcertificates test failed with xunit sdk throwsexception in ci failed test system net http functional tests managedhandler httpclienthandler clientcertificates test manual sslbackendnotsupported throwsplatformnotsupportedexception detail message assert throws failure expected typeof system platformnotsupportedexception actual no exception was thrown stack trace end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system net http functional tests httpclienthandler clientcertificates test d movenext in mnt resource j workspace dotnet corefx master outerloop netcoreapp release src system net http tests functionaltests httpclienthandlertest clientcertificates cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task
| 1
|
45,728
| 7,198,372,531
|
IssuesEvent
|
2018-02-05 12:35:02
|
silvershop/silvershop-core
|
https://api.github.com/repos/silvershop/silvershop-core
|
closed
|
How to set up PayPal
|
documentation
|
Feedback from a user:
I found that the installation documentation was missing specifics in terms of what I needed to have (at a minimum) in the mysite/_config.php file to have the module function. After scouring the forum for hours and trying various things I was able to figure it out but felt that if this was included more clearly in the installation documentation I would have saved a ton of time and been more satisfied with the Paypal Payment extension as well as the ease to add extensions to the ecommerce module itself.
|
1.0
|
How to set up PayPal - Feedback from a user:
I found that the installation documentation was missing specifics in terms of what I needed to have (at a minimum) in the mysite/_config.php file to have the module function. After scouring the forum for hours and trying various things I was able to figure it out but felt that if this was included more clearly in the installation documentation I would have saved a ton of time and been more satisfied with the Paypal Payment extension as well as the ease to add extensions to the ecommerce module itself.
|
non_test
|
how to set up paypal feedback from a user i found that the installation documentation was missing specifics in terms of what i needed to have at a minimum in the mysite config php file to have the module function after scouring the forum for hours and trying various things i was able to figure it out but felt that if this was included more clearly in the installation documentation i would have saved a ton of time and been more satisfied with the paypal payment extension as well as the ease to add extensions to the ecommerce module itself
| 0
|
3,814
| 3,573,380,413
|
IssuesEvent
|
2016-01-27 05:57:06
|
cortoproject/corto
|
https://api.github.com/repos/cortoproject/corto
|
opened
|
Add 'auto' declarations to C binding
|
Corto:C binding Corto:Usability
|
In the C binding, users will often find themselves writing code that looks like:
```c
mypackage_Foo obj = mypackage_FooCreateChild(root_o, "obj");
```
This is a bit tedious as there is a lot of redundancy in this statement. In C++ the type of a variable can be inferred from its declaration with the `auto` keyword. In C something similar can be achieved with relatively little effort:
```c
#define mypackage_FooCreateChild_auto(parent, name)\
mypackage_Foo name = mypackage_FooCreateChild(parent, #name)
```
A user will then able to do:
```c
mypackage_FooCreateChild_auto(root_o, obj);
```
which looks a lot nicer and does exactly the same.
|
True
|
Add 'auto' declarations to C binding - In the C binding, users will often find themselves writing code that looks like:
```c
mypackage_Foo obj = mypackage_FooCreateChild(root_o, "obj");
```
This is a bit tedious as there is a lot of redundancy in this statement. In C++ the type of a variable can be inferred from its declaration with the `auto` keyword. In C something similar can be achieved with relatively little effort:
```c
#define mypackage_FooCreateChild_auto(parent, name)\
mypackage_Foo name = mypackage_FooCreateChild(parent, #name)
```
A user will then able to do:
```c
mypackage_FooCreateChild_auto(root_o, obj);
```
which looks a lot nicer and does exactly the same.
|
non_test
|
add auto declarations to c binding in the c binding users will often find themselves writing code that looks like c mypackage foo obj mypackage foocreatechild root o obj this is a bit tedious as there is a lot of redundancy in this statement in c the type of a variable can be inferred from its declaration with the auto keyword in c something similar can be achieved with relatively little effort c define mypackage foocreatechild auto parent name mypackage foo name mypackage foocreatechild parent name a user will then able to do c mypackage foocreatechild auto root o obj which looks a lot nicer and does exactly the same
| 0
|
295,918
| 25,515,382,828
|
IssuesEvent
|
2022-11-28 15:57:57
|
irods/irods_resource_plugin_s3
|
https://api.github.com/repos/irods/irods_resource_plugin_s3
|
closed
|
CI test updates to add remove_if_exists() and move make_arbitrary_file() in a common lib directory.
|
enhancement testing
|
- [x] main
- [x] 4-2-stable
-----
CI Test changes:
1. Add remove_if_exists() in a common lib library for cleanup of files on the filesystem. This can replace all checks if file exists and removal.
2. Move the versions of make_arbitrary_file() into a common lib.
|
1.0
|
CI test updates to add remove_if_exists() and move make_arbitrary_file() in a common lib directory. - - [x] main
- [x] 4-2-stable
-----
CI Test changes:
1. Add remove_if_exists() in a common lib library for cleanup of files on the filesystem. This can replace all checks if file exists and removal.
2. Move the versions of make_arbitrary_file() into a common lib.
|
test
|
ci test updates to add remove if exists and move make arbitrary file in a common lib directory main stable ci test changes add remove if exists in a common lib library for cleanup of files on the filesystem this can replace all checks if file exists and removal move the versions of make arbitrary file into a common lib
| 1
|
3,562
| 4,413,742,227
|
IssuesEvent
|
2016-08-13 01:36:16
|
chartjs/Chart.js
|
https://api.github.com/repos/chartjs/Chart.js
|
opened
|
Composer install does not come with built files
|
Category: Infrastructure Needs Investigation Priority: p1
|
When Chart.js is installed with composer, the built files are not included. The only thing installed is the `master` of a tagged release.
Is there a way to possibly set composer to use the npm release (like what we do with bower)? Do we want to completely drop official support for composer because of the new release system?
As far as I know, Composer is not nearly as popular as any of the other distribution options for installing Chart.js, and users would still be able to install older releases of Chart.js even when new support is dropped.
Also, the package is listed on composer as `nnnick/chartjs` which could (and should) be updated to `chartjs/chartjs` if we choose to keep support.
|
1.0
|
Composer install does not come with built files - When Chart.js is installed with composer, the built files are not included. The only thing installed is the `master` of a tagged release.
Is there a way to possibly set composer to use the npm release (like what we do with bower)? Do we want to completely drop official support for composer because of the new release system?
As far as I know, Composer is not nearly as popular as any of the other distribution options for installing Chart.js, and users would still be able to install older releases of Chart.js even when new support is dropped.
Also, the package is listed on composer as `nnnick/chartjs` which could (and should) be updated to `chartjs/chartjs` if we choose to keep support.
|
non_test
|
composer install does not come with built files when chart js is installed with composer the built files are not included the only thing installed is the master of a tagged release is there a way to possibly set composer to use the npm release like what we do with bower do we want to completely drop official support for composer because of the new release system as far as i know composer is not nearly as popular as any of the other distribution options for installing chart js and users would still be able to install older releases of chart js even when new support is dropped also the package is listed on composer as nnnick chartjs which could and should be updated to chartjs chartjs if we choose to keep support
| 0
|
46,009
| 5,774,732,557
|
IssuesEvent
|
2017-04-28 08:09:13
|
eigenmethod/mol
|
https://api.github.com/repos/eigenmethod/mol
|
closed
|
Chrome. Пропадает чекбокс при наведении в приложении $mol_app_todomvc_demo
|
browser:chrome need testing
|
**Приложение:** $mol_app_todomvc_demo
**Браузер:** Chrome Version 55.0.2883.87 (64-bit)
**Детали:**
При наведении на чекбокс (выполнено/невыполнено) он пропадает. Возможно это такой не совсем удачный hover, но выглядит это довольно странно.

|
1.0
|
Chrome. Пропадает чекбокс при наведении в приложении $mol_app_todomvc_demo - **Приложение:** $mol_app_todomvc_demo
**Браузер:** Chrome Version 55.0.2883.87 (64-bit)
**Детали:**
При наведении на чекбокс (выполнено/невыполнено) он пропадает. Возможно это такой не совсем удачный hover, но выглядит это довольно странно.

|
test
|
chrome пропадает чекбокс при наведении в приложении mol app todomvc demo приложение mol app todomvc demo браузер chrome version bit детали при наведении на чекбокс выполнено невыполнено он пропадает возможно это такой не совсем удачный hover но выглядит это довольно странно
| 1
|
150,959
| 5,794,236,336
|
IssuesEvent
|
2017-05-02 14:29:56
|
GSA/fpki-guides
|
https://api.github.com/repos/GSA/fpki-guides
|
closed
|
Trust Store Management Guide
|
Audience - Engineers general overview Priority - High
|
@dasgituser (Dave Silver) and @tkpk (Giuseppe Cimmino) are converting the FPKI Management Authority''s Trust Store Management Guide to a playbook. The Federal Public Key Infrastructure Management Authority designed and created the Trust Store Management Guide as an education resource for Department, Agency, corporate, and other organizational system level administrators and managers who use the Federal Public Key Infrastructure (FPKI) as part of regular business practices.
|
1.0
|
Trust Store Management Guide - @dasgituser (Dave Silver) and @tkpk (Giuseppe Cimmino) are converting the FPKI Management Authority''s Trust Store Management Guide to a playbook. The Federal Public Key Infrastructure Management Authority designed and created the Trust Store Management Guide as an education resource for Department, Agency, corporate, and other organizational system level administrators and managers who use the Federal Public Key Infrastructure (FPKI) as part of regular business practices.
|
non_test
|
trust store management guide dasgituser dave silver and tkpk giuseppe cimmino are converting the fpki management authority s trust store management guide to a playbook the federal public key infrastructure management authority designed and created the trust store management guide as an education resource for department agency corporate and other organizational system level administrators and managers who use the federal public key infrastructure fpki as part of regular business practices
| 0
|
222,076
| 24,684,235,422
|
IssuesEvent
|
2022-10-19 01:22:04
|
samq-ghdemo/js-monorepo
|
https://api.github.com/repos/samq-ghdemo/js-monorepo
|
opened
|
CVE-2022-3517 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2022-3517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-3.0.3.tgz</b>, <b>minimatch-0.3.0.tgz</b>, <b>minimatch-3.0.2.tgz</b>, <b>minimatch-3.0.4.tgz</b></p></summary>
<p>
<details><summary><b>minimatch-3.0.3.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.3.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.3.tgz</a></p>
<p>Path to dependency file: /NodeGoat/package.json</p>
<p>Path to vulnerable library: /NodeGoat/node_modules/npm/node_modules/read-package-json/node_modules/glob/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/glob/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/init-package-json/node_modules/glob/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/fstream-npm/node_modules/fstream-ignore/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/node-gyp/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- npm-3.10.10.tgz
- read-package-json-2.0.4.tgz
- glob-6.0.4.tgz
- :x: **minimatch-3.0.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-0.3.0.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz</a></p>
<p>Path to dependency file: /NodeGoat/package.json</p>
<p>Path to vulnerable library: /NodeGoat/node_modules/mocha/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- mocha-2.5.3.tgz (Root Library)
- glob-3.2.11.tgz
- :x: **minimatch-0.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-3.0.2.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.2.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.2.tgz</a></p>
<p>Path to dependency file: /NodeGoat/package.json</p>
<p>Path to vulnerable library: /NodeGoat/node_modules/nyc/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- realize-package-specifier-3.0.3.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- glob-7.0.5.tgz
- :x: **minimatch-3.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-3.0.4.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>
Dependency Hierarchy:
- nodemon-1.19.1.tgz (Root Library)
- chokidar-2.1.6.tgz
- fsevents-1.2.9.tgz
- node-pre-gyp-0.12.0.tgz
- npm-packlist-1.4.1.tgz
- ignore-walk-3.0.1.tgz
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/js-monorepo/commit/f3701923c18333c1e4e49bf595dd36b3f186812f">f3701923c18333c1e4e49bf595dd36b3f186812f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-3517 (High) detected in multiple libraries - ## CVE-2022-3517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-3.0.3.tgz</b>, <b>minimatch-0.3.0.tgz</b>, <b>minimatch-3.0.2.tgz</b>, <b>minimatch-3.0.4.tgz</b></p></summary>
<p>
<details><summary><b>minimatch-3.0.3.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.3.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.3.tgz</a></p>
<p>Path to dependency file: /NodeGoat/package.json</p>
<p>Path to vulnerable library: /NodeGoat/node_modules/npm/node_modules/read-package-json/node_modules/glob/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/glob/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/init-package-json/node_modules/glob/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/fstream-npm/node_modules/fstream-ignore/node_modules/minimatch/package.json,/NodeGoat/node_modules/npm/node_modules/node-gyp/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- npm-3.10.10.tgz
- read-package-json-2.0.4.tgz
- glob-6.0.4.tgz
- :x: **minimatch-3.0.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-0.3.0.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz</a></p>
<p>Path to dependency file: /NodeGoat/package.json</p>
<p>Path to vulnerable library: /NodeGoat/node_modules/mocha/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- mocha-2.5.3.tgz (Root Library)
- glob-3.2.11.tgz
- :x: **minimatch-0.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-3.0.2.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.2.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.2.tgz</a></p>
<p>Path to dependency file: /NodeGoat/package.json</p>
<p>Path to vulnerable library: /NodeGoat/node_modules/nyc/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- realize-package-specifier-3.0.3.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- glob-7.0.5.tgz
- :x: **minimatch-3.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-3.0.4.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>
Dependency Hierarchy:
- nodemon-1.19.1.tgz (Root Library)
- chokidar-2.1.6.tgz
- fsevents-1.2.9.tgz
- node-pre-gyp-0.12.0.tgz
- npm-packlist-1.4.1.tgz
- ignore-walk-3.0.1.tgz
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/js-monorepo/commit/f3701923c18333c1e4e49bf595dd36b3f186812f">f3701923c18333c1e4e49bf595dd36b3f186812f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
</details>
<p></p>
|
non_test
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries minimatch tgz minimatch tgz minimatch tgz minimatch tgz minimatch tgz a glob matcher in javascript library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules npm node modules read package json node modules glob node modules minimatch package json nodegoat node modules npm node modules glob node modules minimatch package json nodegoat node modules npm node modules init package json node modules glob node modules minimatch package json nodegoat node modules npm node modules fstream npm node modules fstream ignore node modules minimatch package json nodegoat node modules npm node modules node gyp node modules minimatch package json dependency hierarchy grunt npm install tgz root library npm tgz read package json tgz glob tgz x minimatch tgz vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules mocha node modules minimatch package json dependency hierarchy mocha tgz root library glob tgz x minimatch tgz vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules nyc node modules minimatch package json dependency hierarchy realize package specifier tgz root library grunt contrib nodeunit tgz nodeunit tgz tap tgz nyc tgz glob tgz x minimatch tgz vulnerable library minimatch tgz a glob matcher in javascript library home page a href dependency hierarchy nodemon tgz root library chokidar tgz fsevents tgz node pre gyp tgz npm packlist tgz ignore walk tgz x minimatch tgz vulnerable library found in head commit a href found in base branch main vulnerability details a vulnerability was found in the minimatch package this flaw allows a regular expression denial of service redos when calling the braceexpand function with specific arguments resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimatch
| 0
|
22,653
| 7,198,021,099
|
IssuesEvent
|
2018-02-05 11:11:53
|
QupZilla/qupzilla
|
https://api.github.com/repos/QupZilla/qupzilla
|
closed
|
Build failure
|
Building
|
Clipped session:
``` sh-session
...
cd VerticalTabs/ && ( test -e Makefile || /opt/Qt/5.10.0/gcc_64/bin/qmake -o Makefile /home/user/repo/git/QupZilla/martii/qupzilla/src/plugins/VerticalTabs/VerticalTabs.pro ) && make -f Makefile
make[2]: Entering directory '/home/user/repo/git/QupZilla/martii/qupzilla/src/plugins/VerticalTabs'
/opt/Qt/5.10.0/gcc_64/bin/uic -no-stringliteral verticaltabssettings.ui -o build/ui_verticaltabssettings.h
Could not create output file
make[2]: *** [Makefile:1799: build/ui_verticaltabssettings.h] Error 1
make[2]: Leaving directory '/home/user/repo/git/QupZilla/martii/qupzilla/src/plugins/VerticalTabs'
make[1]: *** [Makefile:253: sub--home-user-repo-git-QupZilla-martii-qupzilla-src-plugins-VerticalTabs-make_first] Error 2
make[1]: Leaving directory '/home/user/repo/git/QupZilla/martii/qupzilla/src/plugins'
make: *** [Makefile:97: sub-src-plugins-make_first] Error 2
```
9e632e8
Linux
|
1.0
|
Build failure - Clipped session:
``` sh-session
...
cd VerticalTabs/ && ( test -e Makefile || /opt/Qt/5.10.0/gcc_64/bin/qmake -o Makefile /home/user/repo/git/QupZilla/martii/qupzilla/src/plugins/VerticalTabs/VerticalTabs.pro ) && make -f Makefile
make[2]: Entering directory '/home/user/repo/git/QupZilla/martii/qupzilla/src/plugins/VerticalTabs'
/opt/Qt/5.10.0/gcc_64/bin/uic -no-stringliteral verticaltabssettings.ui -o build/ui_verticaltabssettings.h
Could not create output file
make[2]: *** [Makefile:1799: build/ui_verticaltabssettings.h] Error 1
make[2]: Leaving directory '/home/user/repo/git/QupZilla/martii/qupzilla/src/plugins/VerticalTabs'
make[1]: *** [Makefile:253: sub--home-user-repo-git-QupZilla-martii-qupzilla-src-plugins-VerticalTabs-make_first] Error 2
make[1]: Leaving directory '/home/user/repo/git/QupZilla/martii/qupzilla/src/plugins'
make: *** [Makefile:97: sub-src-plugins-make_first] Error 2
```
9e632e8
Linux
|
non_test
|
build failure clipped session sh session cd verticaltabs test e makefile opt qt gcc bin qmake o makefile home user repo git qupzilla martii qupzilla src plugins verticaltabs verticaltabs pro make f makefile make entering directory home user repo git qupzilla martii qupzilla src plugins verticaltabs opt qt gcc bin uic no stringliteral verticaltabssettings ui o build ui verticaltabssettings h could not create output file make error make leaving directory home user repo git qupzilla martii qupzilla src plugins verticaltabs make error make leaving directory home user repo git qupzilla martii qupzilla src plugins make error linux
| 0
|
56,071
| 13,752,774,450
|
IssuesEvent
|
2020-10-06 14:53:53
|
srgblnch/skippy
|
https://api.github.com/repos/srgblnch/skippy
|
opened
|
read after a write, bis
|
builder enhancement
|
There is already a solution for instruments that returns an answer to the write actions. This was solved adding an attribute [ReadAfterWrite](https://github.com/srgblnch/skippy/issues/39).
But this solution requires to remember to set this attribute each time a new instrument is installed. But this behavior comes from the model not the instrument particularly, and forces the installation to come with a human remembering to set an attribute (at least once because is memorized).
It could be a better solution to link this with the [feature](https://github.com/srgblnch/skippy/issues/37) to move the instrument identification to within the file that describes it.
If, somehow, mention in the descriptor that the model with behave one way or another, it will safe time (and headache).
By the way, this solution should come with another extension (perhaps a third iteration as this is the second). It is possible that someday we discover an instrument that returns an answer to certain writes but not all of them, so it would be nice if the Attribute() builder can handle it.
|
1.0
|
read after a write, bis - There is already a solution for instruments that returns an answer to the write actions. This was solved adding an attribute [ReadAfterWrite](https://github.com/srgblnch/skippy/issues/39).
But this solution requires to remember to set this attribute each time a new instrument is installed. But this behavior comes from the model not the instrument particularly, and forces the installation to come with a human remembering to set an attribute (at least once because is memorized).
It could be a better solution to link this with the [feature](https://github.com/srgblnch/skippy/issues/37) to move the instrument identification to within the file that describes it.
If, somehow, mention in the descriptor that the model with behave one way or another, it will safe time (and headache).
By the way, this solution should come with another extension (perhaps a third iteration as this is the second). It is possible that someday we discover an instrument that returns an answer to certain writes but not all of them, so it would be nice if the Attribute() builder can handle it.
|
non_test
|
read after a write bis there is already a solution for instruments that returns an answer to the write actions this was solved adding an attribute but this solution requires to remember to set this attribute each time a new instrument is installed but this behavior comes from the model not the instrument particularly and forces the installation to come with a human remembering to set an attribute at least once because is memorized it could be a better solution to link this with the to move the instrument identification to within the file that describes it if somehow mention in the descriptor that the model with behave one way or another it will safe time and headache by the way this solution should come with another extension perhaps a third iteration as this is the second it is possible that someday we discover an instrument that returns an answer to certain writes but not all of them so it would be nice if the attribute builder can handle it
| 0
|
38,262
| 6,665,024,468
|
IssuesEvent
|
2017-10-02 22:34:42
|
F5Networks/k8s-bigip-ctlr
|
https://api.github.com/repos/F5Networks/k8s-bigip-ctlr
|
closed
|
Deployment file f5-k8s-bigip-ctlr_openshift-sdn.yaml issue (documentation)
|
documentation
|
[http://clouddocs.f5.com/containers/v1/openshift/kctlr-openshift-app-install.html](url) - the deployment file “f5-k8s-bigip-ctlr_openshift-sdn.yaml” (both the embedded text and the linked download) contain the following stanza:
spec:
containers:
serviceAccountName: bigip-ctlr
- name: k8s-bigip-ctlr
It should be:
spec:
serviceAccountName: bigip-ctlr
containers:
- name: k8s-bigip-ctlr
|
1.0
|
Deployment file f5-k8s-bigip-ctlr_openshift-sdn.yaml issue (documentation) - [http://clouddocs.f5.com/containers/v1/openshift/kctlr-openshift-app-install.html](url) - the deployment file “f5-k8s-bigip-ctlr_openshift-sdn.yaml” (both the embedded text and the linked download) contain the following stanza:
spec:
containers:
serviceAccountName: bigip-ctlr
- name: k8s-bigip-ctlr
It should be:
spec:
serviceAccountName: bigip-ctlr
containers:
- name: k8s-bigip-ctlr
|
non_test
|
deployment file bigip ctlr openshift sdn yaml issue documentation url the deployment file “ bigip ctlr openshift sdn yaml” both the embedded text and the linked download contain the following stanza spec containers serviceaccountname bigip ctlr name bigip ctlr it should be spec serviceaccountname bigip ctlr containers name bigip ctlr
| 0
|
16,648
| 3,548,510,234
|
IssuesEvent
|
2016-01-20 14:45:01
|
agda/agda
|
https://api.github.com/repos/agda/agda
|
closed
|
Testsuite destroys FFI package installations!?
|
bug compiler priority-high test-suite
|
I had the feeling that I needed to reinstall the FFI components of the standard library all the time recently. It seems that the testsuite destroys my FFI installation:
```
$ ghc-pkg check
There are problems in package agda-tests-ffi-0.0.1:
Warning: library-dirs: /home/abel/agda-master/exec-test-pkgs17795/lib/x86_64-linux-ghc-7.8.4/agda-tests-ffi-0.0.1 doesn't exist or isn't a directory
...
import-dirs: /home/abel/agda-master/exec-test-pkgs17795/lib/x86_64-linux-ghc-7.8.4/agda-tests-ffi-0.0.1 doesn't exist or isn't a directory
cannot find any of ["Common/FFI.hi","Common/FFI.p_hi","Common/FFI.dyn_hi"]
cannot find any of ["libHSagda-tests-ffi-0.0.1.a","libHSagda-tests-ffi-0.0.1.p_a","libHSagda-tests-ffi-0.0.1-ghc7.8.4.so","libHSagda-tests-ffi-0.0.1-ghc7.8.4.dylib","HSagda-tests-ffi-0.0.1-ghc7.8.4.dll"] on library path
There are problems in package agda-lib-ffi-0.11:
Warning: library-dirs: /home/abel/agda-master/exec-test-pkgs17796/lib/x86_64-linux-ghc-7.8.4/agda-lib-ffi-0.11 doesn't exist or isn't a directory
...
import-dirs: /home/abel/agda-master/exec-test-pkgs17796/lib/x86_64-linux-ghc-7.8.4/agda-lib-ffi-0.11 doesn't exist or isn't a directory
cannot find any of ["Data/FFI.hi","Data/FFI.p_hi","Data/FFI.dyn_hi"]
cannot find any of ["IO/FFI.hi","IO/FFI.p_hi","IO/FFI.dyn_hi"]
cannot find any of ["libHSagda-lib-ffi-0.11.a","libHSagda-lib-ffi-0.11.p_a","libHSagda-lib-ffi-0.11-ghc7.8.4.so","libHSagda-lib-ffi-0.11-ghc7.8.4
```
Somehow the compiler test (?) reinstalls the FFI packages into a temporary directory like `exec-test-pkgsXXXXX` which is then deleted.
That's bad.
|
1.0
|
Testsuite destroys FFI package installations!? - I had the feeling that I needed to reinstall the FFI components of the standard library all the time recently. It seems that the testsuite destroys my FFI installation:
```
$ ghc-pkg check
There are problems in package agda-tests-ffi-0.0.1:
Warning: library-dirs: /home/abel/agda-master/exec-test-pkgs17795/lib/x86_64-linux-ghc-7.8.4/agda-tests-ffi-0.0.1 doesn't exist or isn't a directory
...
import-dirs: /home/abel/agda-master/exec-test-pkgs17795/lib/x86_64-linux-ghc-7.8.4/agda-tests-ffi-0.0.1 doesn't exist or isn't a directory
cannot find any of ["Common/FFI.hi","Common/FFI.p_hi","Common/FFI.dyn_hi"]
cannot find any of ["libHSagda-tests-ffi-0.0.1.a","libHSagda-tests-ffi-0.0.1.p_a","libHSagda-tests-ffi-0.0.1-ghc7.8.4.so","libHSagda-tests-ffi-0.0.1-ghc7.8.4.dylib","HSagda-tests-ffi-0.0.1-ghc7.8.4.dll"] on library path
There are problems in package agda-lib-ffi-0.11:
Warning: library-dirs: /home/abel/agda-master/exec-test-pkgs17796/lib/x86_64-linux-ghc-7.8.4/agda-lib-ffi-0.11 doesn't exist or isn't a directory
...
import-dirs: /home/abel/agda-master/exec-test-pkgs17796/lib/x86_64-linux-ghc-7.8.4/agda-lib-ffi-0.11 doesn't exist or isn't a directory
cannot find any of ["Data/FFI.hi","Data/FFI.p_hi","Data/FFI.dyn_hi"]
cannot find any of ["IO/FFI.hi","IO/FFI.p_hi","IO/FFI.dyn_hi"]
cannot find any of ["libHSagda-lib-ffi-0.11.a","libHSagda-lib-ffi-0.11.p_a","libHSagda-lib-ffi-0.11-ghc7.8.4.so","libHSagda-lib-ffi-0.11-ghc7.8.4
```
Somehow the compiler test (?) reinstalls the FFI packages into a temporary directory like `exec-test-pkgsXXXXX` which is then deleted.
That's bad.
|
test
|
testsuite destroys ffi package installations i had the feeling that i needed to reinstall the ffi components of the standard library all the time recently it seems that the testsuite destroys my ffi installation ghc pkg check there are problems in package agda tests ffi warning library dirs home abel agda master exec test lib linux ghc agda tests ffi doesn t exist or isn t a directory import dirs home abel agda master exec test lib linux ghc agda tests ffi doesn t exist or isn t a directory cannot find any of cannot find any of on library path there are problems in package agda lib ffi warning library dirs home abel agda master exec test lib linux ghc agda lib ffi doesn t exist or isn t a directory import dirs home abel agda master exec test lib linux ghc agda lib ffi doesn t exist or isn t a directory cannot find any of cannot find any of cannot find any of libhsagda lib ffi a libhsagda lib ffi p a libhsagda lib ffi so libhsagda lib ffi somehow the compiler test reinstalls the ffi packages into a temporary directory like exec test pkgsxxxxx which is then deleted that s bad
| 1
|
309,429
| 9,474,577,669
|
IssuesEvent
|
2019-04-19 07:59:03
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Servers with version mis-match still allow attempt to join
|
Semi-High Priority
|
When you click on a server that is running a version different from the one you have installed, it shows the server version on the button that would usually say "Join" if the versions matched. Clicking it used to do nothing when there was a version mismatch. Now it attempts to log you in and then you get a connection error with no mention of a version error or mismatch. This has some players confused that there may be a problem on their client preventing them from getting into these servers, even though they see other people in them; because the client allowed them to attempt to connect, they do not realize the versions are not the same.
|
1.0
|
Servers with version mis-match still allow attempt to join - When you click on a server that is running a version different from the one you have installed, it shows the server version on the button that would usually say "Join" if the versions matched. Clicking it used to do nothing when there was a version mismatch. Now it attempts to log you in and then you get a connection error with no mention of a version error or mismatch. This has some players confused that there may be a problem on their client preventing them from getting into these servers, even though they see other people in them; because the client allowed them to attempt to connect, they do not realize the versions are not the same.
|
non_test
|
servers with version mis match still allow attempt to join when you click on a server that is running a version different from the one you have installed it does show the server version on the button that would usually say join if the versions matched this used to not do anything if you clicked on it and there was a version mismatch because the versions were not the same now it attempts to log you in and then you get a connection error with no mention of the a version error or mismatch this has some players confused that there maybe a problem on their client that they cannot get in these servers but see other people in them and since it allowed them to attempt to connect the fact that the versions are not the same is not realized
| 0
|
91,853
| 3,863,516,224
|
IssuesEvent
|
2016-04-08 09:45:32
|
iamxavier/elmah
|
https://api.github.com/repos/iamxavier/elmah
|
closed
|
SQL Server table ought to have a proper clustering key
|
auto-migrated Priority-Medium Type-Enhancement
|
```
I am stumped to see that the SQL Server table for ELMAH doesn't have a proper
clustering key...
Any particular reason for this? Having a clustering key on any SQL Server table
typically speeds up all operations - yes, all - including INSERT, UPDATE,
DELETE - so why *not* have it?
A "ElmahID BIGINT IDENTITY(1,1)" would be a good addition and a great candidate
for a clustering key - narrow, static, unique and ever increasing!
```
Original issue reported on code.google.com by `mscheu...@gmail.com` on 12 Jan 2012 at 6:26
|
1.0
|
SQL Server table ought to have a proper clustering key - ```
I am stumped to see that the SQL Server table for ELMAH doesn't have a proper
clustering key...
Any particular reason for this? Having a clustering key on any SQL Server table
typically speeds up all operations - yes, all - including INSERT, UPDATE,
DELETE - so why *not* have it?
A "ElmahID BIGINT IDENTITY(1,1)" would be a good addition and a great candidate
for a clustering key - narrow, static, unique and ever increasing!
```
Original issue reported on code.google.com by `mscheu...@gmail.com` on 12 Jan 2012 at 6:26
|
non_test
|
sql server table ought to have a proper clustering key i am stumped to see that the sql server table for elmah doesn t have a proper clustering key any particular reason for this having a clustering key on any sql server table typically speeds up all operations yes all including insert update delete so why not have it a elmahid bigint identity would be a good addition and a great candidate for a clustering key narrow static unique and ever increasing original issue reported on code google com by mscheu gmail com on jan at
| 0
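The suggestion in the record above amounts to adding a narrow, static, ever-increasing surrogate column and clustering on it. A hedged T-SQL sketch of that change — the table name `dbo.ELMAH_Error` follows ELMAH's stock DDL script, and the index name is illustrative, not taken from the issue:

```sql
-- Add the narrow, ever-increasing surrogate column suggested in the issue.
-- Table and index names are illustrative.
ALTER TABLE dbo.ELMAH_Error
    ADD ElmahID BIGINT IDENTITY(1,1) NOT NULL;

-- Cluster on it; this assumes the existing primary key is NONCLUSTERED,
-- as in ELMAH's stock script.
CREATE UNIQUE CLUSTERED INDEX IX_ELMAH_Error_ElmahID
    ON dbo.ELMAH_Error (ElmahID);
```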
|
193,802
| 14,662,022,310
|
IssuesEvent
|
2020-12-29 05:59:30
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
hjimmy/tg-origin: source/pkg/quota/image/imagestreamimport_evaluator_test.go; 3 LoC
|
fresh test tiny
|
Found a possible issue in [hjimmy/tg-origin](https://www.github.com/hjimmy/tg-origin) at [source/pkg/quota/image/imagestreamimport_evaluator_test.go](https://github.com/hjimmy/tg-origin/blob/07e45cb8ec631f158f6c7c29330a9cdeae9f4a29/source/pkg/quota/image/imagestreamimport_evaluator_test.go#L166-L168)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to `is` at line 167 may start a goroutine
[Click here to see the code in its original context.](https://github.com/hjimmy/tg-origin/blob/07e45cb8ec631f158f6c7c29330a9cdeae9f4a29/source/pkg/quota/image/imagestreamimport_evaluator_test.go#L166-L168)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, is := range tc.iss {
isInformer.Informer().GetIndexer().Add(&is)
}
```
</details>
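The snippet above takes the address of the range variable `is`, so if the callee retains the pointer (or hands it to a goroutine), every retained pointer can end up aliasing the same variable. The standard fix, before Go 1.22 made loop variables per-iteration, is to copy the range variable into a fresh variable each iteration before taking its address. A minimal self-contained sketch — `imageStream` and `addAll` are hypothetical names, not the repository's code:

```go
package main

import "fmt"

type imageStream struct{ name string }

// addAll mimics the flagged loop, but shadows the range variable with a
// fresh per-iteration copy before taking its address, so each stored
// pointer refers to a distinct element.
func addAll(streams []imageStream) []*imageStream {
	var out []*imageStream
	for _, is := range streams {
		is := is // per-iteration copy (required before Go 1.22)
		out = append(out, &is)
	}
	return out
}

func main() {
	ptrs := addAll([]imageStream{{"a"}, {"b"}, {"c"}})
	fmt.Println(ptrs[0].name, ptrs[1].name, ptrs[2].name) // a b c
}
```

Since Go 1.22 the copy is no longer needed (each iteration gets a fresh `is`), but the shadowing line is harmless and keeps the code correct under earlier toolchains.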
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
"(implements, 1)" -> {"(Build, 1)";}
"(newArchiveFromGuest, 1)" -> {}
"(restartNode, 2)" -> {"(RestartNode, 1)";}
"(NewTimeoutListener, 5)" -> {"(wrapTLS, 4)";}
"(newArchiveToGuest, 1)" -> {}
"(NewSession, 2)" -> {}
"(List, 5)" -> {"(WaitUntilFreshAndList, 2)";}
"(NewUnsecuredEtcd3TestClientServer, 1)" -> {"(NewClusterV3, 2)";}
"(tlsDialWithDialer, 4)" -> {}
"(attachScale, 4)" -> {"(scaleUp, 5)";}
"(Upgrade, 7)" -> {"(pull, 4)";}
"(create, 2)" -> {"(Snapshot, 3)";"(Create, 3)";}
"(add, 1)" -> {"(post, 1)";"(newWatchBroadcast, 3)";"(newWatchBroadcasts, 1)";"(Cleanup, 0)";"(performCopy, 2)";"(Put, 1)";"(Update, 1)";}
"(Get, 2)" -> {"(New, 2)";"(Open, 2)";"(Do, 2)";"(Accept, 2)";"(implements, 1)";"(Get, 3)";"(Stat, 2)";"(handle, 1)";"(Validate, 1)";"(Set, 2)";"(getURL, 1)";"(Put, 3)";}
"(NewDaemon, 4)" -> {}
"(Exec, 1)" -> {"(Run, 1)";}
"(createAggregatorServer, 3)" -> {}
"(ForEachPackage, 2)" -> {"(allPackages, 3)";}
"(PullImage, 7)" -> {"(pullImageWithReference, 6)";}
"(ServeHTTP, 2)" -> {"(RoundTrip, 1)";"(RemoveMember, 2)";"(serveUpgrade, 2)";"(Validate, 2)";"(Check, 1)";"(handleHttps, 2)";"(serveStatus, 2)";"(tryUpgrade, 2)";}
"(Register, 2)" -> {"(addEndpoint, 1)";}
"(Put, 1)" -> {"(Update, 3)";"(Write, 1)";"(Create, 1)";"(Put, 3)";"(Set, 2)";}
"(CreateVolume, 1)" -> {}
"(Pull, 8)" -> {"(pull, 4)";}
"(Apply, 2)" -> {"(Diff, 3)";}
"(RestartNode, 1)" -> {}
"(Interpret, 5)" -> {"(call, 5)";}
"(Remove, 2)" -> {"(Do, 2)";"(update, 3)";}
"(Watch, 2)" -> {}
"(DetachDisk, 3)" -> {"(Accept, 2)";"(Run, 3)";}
"(Check, 1)" -> {}
"(prepareStatement, 2)" -> {"(Prepare, 1)";}
"(callRemoteBalancer, 2)" -> {}
"(NewServerTLS, 2)" -> {"(newServer, 2)";"(NewGroup, 3)";}
"(Dial, 3)" -> {"(NewClientConn, 3)";}
"(insert, 1)" -> {"(Update, 1)";}
"(nodeShouldRunDaemonPod, 2)" -> {"(simulate, 3)";}
"(DestroyBricks, 3)" -> {"(createDestroyConcurrently, 4)";}
"(removeMember, 2)" -> {"(Terminate, 1)";}
"(NewClient, 1)" -> {"(newClient, 1)";}
"(Ping, 1)" -> {"(Get, 3)";}
"(replaceBrickInVolume, 4)" -> {"(CreateBricks, 3)";"(DestroyBricks, 3)";"(Destroy, 2)";}
"(Terminate, 1)" -> {}
"(update, 3)" -> {"(update, 1)";}
"(add, 2)" -> {"(add, 1)";"(Add, 2)";"(New, 1)";"(Get, 1)";"(Get, 0)";"(Put, 1)";"(replace, 2)";"(copy, 2)";}
"(Write, 1)" -> {"(wait, 1)";"(Capture, 2)";}
"(Create, 3)" -> {"(Destroy, 2)";"(Validate, 2)";"(CreateBricks, 3)";"(DestroyBricks, 3)";}
"(checkBricksCanBeDestroyed, 3)" -> {}
"(Watch, 1)" -> {"(Put, 3)";"(Watch, 2)";}
"(Format, 1)" -> {"(Set, 2)";"(Do, 1)";}
"(dial, 3)" -> {"(tlsDial, 3)";}
"(run, 4)" -> {"(NewDynamicResourceConsumer, 12)";"(Interpret, 5)";}
"(newClient, 1)" -> {"(dial, 2)";}
"(newLocalFakeStorage, 2)" -> {"(NewServer, 1)";}
"(parse, 2)" -> {"(Init, 1)";}
"(Open, 2)" -> {"(newArchiveFromGuest, 1)";"(newArchiveToGuest, 1)";"(Dispatch, 3)";"(ServeHTTP, 2)";}
"(newAsyncIDTokenVerifier, 3)" -> {}
"(Get, 5)" -> {"(WaitUntilFreshAndGet, 3)";}
"(CreateBricks, 3)" -> {"(createDestroyConcurrently, 4)";}
"(Put, 3)" -> {"(Commit, 2)";}
"(delete, 2)" -> {"(Delete, 3)";"(Detach, 2)";"(Encode, 1)";"(runDelete, 6)";"(Inspect, 1)";"(Remove, 2)";"(Delete, 1)";}
"(newWatchBroadcast, 3)" -> {"(update, 1)";}
"(Build, 0)" -> {"(Build, 1)";}
"(WaitUntilFreshAndGet, 3)" -> {"(waitUntilFreshAndBlock, 2)";}
"(getConn, 1)" -> {}
"(hash, 1)" -> {"(Write, 1)";}
"(Validate, 1)" -> {"(Validate, 2)";}
"(runDelete, 6)" -> {"(Delete, 2)";}
"(Create, 1)" -> {"(Create, 2)";"(create, 1)";"(Apply, 1)";"(Update, 1)";}
"(Open, 1)" -> {"(Open, 2)";"(New, 1)";"(Exec, 1)";"(prepareStatement, 2)";}
"(Get, 1)" -> {"(Get, 2)";"(New, 2)";"(Get, 3)";"(Set, 2)";"(getURL, 1)";"(Put, 1)";"(Do, 2)";}
"(Search, 2)" -> {"(Get, 2)";"(Parse, 1)";"(Set, 2)";"(ListImages, 1)";"(Create, 1)";"(Execute, 2)";"(List, 1)";}
"(Dispatch, 3)" -> {}
"(delete, 1)" -> {"(Delete, 3)";"(Delete, 1)";"(Update, 1)";"(Remove, 1)";}
"(Commit, 2)" -> {"(Register, 3)";}
"(List, 2)" -> {"(Accept, 2)";}
"(newLeaseUpdater, 3)" -> {}
"(startKubelet, 5)" -> {}
"(wrapTLS, 4)" -> {"(newTLSListener, 2)";}
"(Do, 3)" -> {}
"(Run, 1)" -> {"(Register, 2)";"(Write, 1)";"(create, 2)";"(StreamContainerIO, 3)";"(run, 1)";"(Update, 1)";"(startControllers, 5)";"(CreateControllerContext, 4)";"(NewSSHTunnelList, 4)";"(New, 5)";"(BuildHandlerChain, 5)";"(Start, 1)";"(withAggregator, 4)";"(Run, 2)";}
"(Push, 2)" -> {"(Set, 2)";"(doRequest, 2)";}
"(UpdateTransport, 4)" -> {"(updateTransport, 5)";}
"(Diff, 3)" -> {"(Diff, 2)";}
"(Contains, 1)" -> {"(index, 1)";}
"(runFrame, 1)" -> {"(visitInstr, 2)";}
"(Add, 1)" -> {"(Open, 1)";"(Time, 1)";"(watch, 3)";"(AppendChild, 1)";"(get, 1)";"(delete, 2)";"(Add, 2)";"(Parse, 1)";"(Search, 2)";"(Stat, 1)";"(delete, 1)";"(copy, 2)";"(Push, 2)";"(Insert, 1)";"(insert, 1)";"(New, 1)";"(add, 1)";"(Has, 1)";"(add, 2)";"(hash, 1)";}
"(NewSSHTunnelList, 4)" -> {}
"(start, 1)" -> {"(CopyConsole, 7)";"(copyPipes, 7)";"(NewDaemon, 4)";"(shutdownDaemon, 1)";"(handleControlSocketChange, 2)";"(Init, 4)";}
"(StartUpdater, 2)" -> {"(newLeaseUpdater, 3)";}
"(Snapshot, 2)" -> {}
"(Delete, 2)" -> {"(Accept, 2)";"(stop, 1)";}
"(ListImages, 2)" -> {"(List, 2)";}
"(Run, 3)" -> {"(run, 3)";}
"(getURL, 1)" -> {"(RoundTrip, 1)";}
"(updateTransport, 5)" -> {}
"(newClientTransport, 6)" -> {}
"(UpdatePod, 1)" -> {}
"(Execute, 4)" -> {"(executeExecNewPod, 4)";}
"(addNode, 1)" -> {"(nodeShouldRunDaemonPod, 2)";}
"(commitContainer, 3)" -> {"(Commit, 2)";}
"(copy, 2)" -> {"(Copy, 2)";"(Open, 1)";"(Create, 1)";"(get, 1)";}
"(Start, 2)" -> {"(initializeAndStartHeartbeat, 4)";"(callRemoteBalancer, 2)";"(NewServer, 1)";}
"(AppendChild, 1)" -> {"(Write, 1)";}
"(FromURL, 2)" -> {"(Dial, 3)";}
"(Get, 0)" -> {"(Get, 2)";"(Get, 3)";"(Do, 1)";"(Get, 5)";}
"(addReadinessCheckRoute, 3)" -> {"(Write, 1)";}
"(Update, 1)" -> {"(Build, 0)";"(update, 1)";}
"(Snapshot, 3)" -> {"(Snapshot, 2)";}
"(NewListener, 2)" -> {"(NewTimeoutListener, 5)";}
"(newTLSListener, 2)" -> {}
"(wait, 1)" -> {}
"(List, 4)" -> {"(Validate, 1)";}
"(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
"(RemoveMember, 2)" -> {"(removeMember, 2)";}
"(loadPlugins, 1)" -> {"(Init, 1)";}
"(NewClientConn, 3)" -> {"(clientHandshake, 2)";}
"(NewClusterV3, 2)" -> {"(Launch, 1)";}
"(Parse, 1)" -> {"(handle, 1)";"(Build, 0)";"(Set, 2)";"(parse, 2)";}
"(Execute, 2)" -> {"(Validate, 1)";"(Run, 3)";}
"(addEndpoint, 1)" -> {}
"(initializeAndStartHeartbeat, 4)" -> {}
"(DetailedRoundTrip, 1)" -> {"(getConn, 1)";}
"(watch, 3)" -> {"(Run, 1)";"(Watch, 1)";"(New, 2)";}
"(parseFiles, 6)" -> {}
"(newAuthenticator, 2)" -> {"(SetTransportDefaults, 1)";}
"(NewGroup, 3)" -> {"(newServer, 2)";}
"(startControllers, 5)" -> {}
"(recvtty, 2)" -> {"(ConsoleFromFile, 1)";}
"(newServer, 2)" -> {}
"(Remove, 3)" -> {"(replaceBrickInVolume, 4)";}
"(prune, 1)" -> {"(update, 1)";}
"(TarWithOptions, 2)" -> {}
"(addPod, 1)" -> {"(AddPod, 1)";}
"(WaitUntilFreshAndList, 2)" -> {"(waitUntilFreshAndBlock, 2)";}
"(Copy, 4)" -> {"(Execute, 4)";}
"(schedule, 1)" -> {"(Schedule, 2)";}
"(updatePod, 2)" -> {"(AddPod, 1)";"(addPod, 1)";}
"(cleanup, 1)" -> {"(Update, 1)";}
"(Start, 1)" -> {"(start, 2)";"(NewServer, 1)";"(Dial, 3)";"(Read, 2)";"(start, 1)";"(copyPipes, 7)";"(Start, 2)";}
"(Stat, 1)" -> {"(Stat, 2)";"(Get, 1)";"(New, 1)";}
"(tryUpgrade, 2)" -> {}
"(parsePackageFiles, 2)" -> {"(parseFiles, 6)";}
"(ConsoleFromFile, 1)" -> {"(newMaster, 1)";}
"(openBackend, 1)" -> {}
"(Do, 1)" -> {"(Prepare, 3)";}
"(Init, 1)" -> {"(Init, 4)";}
"(startZfsWatcher, 1)" -> {}
"(Copy, 2)" -> {"(ServeHTTP, 2)";"(Write, 1)";}
"(newOOMCollector, 1)" -> {}
"(handle, 1)" -> {"(Delete, 2)";}
"(doRequest, 2)" -> {"(Do, 3)";}
"(snapshotPath, 4)" -> {}
"(NewKubernetesSource, 1)" -> {}
"(Cleanup, 0)" -> {"(Delete, 2)";}
"(Inspect, 1)" -> {"(Get, 0)";}
"(PrioritizeNodes, 6)" -> {}
"(RunShort, 2)" -> {"(mount, 2)";}
"(call, 5)" -> {"(callSSA, 6)";}
"(Add, 2)" -> {"(Validate, 1)";"(Remove, 1)";"(index, 1)";"(Do, 2)";"(add, 1)";}
"(Install, 1)" -> {"(Delete, 2)";}
"(CreateAndInitKubelet, 31)" -> {"(NewMainKubelet, 31)";}
"(Accept, 2)" -> {}
"(serveUpgrade, 2)" -> {}
"(Prepare, 3)" -> {"(CreateVolume, 1)";}
"(NewServer, 2)" -> {"(NewServer, 1)";}
"(Destroy, 2)" -> {"(checkBricksCanBeDestroyed, 3)";}
"(StreamContainerIO, 3)" -> {}
"(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
"(startNode, 3)" -> {"(StartNode, 2)";}
"(Validate, 2)" -> {}
"(New, 9)" -> {"(CopyConsole, 7)";}
"(RunKubelet, 4)" -> {"(startKubelet, 5)";"(CreateAndInitKubelet, 31)";}
"(newServerTransport, 4)" -> {}
"(UpdateVol, 1)" -> {"(Put, 3)";}
"(NewTCPSocket, 2)" -> {"(NewListener, 2)";}
"(copyPipes, 7)" -> {}
"(serveStatus, 2)" -> {}
"(initOAuthAuthorizationServerMetadataRoute, 2)" -> {"(Write, 1)";}
"(stop, 1)" -> {}
"(NewServer, 1)" -> {"(restartNode, 2)";"(restartAsStandaloneNode, 2)";"(openBackend, 1)";"(NewServerTLS, 2)";"(startNode, 3)";}
"(setupProcessPipes, 3)" -> {}
"(Dial, 2)" -> {"(dial, 3)";"(Dial, 3)";}
"(tlsDial, 3)" -> {"(tlsDialWithDialer, 4)";}
"(post, 1)" -> {"(RoundTrip, 1)";}
"(newWatchBroadcasts, 1)" -> {}
"(Register, 1)" -> {"(Register, 2)";"(Validate, 1)";"(Put, 3)";}
"(ListImages, 1)" -> {"(ListImages, 2)";}
"(load, 1)" -> {"(parsePackageFiles, 2)";}
"(Init, 4)" -> {"(ListenPipe, 2)";"(NewTCPSocket, 2)";}
"(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
"(allPackages, 3)" -> {}
"(startThinPoolWatcher, 1)" -> {}
"(shutdownDaemon, 1)" -> {}
"(handleControlSocketChange, 2)" -> {}
"(executeExecNewPod, 4)" -> {}
"(Do, 2)" -> {"(RoundTrip, 1)";"(Delete, 2)";"(Put, 3)";}
"(Verifier, 1)" -> {"(newAsyncIDTokenVerifier, 3)";}
"(run, 1)" -> {"(setupIO, 6)";"(forward, 3)";"(schedule, 1)";"(addNode, 1)";"(commitContainer, 3)";}
"(run, 3)" -> {"(RunKubelet, 4)";"(UpdateTransport, 4)";}
"(update, 1)" -> {"(UpdatePod, 2)";}
"(Put, 2)" -> {"(Save, 3)";}
"(pullImageWithReference, 6)" -> {}
"(Upload, 3)" -> {"(update, 1)";}
"(ListenPipe, 2)" -> {}
"(performCopy, 2)" -> {"(exportImage, 3)";}
"(Delete, 3)" -> {"(Get, 3)";"(Remove, 3)";"(Do, 2)";"(Validate, 1)";"(run, 1)";"(Set, 2)";"(update, 3)";"(Delete, 2)";"(Get, 2)";"(Get, 5)";"(Update, 1)";}
"(Has, 1)" -> {"(Get, 1)";"(Stat, 1)";"(Contains, 1)";}
"(New, 3)" -> {"(newLocalFakeStorage, 2)";}
"(Read, 2)" -> {"(Copy, 4)";}
"(CreateControllerContext, 4)" -> {}
"(Execute, 3)" -> {}
"(Time, 1)" -> {"(Format, 1)";}
"(index, 1)" -> {"(update, 1)";}
"(exportImage, 3)" -> {"(Commit, 1)";}
"(Diff, 2)" -> {"(TarWithOptions, 2)";}
"(Delete, 1)" -> {"(Do, 2)";"(Delete, 2)";"(Set, 2)";"(Get, 2)";"(Delete, 3)";}
"(CopyConsole, 7)" -> {}
"(NewDynamicResourceConsumer, 12)" -> {"(newResourceConsumer, 18)";}
"(setUp, 1)" -> {"(NewUnsecuredEtcd3TestClientServer, 1)";}
"(Detach, 2)" -> {"(Delete, 2)";"(DetachDisk, 3)";"(UpdateVol, 1)";}
"(Encode, 1)" -> {"(Write, 1)";"(Put, 1)";}
"(replace, 2)" -> {"(Execute, 2)";"(index, 1)";}
"(Apply, 3)" -> {"(Apply, 2)";}
"(Launch, 1)" -> {}
"(Stat, 2)" -> {"(Do, 3)";"(Get, 3)";}
"(start, 2)" -> {}
"(Create, 2)" -> {"(Apply, 3)";"(initInformers, 1)";"(Check, 1)";"(CreateVolume, 1)";"(New, 9)";"(Create, 3)";}
"(Register, 3)" -> {"(startThinPoolWatcher, 1)";"(startZfsWatcher, 1)";}
"(List, 1)" -> {"(Set, 2)";"(List, 5)";"(List, 2)";"(Do, 2)";"(Validate, 1)";}
"(Set, 2)" -> {"(Do, 3)";"(Put, 3)";}
"(dial, 2)" -> {"(NewClientConn, 3)";"(NewServerConn, 2)";}
"(forward, 3)" -> {}
"(Insert, 1)" -> {"(Delete, 1)";}
"(createDestroyConcurrently, 4)" -> {}
"(addConnIfNeeded, 3)" -> {}
"(Commit, 1)" -> {"(Register, 3)";}
"(Capture, 2)" -> {}
"(Update, 3)" -> {"(Accept, 2)";}
"(handleHttps, 2)" -> {}
"(Schedule, 2)" -> {"(PrioritizeNodes, 6)";}
"(create, 1)" -> {"(Create, 3)";}
"(NewServerConn, 2)" -> {"(serverHandshake, 1)";}
"(AddPod, 1)" -> {"(UpdatePod, 1)";}
"(BuildHandlerChain, 5)" -> {}
"(Get, 3)" -> {"(Get, 5)";"(Do, 3)";"(RoundTrip, 1)";}
"(mount, 2)" -> {"(attachScale, 4)";}
"(New, 1)" -> {"(newAuthenticator, 2)";"(get, 1)";"(Run, 2)";"(newOOMCollector, 1)";"(newClient, 1)";"(addReadinessCheckRoute, 3)";"(List, 4)";"(Get, 0)";"(Read, 2)";"(NewBroker, 3)";"(New, 5)";"(Update, 1)";"(Parse, 1)";"(Get, 1)";"(Register, 1)";"(Remove, 1)";"(NewClient, 1)";"(Install, 1)";"(Start, 1)";"(Run, 1)";"(cleanup, 1)";"(New, 2)";"(initOAuthAuthorizationServerMetadataRoute, 2)";"(New, 3)";"(Verifier, 1)";"(Ping, 1)";"(Handle, 2)";}
"(configureKubeConfigForClientCertRotation, 2)" -> {"(RefreshCertificateAfterExpiry, 4)";}
"(RefreshCertificateAfterExpiry, 4)" -> {}
"(New, 5)" -> {"(configureKubeConfigForClientCertRotation, 2)";}
"(Handle, 2)" -> {"(implements, 1)";}
"(pull, 4)" -> {}
"(restartAsStandaloneNode, 2)" -> {"(RestartNode, 1)";}
"(Apply, 1)" -> {"(Apply, 3)";}
"(pullImage, 1)" -> {"(PullImage, 7)";}
"(clientHandshake, 2)" -> {"(newClientTransport, 6)";}
"(simulate, 3)" -> {"(AddPod, 1)";}
"(scaleUp, 5)" -> {"(run, 4)";}
"(Run, 2)" -> {"(Write, 1)";"(Create, 2)";"(implements, 1)";"(run, 1)";"(Apply, 3)";"(create, 1)";"(List, 2)";"(Prepare, 1)";"(Create, 3)";"(Build, 0)";"(Start, 2)";"(Save, 3)";"(StartUpdater, 2)";"(prune, 1)";"(FromURL, 2)";"(RunShort, 2)";"(Start, 1)";"(Execute, 3)";"(Register, 2)";}
"(setupIO, 6)" -> {"(setupProcessPipes, 3)";"(recvtty, 2)";}
"(Prepare, 1)" -> {"(Upgrade, 7)";"(Pull, 8)";"(pullImage, 1)";}
"(withAggregator, 4)" -> {"(createAggregatorServer, 3)";}
"(awaitOpenSlotForRequest, 1)" -> {}
"(New, 2)" -> {"(NewSession, 2)";"(NewServer, 2)";"(Init, 1)";"(Put, 2)";"(loadPlugins, 1)";"(Update, 1)";"(Start, 1)";"(Register, 1)";"(load, 1)";"(Dial, 2)";"(Apply, 1)";}
"(initInformers, 1)" -> {}
"(Remove, 1)" -> {"(Delete, 2)";"(index, 1)";}
"(RoundTrip, 1)" -> {"(DetailedRoundTrip, 1)";"(awaitOpenSlotForRequest, 1)";}
"(NewBroker, 3)" -> {}
"(Build, 1)" -> {"(ForEachPackage, 2)";"(NewKubernetesSource, 1)";}
"(get, 1)" -> {"(Get, 2)";"(Do, 1)";"(load, 1)";"(SetTransportDefaults, 1)";"(Read, 2)";"(Upload, 3)";}
"(waitUntilFreshAndBlock, 2)" -> {}
"(newResourceConsumer, 18)" -> {}
"(newMaster, 1)" -> {"(setUp, 1)";}
"(Save, 3)" -> {"(snapshotPath, 4)";}
"(StartNode, 2)" -> {}
"(serverHandshake, 1)" -> {"(newServerTransport, 4)";}
"(UpdatePod, 2)" -> {"(updatePod, 2)";}
"(NewMainKubelet, 31)" -> {}
"(callSSA, 6)" -> {"(runFrame, 1)";}
"(visitInstr, 2)" -> {}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 07e45cb8ec631f158f6c7c29330a9cdeae9f4a29
|
1.0
|
hjimmy/tg-origin: source/pkg/quota/image/imagestreamimport_evaluator_test.go; 3 LoC -
Found a possible issue in [hjimmy/tg-origin](https://www.github.com/hjimmy/tg-origin) at [source/pkg/quota/image/imagestreamimport_evaluator_test.go](https://github.com/hjimmy/tg-origin/blob/07e45cb8ec631f158f6c7c29330a9cdeae9f4a29/source/pkg/quota/image/imagestreamimport_evaluator_test.go#L166-L168)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to `is` at line 167 may start a goroutine
[Click here to see the code in its original context.](https://github.com/hjimmy/tg-origin/blob/07e45cb8ec631f158f6c7c29330a9cdeae9f4a29/source/pkg/quota/image/imagestreamimport_evaluator_test.go#L166-L168)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, is := range tc.iss {
isInformer.Informer().GetIndexer().Add(&is)
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
"(implements, 1)" -> {"(Build, 1)";}
"(newArchiveFromGuest, 1)" -> {}
"(restartNode, 2)" -> {"(RestartNode, 1)";}
"(NewTimeoutListener, 5)" -> {"(wrapTLS, 4)";}
"(newArchiveToGuest, 1)" -> {}
"(NewSession, 2)" -> {}
"(List, 5)" -> {"(WaitUntilFreshAndList, 2)";}
"(NewUnsecuredEtcd3TestClientServer, 1)" -> {"(NewClusterV3, 2)";}
"(tlsDialWithDialer, 4)" -> {}
"(attachScale, 4)" -> {"(scaleUp, 5)";}
"(Upgrade, 7)" -> {"(pull, 4)";}
"(create, 2)" -> {"(Snapshot, 3)";"(Create, 3)";}
"(add, 1)" -> {"(post, 1)";"(newWatchBroadcast, 3)";"(newWatchBroadcasts, 1)";"(Cleanup, 0)";"(performCopy, 2)";"(Put, 1)";"(Update, 1)";}
"(Get, 2)" -> {"(New, 2)";"(Open, 2)";"(Do, 2)";"(Accept, 2)";"(implements, 1)";"(Get, 3)";"(Stat, 2)";"(handle, 1)";"(Validate, 1)";"(Set, 2)";"(getURL, 1)";"(Put, 3)";}
"(NewDaemon, 4)" -> {}
"(Exec, 1)" -> {"(Run, 1)";}
"(createAggregatorServer, 3)" -> {}
"(ForEachPackage, 2)" -> {"(allPackages, 3)";}
"(PullImage, 7)" -> {"(pullImageWithReference, 6)";}
"(ServeHTTP, 2)" -> {"(RoundTrip, 1)";"(RemoveMember, 2)";"(serveUpgrade, 2)";"(Validate, 2)";"(Check, 1)";"(handleHttps, 2)";"(serveStatus, 2)";"(tryUpgrade, 2)";}
"(Register, 2)" -> {"(addEndpoint, 1)";}
"(Put, 1)" -> {"(Update, 3)";"(Write, 1)";"(Create, 1)";"(Put, 3)";"(Set, 2)";}
"(CreateVolume, 1)" -> {}
"(Pull, 8)" -> {"(pull, 4)";}
"(Apply, 2)" -> {"(Diff, 3)";}
"(RestartNode, 1)" -> {}
"(Interpret, 5)" -> {"(call, 5)";}
"(Remove, 2)" -> {"(Do, 2)";"(update, 3)";}
"(Watch, 2)" -> {}
"(DetachDisk, 3)" -> {"(Accept, 2)";"(Run, 3)";}
"(Check, 1)" -> {}
"(prepareStatement, 2)" -> {"(Prepare, 1)";}
"(callRemoteBalancer, 2)" -> {}
"(NewServerTLS, 2)" -> {"(newServer, 2)";"(NewGroup, 3)";}
"(Dial, 3)" -> {"(NewClientConn, 3)";}
"(insert, 1)" -> {"(Update, 1)";}
"(nodeShouldRunDaemonPod, 2)" -> {"(simulate, 3)";}
"(DestroyBricks, 3)" -> {"(createDestroyConcurrently, 4)";}
"(removeMember, 2)" -> {"(Terminate, 1)";}
"(NewClient, 1)" -> {"(newClient, 1)";}
"(Ping, 1)" -> {"(Get, 3)";}
"(replaceBrickInVolume, 4)" -> {"(CreateBricks, 3)";"(DestroyBricks, 3)";"(Destroy, 2)";}
"(Terminate, 1)" -> {}
"(update, 3)" -> {"(update, 1)";}
"(add, 2)" -> {"(add, 1)";"(Add, 2)";"(New, 1)";"(Get, 1)";"(Get, 0)";"(Put, 1)";"(replace, 2)";"(copy, 2)";}
"(Write, 1)" -> {"(wait, 1)";"(Capture, 2)";}
"(Create, 3)" -> {"(Destroy, 2)";"(Validate, 2)";"(CreateBricks, 3)";"(DestroyBricks, 3)";}
"(checkBricksCanBeDestroyed, 3)" -> {}
"(Watch, 1)" -> {"(Put, 3)";"(Watch, 2)";}
"(Format, 1)" -> {"(Set, 2)";"(Do, 1)";}
"(dial, 3)" -> {"(tlsDial, 3)";}
"(run, 4)" -> {"(NewDynamicResourceConsumer, 12)";"(Interpret, 5)";}
"(newClient, 1)" -> {"(dial, 2)";}
"(newLocalFakeStorage, 2)" -> {"(NewServer, 1)";}
"(parse, 2)" -> {"(Init, 1)";}
"(Open, 2)" -> {"(newArchiveFromGuest, 1)";"(newArchiveToGuest, 1)";"(Dispatch, 3)";"(ServeHTTP, 2)";}
"(newAsyncIDTokenVerifier, 3)" -> {}
"(Get, 5)" -> {"(WaitUntilFreshAndGet, 3)";}
"(CreateBricks, 3)" -> {"(createDestroyConcurrently, 4)";}
"(Put, 3)" -> {"(Commit, 2)";}
"(delete, 2)" -> {"(Delete, 3)";"(Detach, 2)";"(Encode, 1)";"(runDelete, 6)";"(Inspect, 1)";"(Remove, 2)";"(Delete, 1)";}
"(newWatchBroadcast, 3)" -> {"(update, 1)";}
"(Build, 0)" -> {"(Build, 1)";}
"(WaitUntilFreshAndGet, 3)" -> {"(waitUntilFreshAndBlock, 2)";}
"(getConn, 1)" -> {}
"(hash, 1)" -> {"(Write, 1)";}
"(Validate, 1)" -> {"(Validate, 2)";}
"(runDelete, 6)" -> {"(Delete, 2)";}
"(Create, 1)" -> {"(Create, 2)";"(create, 1)";"(Apply, 1)";"(Update, 1)";}
"(Open, 1)" -> {"(Open, 2)";"(New, 1)";"(Exec, 1)";"(prepareStatement, 2)";}
"(Get, 1)" -> {"(Get, 2)";"(New, 2)";"(Get, 3)";"(Set, 2)";"(getURL, 1)";"(Put, 1)";"(Do, 2)";}
"(Search, 2)" -> {"(Get, 2)";"(Parse, 1)";"(Set, 2)";"(ListImages, 1)";"(Create, 1)";"(Execute, 2)";"(List, 1)";}
"(Dispatch, 3)" -> {}
"(delete, 1)" -> {"(Delete, 3)";"(Delete, 1)";"(Update, 1)";"(Remove, 1)";}
"(Commit, 2)" -> {"(Register, 3)";}
"(List, 2)" -> {"(Accept, 2)";}
"(newLeaseUpdater, 3)" -> {}
"(startKubelet, 5)" -> {}
"(wrapTLS, 4)" -> {"(newTLSListener, 2)";}
"(Do, 3)" -> {}
"(Run, 1)" -> {"(Register, 2)";"(Write, 1)";"(create, 2)";"(StreamContainerIO, 3)";"(run, 1)";"(Update, 1)";"(startControllers, 5)";"(CreateControllerContext, 4)";"(NewSSHTunnelList, 4)";"(New, 5)";"(BuildHandlerChain, 5)";"(Start, 1)";"(withAggregator, 4)";"(Run, 2)";}
"(Push, 2)" -> {"(Set, 2)";"(doRequest, 2)";}
"(UpdateTransport, 4)" -> {"(updateTransport, 5)";}
"(Diff, 3)" -> {"(Diff, 2)";}
"(Contains, 1)" -> {"(index, 1)";}
"(runFrame, 1)" -> {"(visitInstr, 2)";}
"(Add, 1)" -> {"(Open, 1)";"(Time, 1)";"(watch, 3)";"(AppendChild, 1)";"(get, 1)";"(delete, 2)";"(Add, 2)";"(Parse, 1)";"(Search, 2)";"(Stat, 1)";"(delete, 1)";"(copy, 2)";"(Push, 2)";"(Insert, 1)";"(insert, 1)";"(New, 1)";"(add, 1)";"(Has, 1)";"(add, 2)";"(hash, 1)";}
"(NewSSHTunnelList, 4)" -> {}
"(start, 1)" -> {"(CopyConsole, 7)";"(copyPipes, 7)";"(NewDaemon, 4)";"(shutdownDaemon, 1)";"(handleControlSocketChange, 2)";"(Init, 4)";}
"(StartUpdater, 2)" -> {"(newLeaseUpdater, 3)";}
"(Snapshot, 2)" -> {}
"(Delete, 2)" -> {"(Accept, 2)";"(stop, 1)";}
"(ListImages, 2)" -> {"(List, 2)";}
"(Run, 3)" -> {"(run, 3)";}
"(getURL, 1)" -> {"(RoundTrip, 1)";}
"(updateTransport, 5)" -> {}
"(newClientTransport, 6)" -> {}
"(UpdatePod, 1)" -> {}
"(Execute, 4)" -> {"(executeExecNewPod, 4)";}
"(addNode, 1)" -> {"(nodeShouldRunDaemonPod, 2)";}
"(commitContainer, 3)" -> {"(Commit, 2)";}
"(copy, 2)" -> {"(Copy, 2)";"(Open, 1)";"(Create, 1)";"(get, 1)";}
"(Start, 2)" -> {"(initializeAndStartHeartbeat, 4)";"(callRemoteBalancer, 2)";"(NewServer, 1)";}
"(AppendChild, 1)" -> {"(Write, 1)";}
"(FromURL, 2)" -> {"(Dial, 3)";}
"(Get, 0)" -> {"(Get, 2)";"(Get, 3)";"(Do, 1)";"(Get, 5)";}
"(addReadinessCheckRoute, 3)" -> {"(Write, 1)";}
"(Update, 1)" -> {"(Build, 0)";"(update, 1)";}
"(Snapshot, 3)" -> {"(Snapshot, 2)";}
"(NewListener, 2)" -> {"(NewTimeoutListener, 5)";}
"(newTLSListener, 2)" -> {}
"(wait, 1)" -> {}
"(List, 4)" -> {"(Validate, 1)";}
"(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
"(RemoveMember, 2)" -> {"(removeMember, 2)";}
"(loadPlugins, 1)" -> {"(Init, 1)";}
"(NewClientConn, 3)" -> {"(clientHandshake, 2)";}
"(NewClusterV3, 2)" -> {"(Launch, 1)";}
"(Parse, 1)" -> {"(handle, 1)";"(Build, 0)";"(Set, 2)";"(parse, 2)";}
"(Execute, 2)" -> {"(Validate, 1)";"(Run, 3)";}
"(addEndpoint, 1)" -> {}
"(initializeAndStartHeartbeat, 4)" -> {}
"(DetailedRoundTrip, 1)" -> {"(getConn, 1)";}
"(watch, 3)" -> {"(Run, 1)";"(Watch, 1)";"(New, 2)";}
"(parseFiles, 6)" -> {}
"(newAuthenticator, 2)" -> {"(SetTransportDefaults, 1)";}
"(NewGroup, 3)" -> {"(newServer, 2)";}
"(startControllers, 5)" -> {}
"(recvtty, 2)" -> {"(ConsoleFromFile, 1)";}
"(newServer, 2)" -> {}
"(Remove, 3)" -> {"(replaceBrickInVolume, 4)";}
"(prune, 1)" -> {"(update, 1)";}
"(TarWithOptions, 2)" -> {}
"(addPod, 1)" -> {"(AddPod, 1)";}
"(WaitUntilFreshAndList, 2)" -> {"(waitUntilFreshAndBlock, 2)";}
"(Copy, 4)" -> {"(Execute, 4)";}
"(schedule, 1)" -> {"(Schedule, 2)";}
"(updatePod, 2)" -> {"(AddPod, 1)";"(addPod, 1)";}
"(cleanup, 1)" -> {"(Update, 1)";}
"(Start, 1)" -> {"(start, 2)";"(NewServer, 1)";"(Dial, 3)";"(Read, 2)";"(start, 1)";"(copyPipes, 7)";"(Start, 2)";}
"(Stat, 1)" -> {"(Stat, 2)";"(Get, 1)";"(New, 1)";}
"(tryUpgrade, 2)" -> {}
"(parsePackageFiles, 2)" -> {"(parseFiles, 6)";}
"(ConsoleFromFile, 1)" -> {"(newMaster, 1)";}
"(openBackend, 1)" -> {}
"(Do, 1)" -> {"(Prepare, 3)";}
"(Init, 1)" -> {"(Init, 4)";}
"(startZfsWatcher, 1)" -> {}
"(Copy, 2)" -> {"(ServeHTTP, 2)";"(Write, 1)";}
"(newOOMCollector, 1)" -> {}
"(handle, 1)" -> {"(Delete, 2)";}
"(doRequest, 2)" -> {"(Do, 3)";}
"(snapshotPath, 4)" -> {}
"(NewKubernetesSource, 1)" -> {}
"(Cleanup, 0)" -> {"(Delete, 2)";}
"(Inspect, 1)" -> {"(Get, 0)";}
"(PrioritizeNodes, 6)" -> {}
"(RunShort, 2)" -> {"(mount, 2)";}
"(call, 5)" -> {"(callSSA, 6)";}
"(Add, 2)" -> {"(Validate, 1)";"(Remove, 1)";"(index, 1)";"(Do, 2)";"(add, 1)";}
"(Install, 1)" -> {"(Delete, 2)";}
"(CreateAndInitKubelet, 31)" -> {"(NewMainKubelet, 31)";}
"(Accept, 2)" -> {}
"(serveUpgrade, 2)" -> {}
"(Prepare, 3)" -> {"(CreateVolume, 1)";}
"(NewServer, 2)" -> {"(NewServer, 1)";}
"(Destroy, 2)" -> {"(checkBricksCanBeDestroyed, 3)";}
"(StreamContainerIO, 3)" -> {}
"(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
"(startNode, 3)" -> {"(StartNode, 2)";}
"(Validate, 2)" -> {}
"(New, 9)" -> {"(CopyConsole, 7)";}
"(RunKubelet, 4)" -> {"(startKubelet, 5)";"(CreateAndInitKubelet, 31)";}
"(newServerTransport, 4)" -> {}
"(UpdateVol, 1)" -> {"(Put, 3)";}
"(NewTCPSocket, 2)" -> {"(NewListener, 2)";}
"(copyPipes, 7)" -> {}
"(serveStatus, 2)" -> {}
"(initOAuthAuthorizationServerMetadataRoute, 2)" -> {"(Write, 1)";}
"(stop, 1)" -> {}
"(NewServer, 1)" -> {"(restartNode, 2)";"(restartAsStandaloneNode, 2)";"(openBackend, 1)";"(NewServerTLS, 2)";"(startNode, 3)";}
"(setupProcessPipes, 3)" -> {}
"(Dial, 2)" -> {"(dial, 3)";"(Dial, 3)";}
"(tlsDial, 3)" -> {"(tlsDialWithDialer, 4)";}
"(post, 1)" -> {"(RoundTrip, 1)";}
"(newWatchBroadcasts, 1)" -> {}
"(Register, 1)" -> {"(Register, 2)";"(Validate, 1)";"(Put, 3)";}
"(ListImages, 1)" -> {"(ListImages, 2)";}
"(load, 1)" -> {"(parsePackageFiles, 2)";}
"(Init, 4)" -> {"(ListenPipe, 2)";"(NewTCPSocket, 2)";}
"(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
"(allPackages, 3)" -> {}
"(startThinPoolWatcher, 1)" -> {}
"(shutdownDaemon, 1)" -> {}
"(handleControlSocketChange, 2)" -> {}
"(executeExecNewPod, 4)" -> {}
"(Do, 2)" -> {"(RoundTrip, 1)";"(Delete, 2)";"(Put, 3)";}
"(Verifier, 1)" -> {"(newAsyncIDTokenVerifier, 3)";}
"(run, 1)" -> {"(setupIO, 6)";"(forward, 3)";"(schedule, 1)";"(addNode, 1)";"(commitContainer, 3)";}
"(run, 3)" -> {"(RunKubelet, 4)";"(UpdateTransport, 4)";}
"(update, 1)" -> {"(UpdatePod, 2)";}
"(Put, 2)" -> {"(Save, 3)";}
"(pullImageWithReference, 6)" -> {}
"(Upload, 3)" -> {"(update, 1)";}
"(ListenPipe, 2)" -> {}
"(performCopy, 2)" -> {"(exportImage, 3)";}
"(Delete, 3)" -> {"(Get, 3)";"(Remove, 3)";"(Do, 2)";"(Validate, 1)";"(run, 1)";"(Set, 2)";"(update, 3)";"(Delete, 2)";"(Get, 2)";"(Get, 5)";"(Update, 1)";}
"(Has, 1)" -> {"(Get, 1)";"(Stat, 1)";"(Contains, 1)";}
"(New, 3)" -> {"(newLocalFakeStorage, 2)";}
"(Read, 2)" -> {"(Copy, 4)";}
"(CreateControllerContext, 4)" -> {}
"(Execute, 3)" -> {}
"(Time, 1)" -> {"(Format, 1)";}
"(index, 1)" -> {"(update, 1)";}
"(exportImage, 3)" -> {"(Commit, 1)";}
"(Diff, 2)" -> {"(TarWithOptions, 2)";}
"(Delete, 1)" -> {"(Do, 2)";"(Delete, 2)";"(Set, 2)";"(Get, 2)";"(Delete, 3)";}
"(CopyConsole, 7)" -> {}
"(NewDynamicResourceConsumer, 12)" -> {"(newResourceConsumer, 18)";}
"(setUp, 1)" -> {"(NewUnsecuredEtcd3TestClientServer, 1)";}
"(Detach, 2)" -> {"(Delete, 2)";"(DetachDisk, 3)";"(UpdateVol, 1)";}
"(Encode, 1)" -> {"(Write, 1)";"(Put, 1)";}
"(replace, 2)" -> {"(Execute, 2)";"(index, 1)";}
"(Apply, 3)" -> {"(Apply, 2)";}
"(Launch, 1)" -> {}
"(Stat, 2)" -> {"(Do, 3)";"(Get, 3)";}
"(start, 2)" -> {}
"(Create, 2)" -> {"(Apply, 3)";"(initInformers, 1)";"(Check, 1)";"(CreateVolume, 1)";"(New, 9)";"(Create, 3)";}
"(Register, 3)" -> {"(startThinPoolWatcher, 1)";"(startZfsWatcher, 1)";}
"(List, 1)" -> {"(Set, 2)";"(List, 5)";"(List, 2)";"(Do, 2)";"(Validate, 1)";}
"(Set, 2)" -> {"(Do, 3)";"(Put, 3)";}
"(dial, 2)" -> {"(NewClientConn, 3)";"(NewServerConn, 2)";}
"(forward, 3)" -> {}
"(Insert, 1)" -> {"(Delete, 1)";}
"(createDestroyConcurrently, 4)" -> {}
"(addConnIfNeeded, 3)" -> {}
"(Commit, 1)" -> {"(Register, 3)";}
"(Capture, 2)" -> {}
"(Update, 3)" -> {"(Accept, 2)";}
"(handleHttps, 2)" -> {}
"(Schedule, 2)" -> {"(PrioritizeNodes, 6)";}
"(create, 1)" -> {"(Create, 3)";}
"(NewServerConn, 2)" -> {"(serverHandshake, 1)";}
"(AddPod, 1)" -> {"(UpdatePod, 1)";}
"(BuildHandlerChain, 5)" -> {}
"(Get, 3)" -> {"(Get, 5)";"(Do, 3)";"(RoundTrip, 1)";}
"(mount, 2)" -> {"(attachScale, 4)";}
"(New, 1)" -> {"(newAuthenticator, 2)";"(get, 1)";"(Run, 2)";"(newOOMCollector, 1)";"(newClient, 1)";"(addReadinessCheckRoute, 3)";"(List, 4)";"(Get, 0)";"(Read, 2)";"(NewBroker, 3)";"(New, 5)";"(Update, 1)";"(Parse, 1)";"(Get, 1)";"(Register, 1)";"(Remove, 1)";"(NewClient, 1)";"(Install, 1)";"(Start, 1)";"(Run, 1)";"(cleanup, 1)";"(New, 2)";"(initOAuthAuthorizationServerMetadataRoute, 2)";"(New, 3)";"(Verifier, 1)";"(Ping, 1)";"(Handle, 2)";}
"(configureKubeConfigForClientCertRotation, 2)" -> {"(RefreshCertificateAfterExpiry, 4)";}
"(RefreshCertificateAfterExpiry, 4)" -> {}
"(New, 5)" -> {"(configureKubeConfigForClientCertRotation, 2)";}
"(Handle, 2)" -> {"(implements, 1)";}
"(pull, 4)" -> {}
"(restartAsStandaloneNode, 2)" -> {"(RestartNode, 1)";}
"(Apply, 1)" -> {"(Apply, 3)";}
"(pullImage, 1)" -> {"(PullImage, 7)";}
"(clientHandshake, 2)" -> {"(newClientTransport, 6)";}
"(simulate, 3)" -> {"(AddPod, 1)";}
"(scaleUp, 5)" -> {"(run, 4)";}
"(Run, 2)" -> {"(Write, 1)";"(Create, 2)";"(implements, 1)";"(run, 1)";"(Apply, 3)";"(create, 1)";"(List, 2)";"(Prepare, 1)";"(Create, 3)";"(Build, 0)";"(Start, 2)";"(Save, 3)";"(StartUpdater, 2)";"(prune, 1)";"(FromURL, 2)";"(RunShort, 2)";"(Start, 1)";"(Execute, 3)";"(Register, 2)";}
"(setupIO, 6)" -> {"(setupProcessPipes, 3)";"(recvtty, 2)";}
"(Prepare, 1)" -> {"(Upgrade, 7)";"(Pull, 8)";"(pullImage, 1)";}
"(withAggregator, 4)" -> {"(createAggregatorServer, 3)";}
"(awaitOpenSlotForRequest, 1)" -> {}
"(New, 2)" -> {"(NewSession, 2)";"(NewServer, 2)";"(Init, 1)";"(Put, 2)";"(loadPlugins, 1)";"(Update, 1)";"(Start, 1)";"(Register, 1)";"(load, 1)";"(Dial, 2)";"(Apply, 1)";}
"(initInformers, 1)" -> {}
"(Remove, 1)" -> {"(Delete, 2)";"(index, 1)";}
"(RoundTrip, 1)" -> {"(DetailedRoundTrip, 1)";"(awaitOpenSlotForRequest, 1)";}
"(NewBroker, 3)" -> {}
"(Build, 1)" -> {"(ForEachPackage, 2)";"(NewKubernetesSource, 1)";}
"(get, 1)" -> {"(Get, 2)";"(Do, 1)";"(load, 1)";"(SetTransportDefaults, 1)";"(Read, 2)";"(Upload, 3)";}
"(waitUntilFreshAndBlock, 2)" -> {}
"(newResourceConsumer, 18)" -> {}
"(newMaster, 1)" -> {"(setUp, 1)";}
"(Save, 3)" -> {"(snapshotPath, 4)";}
"(StartNode, 2)" -> {}
"(serverHandshake, 1)" -> {"(newServerTransport, 4)";}
"(UpdatePod, 2)" -> {"(updatePod, 2)";}
"(NewMainKubelet, 31)" -> {}
"(callSSA, 6)" -> {"(runFrame, 1)";}
"(visitInstr, 2)" -> {}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 07e45cb8ec631f158f6c7c29330a9cdeae9f4a29
|
test
|
hjimmy tg origin source pkg quota image imagestreamimport evaluator test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to is at line may start a goroutine click here to show the line s of go which triggered the analyzer go for is range tc iss isinformer informer getindexer add is click here to show extra information the analyzer produced the following graphviz dot graph describes paths through the callgraph that could lead to a function calling a goroutine digraph g implements build newarchivefromguest restartnode restartnode newtimeoutlistener wraptls newarchivetoguest newsession list waituntilfreshandlist tlsdialwithdialer attachscale scaleup upgrade pull create snapshot create add post newwatchbroadcast newwatchbroadcasts cleanup performcopy put update get new open do accept implements get stat handle validate set geturl put newdaemon exec run createaggregatorserver foreachpackage allpackages pullimage pullimagewithreference servehttp roundtrip removemember serveupgrade validate check handlehttps servestatus tryupgrade register addendpoint put update write create put set createvolume pull pull apply diff restartnode interpret call remove do update watch detachdisk accept run check preparestatement prepare callremotebalancer newservertls newserver newgroup dial newclientconn insert update nodeshouldrundaemonpod simulate destroybricks createdestroyconcurrently removemember terminate newclient newclient ping get replacebrickinvolume createbricks destroybricks destroy terminate update update add add add new get get put replace copy write wait capture create destroy validate createbricks destroybricks checkbrickscanbedestroyed watch put watch format set do dial tlsdial run newdynamicresourceconsumer interpret newclient dial 
newlocalfakestorage newserver parse init open newarchivefromguest newarchivetoguest dispatch servehttp newasyncidtokenverifier get waituntilfreshandget createbricks createdestroyconcurrently put commit delete delete detach encode rundelete inspect remove delete newwatchbroadcast update build build waituntilfreshandget waituntilfreshandblock getconn hash write validate validate rundelete delete create create create apply update open open new exec preparestatement get get new get set geturl put do search get parse set listimages create execute list dispatch delete delete delete update remove commit register list accept newleaseupdater startkubelet wraptls newtlslistener do run register write create streamcontainerio run update startcontrollers createcontrollercontext newsshtunnellist new buildhandlerchain start withaggregator run push set dorequest updatetransport updatetransport diff diff contains index runframe visitinstr add open time watch appendchild get delete add parse search stat delete copy push insert insert new add has add hash newsshtunnellist start copyconsole copypipes newdaemon shutdowndaemon handlecontrolsocketchange init startupdater newleaseupdater snapshot delete accept stop listimages list run run geturl roundtrip updatetransport newclienttransport updatepod execute executeexecnewpod addnode nodeshouldrundaemonpod commitcontainer commit copy copy open create get start initializeandstartheartbeat callremotebalancer newserver appendchild write fromurl dial get get get do get addreadinesscheckroute write update build update snapshot snapshot newlistener newtimeoutlistener newtlslistener wait list validate configuretransport configuretransport removemember removemember loadplugins init newclientconn clienthandshake launch parse handle build set parse execute validate run addendpoint initializeandstartheartbeat detailedroundtrip getconn watch run watch new parsefiles newauthenticator settransportdefaults newgroup newserver startcontrollers recvtty 
consolefromfile newserver remove replacebrickinvolume prune update tarwithoptions addpod addpod waituntilfreshandlist waituntilfreshandblock copy execute schedule schedule updatepod addpod addpod cleanup update start start newserver dial read start copypipes start stat stat get new tryupgrade parsepackagefiles parsefiles consolefromfile newmaster openbackend do prepare init init startzfswatcher copy servehttp write newoomcollector handle delete dorequest do snapshotpath newkubernetessource cleanup delete inspect get prioritizenodes runshort mount call callssa add validate remove index do add install delete createandinitkubelet newmainkubelet accept serveupgrade prepare createvolume newserver newserver destroy checkbrickscanbedestroyed streamcontainerio settransportdefaults configuretransport startnode startnode validate new copyconsole runkubelet startkubelet createandinitkubelet newservertransport updatevol put newtcpsocket newlistener copypipes servestatus initoauthauthorizationservermetadataroute write stop newserver restartnode restartasstandalonenode openbackend newservertls startnode setupprocesspipes dial dial dial tlsdial tlsdialwithdialer post roundtrip newwatchbroadcasts register register validate put listimages listimages load parsepackagefiles init listenpipe newtcpsocket configuretransport addconnifneeded allpackages startthinpoolwatcher shutdowndaemon handlecontrolsocketchange executeexecnewpod do roundtrip delete put verifier newasyncidtokenverifier run setupio forward schedule addnode commitcontainer run runkubelet updatetransport update updatepod put save pullimagewithreference upload update listenpipe performcopy exportimage delete get remove do validate run set update delete get get update has get stat contains new newlocalfakestorage read copy createcontrollercontext execute time format index update exportimage commit diff tarwithoptions delete do delete set get delete copyconsole newdynamicresourceconsumer newresourceconsumer setup detach 
delete detachdisk updatevol encode write put replace execute index apply apply launch stat do get start create apply initinformers check createvolume new create register startthinpoolwatcher startzfswatcher list set list list do validate set do put dial newclientconn newserverconn forward insert delete createdestroyconcurrently addconnifneeded commit register capture update accept handlehttps schedule prioritizenodes create create newserverconn serverhandshake addpod updatepod buildhandlerchain get get do roundtrip mount attachscale new newauthenticator get run newoomcollector newclient addreadinesscheckroute list get read newbroker new update parse get register remove newclient install start run cleanup new initoauthauthorizationservermetadataroute new verifier ping handle configurekubeconfigforclientcertrotation refreshcertificateafterexpiry refreshcertificateafterexpiry new configurekubeconfigforclientcertrotation handle implements pull restartasstandalonenode restartnode apply apply pullimage pullimage clienthandshake newclienttransport simulate addpod scaleup run run write create implements run apply create list prepare create build start save startupdater prune fromurl runshort start execute register setupio setupprocesspipes recvtty prepare upgrade pull pullimage withaggregator createaggregatorserver awaitopenslotforrequest new newsession newserver init put loadplugins update start register load dial apply initinformers remove delete index roundtrip detailedroundtrip awaitopenslotforrequest newbroker build foreachpackage newkubernetessource get get do load settransportdefaults read upload waituntilfreshandblock newresourceconsumer newmaster setup save snapshotpath startnode serverhandshake newservertransport updatepod updatepod newmainkubelet callssa runframe visitinstr leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more 
information commit id
| 1
|
442,461
| 30,836,741,262
|
IssuesEvent
|
2023-08-02 07:55:54
|
SRGSSR/pillarbox-documentation
|
https://api.github.com/repos/SRGSSR/pillarbox-documentation
|
closed
|
Adjust support policies
|
documentation
|
**As a developer integrating Pillarbox and Letterbox** I want to clearly understand the respective support policies so that I can better plan migration from Letterbox to Pillarbox.
# Acceptance criteria
- The support policies are documented for all platforms.
# Tasks
- [x] The support policy for Apple platforms is documented.
- [x] The support policy for the Android platform is documented.
- [x] The support policy for the web is documented.
|
1.0
|
Adjust support policies - **As a developer integrating Pillarbox and Letterbox** I want to clearly understand the respective support policies so that I can better plan migration from Letterbox to Pillarbox.
# Acceptance criteria
- The support policies are documented for all platforms.
# Tasks
- [x] The support policy for Apple platforms is documented.
- [x] The support policy for the Android platform is documented.
- [x] The support policy for the web is documented.
|
non_test
|
adjust support policies as a developer integrating pillarbox and letterbox i want to clearly understand the respective support policies so that i can better plan migration from letterbox to pillarbox acceptance criteria the support policies are documented for all platforms tasks the support policy for apple platforms is documented the support policy for the android platform is documented the support policy for the web is documented
| 0
|
267,771
| 23,318,869,473
|
IssuesEvent
|
2022-08-08 14:42:19
|
gravitational/teleport
|
https://api.github.com/repos/gravitational/teleport
|
opened
|
`TestEC2IsInstanceMetadataAvailable` flakiness
|
flaky tests
|
## Failure
#### Link(s) to logs
- https://console.cloud.google.com/cloud-build/builds/52d61a26-7ba6-4982-a10d-4ca10841ff2a;step=0?project=ci-account
#### Relevant snippet
```
=== CONT TestEC2IsInstanceMetadataAvailable/response_with_new_id_format
ec2_test.go:156:
Error Trace: ec2_test.go:156
Error: Should be true
Test: TestEC2IsInstanceMetadataAvailable/response_with_new_id_format
--- FAIL: TestEC2IsInstanceMetadataAvailable/response_with_new_id_format (0.35s)
```
|
1.0
|
`TestEC2IsInstanceMetadataAvailable` flakiness - ## Failure
#### Link(s) to logs
- https://console.cloud.google.com/cloud-build/builds/52d61a26-7ba6-4982-a10d-4ca10841ff2a;step=0?project=ci-account
#### Relevant snippet
```
=== CONT TestEC2IsInstanceMetadataAvailable/response_with_new_id_format
ec2_test.go:156:
Error Trace: ec2_test.go:156
Error: Should be true
Test: TestEC2IsInstanceMetadataAvailable/response_with_new_id_format
--- FAIL: TestEC2IsInstanceMetadataAvailable/response_with_new_id_format (0.35s)
```
|
test
|
flakiness failure link s to logs relevant snippet cont response with new id format test go error trace test go error should be true test response with new id format fail response with new id format
| 1
|
140,659
| 5,413,975,200
|
IssuesEvent
|
2017-03-01 17:57:54
|
zulip/zulipbot
|
https://api.github.com/repos/zulip/zulipbot
|
closed
|
Adding area label support to pull requests
|
in progress priority
|
Mention the corresponding area label team and label PR with area label when PR mentions another issue.
- [ ] Check body for newly created pull request
- [x] Check body for pull request (issue) comment
- [x] Check body for pull request review
- [x] Check body for pull request review comment
|
1.0
|
Adding area label support to pull requests - Mention the corresponding area label team and label PR with area label when PR mentions another issue.
- [ ] Check body for newly created pull request
- [x] Check body for pull request (issue) comment
- [x] Check body for pull request review
- [x] Check body for pull request review comment
|
non_test
|
adding area label support to pull requests mention the corresponding area label team and label pr with area label when pr mentions another issue check body for newly created pull request check body for pull request issue comment check body for pull request review check body for pull request review comment
| 0
|
18,700
| 10,276,625,090
|
IssuesEvent
|
2019-08-24 19:04:55
|
Watemlifts/JCSprout
|
https://api.github.com/repos/Watemlifts/JCSprout
|
opened
|
CVE-2018-7489 (High) detected in jackson-databind-2.8.9.jar
|
security vulnerability
|
## CVE-2018-7489 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /JCSprout/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.9/jackson-databind-2.8.9.jar</p>
<p>
Dependency Hierarchy:
- kafka_2.11-2.3.0.jar (Root Library)
- :x: **jackson-databind-2.8.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/JCSprout/commit/098720ba3a1c36a18362a2b9fb1b5ac47ca77a34">098720ba3a1c36a18362a2b9fb1b5ac47ca77a34</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind before 2.7.9.3, 2.8.x before 2.8.11.1 and 2.9.x before 2.9.5 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the c3p0 libraries are available in the classpath.
<p>Publish Date: 2018-02-26
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-7489>CVE-2018-7489</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-7489">https://nvd.nist.gov/vuln/detail/CVE-2018-7489</a></p>
<p>Release Date: 2018-02-26</p>
<p>Fix Resolution: 2.8.11.1,2.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-7489 (High) detected in jackson-databind-2.8.9.jar - ## CVE-2018-7489 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /JCSprout/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.9/jackson-databind-2.8.9.jar</p>
<p>
Dependency Hierarchy:
- kafka_2.11-2.3.0.jar (Root Library)
- :x: **jackson-databind-2.8.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/JCSprout/commit/098720ba3a1c36a18362a2b9fb1b5ac47ca77a34">098720ba3a1c36a18362a2b9fb1b5ac47ca77a34</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind before 2.7.9.3, 2.8.x before 2.8.11.1 and 2.9.x before 2.9.5 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the c3p0 libraries are available in the classpath.
<p>Publish Date: 2018-02-26
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-7489>CVE-2018-7489</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-7489">https://nvd.nist.gov/vuln/detail/CVE-2018-7489</a></p>
<p>Release Date: 2018-02-26</p>
<p>Fix Resolution: 2.8.11.1,2.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jcsprout pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy kafka jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind before x before and x before allows unauthenticated remote code execution because of an incomplete fix for the cve deserialization flaw this is exploitable by sending maliciously crafted json input to the readvalue method of the objectmapper bypassing a blacklist that is ineffective if the libraries are available in the classpath publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
106,225
| 23,196,537,243
|
IssuesEvent
|
2022-08-01 16:56:20
|
albermax/innvestigate
|
https://api.github.com/repos/albermax/innvestigate
|
closed
|
Submitting Keras Issue
|
bug code
|
There is most likely a bug in Keras. "run_internal_graph" of the model class does not handle masks correctly for layers with multiple outputs and no mask support. Open an issue and report it / make a pull request.
At the moment we have a workaround but it would be more principled to fix Keras.
|
1.0
|
Submitting Keras Issue - There is most likely a bug in Keras. "run_internal_graph" of the model class does not handle masks correctly for layers with multiple outputs and no mask support. Open an issue and report it / make a pull request.
At the moment we have a workaround but it would be more principled to fix Keras.
|
non_test
|
submitting keras issue there is most likely a bug in keras run internal graph of the model class does not handle masks correctly for layers with multiple outputs and no mask support open an issue and report it make a pull request at the moment we have a workaround but it would be more principled to fix keras
| 0
|
304,409
| 9,331,897,069
|
IssuesEvent
|
2019-03-28 10:49:27
|
mozilla/addons-code-manager
|
https://api.github.com/repos/mozilla/addons-code-manager
|
closed
|
Linter custom API code should use `REACT_APP_API_HOST`
|
priority: p3
|
We have some custom API code for fetching the linter messages, mainly because we are not relying on the addons-server API _per se_. In #460, we did not update this code though, so let's do it!
https://github.com/mozilla/addons-code-manager/blob/c6cad11653b45538292c5d2fd5b65033adb65b11/src/reducers/linter.tsx#L161-L163
|
1.0
|
Linter custom API code should use `REACT_APP_API_HOST` - We have some custom API code for fetching the linter messages, mainly because we are not relying on the addons-server API _per se_. In #460, we did not update this code though, so let's do it!
https://github.com/mozilla/addons-code-manager/blob/c6cad11653b45538292c5d2fd5b65033adb65b11/src/reducers/linter.tsx#L161-L163
|
non_test
|
linter custom api code should use react app api host we have some custom api code for fetching the linter messages mainly because we are not relying on the addons server api per se in we did not update this code though so let s do it
| 0
|
73,975
| 7,371,373,829
|
IssuesEvent
|
2018-03-13 11:32:01
|
LiskHQ/lisk
|
https://api.github.com/repos/LiskHQ/lisk
|
closed
|
Unstable integration tests
|
failing test
|
### Expected behavior
Integration tests should run reliably.
### Actual behavior
Integration tests are failing intermittently with the following error:
```
> lisk@0.9.8 test /home/lisk/workspace/sk-core-integration_PR-1611-TXGXFC5BOTEJ67VOBJHP4URYSVZH7GCXNX7YWKJKWOYJUNPCRYXA
> grunt "mocha:untagged:integration"
Running "mocha:untagged:integration" (mocha) task
Running "exec:mocha:untagged:integration" (exec) task
[0m[0m
[0m given configurations for 10 nodes with address "127.0.0.1", WS ports 500[0-9] and HTTP ports 400[0-9] using separate databases[0m
[0m when every peers contains the others on the peers list[0m
[0m when every peer forges with separate subset of genesis delegates and forging.force = false[0m
[0m when network is set up[0m
[log] 2018-02-27 14:42:55 | Generating PM2 configuration
[log] 2018-02-27 14:42:55 | Recreating databases
[log] 2018-02-27 14:42:56 | Clearing existing logs
[log] 2018-02-27 14:42:56 | Launching network
[log] 2018-02-27 14:42:58 | Waiting for nodes to load the blockchain
[log] 2018-02-27 14:43:21 | Enabling forging with registered delegates
[log] 2018-02-27 14:43:21 | Waiting 20 seconds for nodes to establish connections
[0m when WS connections to all nodes all established[0m
[0m Peers[0m
[0m mutual connections[0m
[31m 1) should return a list of peers mutually interconnected[0m
>> Test failed: should return a list of peers mutually interconnected
PM2 process killed gracefully
[31m 2) "after all" hook[0m
[92m [0m[32m 0 passing[0m[90m (47s)[0m
[31m 2 failing[0m
[0m 1) given configurations for 10 nodes with address "127.0.0.1", WS ports 500[0-9] and HTTP ports 400[0-9] using separate databases
when every peers contains the others on the peers list
when every peer forges with separate subset of genesis delegates and forging.force = false
when network is set up
when WS connections to all nodes all established
Peers
mutual connections
should return a list of peers mutually interconnected:
[0m[31m AssertionError: expected [] not to be empty[0m[90m
at results.forEach.result (test/integration/scenarios/network/peers.js:41:18)
at Array.forEach (native)
at Promise.all.then.results (test/integration/scenarios/network/peers.js:29:14)
at tryCatcher (node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
at Promise._fulfill (node_modules/bluebird/js/release/promise.js:638:18)
at PromiseArray._resolve (node_modules/bluebird/js/release/promise_array.js:126:19)
at PromiseArray._promiseFulfilled (node_modules/bluebird/js/release/promise_array.js:144:14)
at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:574:26)
at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
at Async._drainQueue (node_modules/bluebird/js/release/async.js:133:16)
at Async._drainQueues (node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues (node_modules/bluebird/js/release/async.js:17:14)
[0m
[0m 2) given configurations for 10 nodes with address "127.0.0.1", WS ports 500[0-9] and HTTP ports 400[0-9] using separate databases
when every peers contains the others on the peers list
when every peer forges with separate subset of genesis delegates and forging.force = false
when network is set up
"after all" hook:
[0m[31m AssertionError: expected [] not to be empty[0m[90m
at results.forEach.result (test/integration/scenarios/network/peers.js:41:18)
at Array.forEach (native)
at Promise.all.then.results (test/integration/scenarios/network/peers.js:29:14)
at tryCatcher (node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
at Promise._fulfill (node_modules/bluebird/js/release/promise.js:638:18)
at PromiseArray._resolve (node_modules/bluebird/js/release/promise_array.js:126:19)
at PromiseArray._promiseFulfilled (node_modules/bluebird/js/release/promise_array.js:144:14)
at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:574:26)
at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
at Async._drainQueue (node_modules/bluebird/js/release/async.js:133:16)
at Async._drainQueues (node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues (node_modules/bluebird/js/release/async.js:17:14)
[0m
>> Exited with code: 2.
>> Error executing child process: Error: Process exited with code 2.
Warning: Task "exec:mocha:untagged:integration" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
```
### Steps to reproduce
Run the integration tests on the `1.0.0` branch.
### Which version(s) does this affect? (Environment, OS, etc...)
Version `1.0.0`.
|
1.0
|
|
test
|
| 1
|
114,199
| 9,692,753,303
|
IssuesEvent
|
2019-05-24 14:30:09
|
italia/spid
|
https://api.github.com/repos/italia/spid
|
closed
|
Metadata check - Comune di Gravellona Toce
|
metadata nuovo md test
|
Good morning,
On behalf of the Comune di Gravellona Toce, we request verification of the metadata published at the following URL:
https://www.comune.gravellonatoce.vb.it/metadataSPID.xml
Thank you and best regards,
Federico Albesano
|
1.0
|
|
test
|
| 1
|
149,809
| 11,924,585,420
|
IssuesEvent
|
2020-04-01 09:46:49
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: jepsen/multi-register/strobe-skews failed
|
C-test-failure O-roachtest O-robot branch-release-19.2 release-blocker
|
[(roachtest).jepsen/multi-register/strobe-skews failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1841499&tab=buildLog) on [release-19.2@63fcb9fa24525c41bc36f1d5dd873828c99a96fe](https://github.com/cockroachdb/cockroach/commits/63fcb9fa24525c41bc36f1d5dd873828c99a96fe):
```
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/jepsen.go:180
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
- error with embedded safe details: %s returned:
stderr:
%s
stdout:
%s
-- arg 1: <string>
-- arg 2: <string>
-- arg 3: <string>
- /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1841499-1585723639-47-n6cpu4:6 -- bash -e -c "\
cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
~/lein run test \
--tarball file://${PWD}/cockroach.tgz \
--username ${USER} \
--ssh-private-key ~/.ssh/id_rsa \
--os ubuntu \
--time-limit 300 \
--concurrency 30 \
--recovery-time 25 \
--test-count 1 \
-n 10.128.1.13 -n 10.128.1.107 -n 10.128.1.106 -n 10.128.1.15 -n 10.128.0.43 \
--test multi-register --nemesis strobe-skews \
> invoke.log 2>&1 \
" returned:
stderr:
Error: DEAD_ROACH_PROBLEM:
- error with user detail: Node 6. Command with error:
```
bash -e -c "\
cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
~/lein run test \
--tarball file://${PWD}/cockroach.tgz \
--username ${USER} \
--ssh-private-key ~/.ssh/id_rsa \
--os ubuntu \
--time-limit 300 \
--concurrency 30 \
--recovery-time 25 \
--test-count 1 \
-n 10.128.1.13 -n 10.128.1.107 -n 10.128.1.106 -n 10.128.1.15 -n 10.128.0.43 \
--test multi-register --nemesis strobe-skews \
> invoke.log 2>&1 \
"
```
- exit status 1
stdout::
- exit status 30
```
<details><summary>More</summary><p>
Artifacts: [/jepsen/multi-register/strobe-skews](https://teamcity.cockroachdb.com/viewLog.html?buildId=1841499&tab=artifacts#/jepsen/multi-register/strobe-skews)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ajepsen%2Fmulti-register%2Fstrobe-skews.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
|
test
|
| 1
|
156,981
| 12,342,304,545
|
IssuesEvent
|
2020-05-15 00:20:21
|
apache/lucenenet
|
https://api.github.com/repos/apache/lucenenet
|
opened
|
Fix Random Seed Functionality in TestFramework
|
Lucene.Net.TestFramework
|
The random seed that is set to `StringHelper.GOOD_FAST_HASH_SEED` through the `tests.seed` system property in Java currently is the same as the seed that NUnit uses in `NUnit.Framework.TestContext.CurrentContext.Random`. If we continue using NUnit's `Random` property, more research is needed to figure out how to set that seed the way the NUnit designers intended so it can be documented for end users.
During testing, I noticed that NUnit does set the seed to a consistent pseudo-random value, so the same test can be run over and over with exactly the same random setup. However, the seed is re-generated when the code in the project under test is changed. I noted that NUnit writes the seed to a file named `nunit_random_seed.tmp` in the `bin` folder of the test project when run in Visual Studio.
### A Typical Testing Scenario in Lucene
1. Test fails
2. Test failure generates an error message that includes the random seed as a hexadecimal string (that represents a long)
3. Developer sets the `tests.seed` system property to the same hexadecimal string that caused the test to fail
4. Debugging can take place because the pseudo-randomly generated conditions are exactly the same as they were when the test failed
5. A fix is devised
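The seed round-trip in steps 2–4 (seed → hex string in the failure message → seed again) can be sketched in Python. This is an illustrative sketch only; the helper names `seed_to_hex` and `hex_to_seed` are hypothetical and not part of Lucene's or Lucene.NET's actual API:

```python
import random

def seed_to_hex(seed: int) -> str:
    # Format a 64-bit seed the way a failure message might report it (upper-case hex).
    return format(seed & 0xFFFFFFFFFFFFFFFF, "X")

def hex_to_seed(hex_str: str) -> int:
    # Parse the reported hex string back into an integer seed (step 3).
    return int(hex_str, 16)

# Round-trip: the seed printed on failure can be fed back in unchanged.
seed = 0xDEADBEEFCAFE
assert hex_to_seed(seed_to_hex(seed)) == seed

# Seeding two generators with the same value reproduces the same
# pseudo-random conditions, which is what makes step 4 possible.
rng_a = random.Random(hex_to_seed("DEADBEEFCAFE"))
rng_b = random.Random(hex_to_seed("DEADBEEFCAFE"))
assert [rng_a.random() for _ in range(3)] == [rng_b.random() for _ in range(3)]
```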
### Testing in .NET
All we have been able to do is get the same test to fail multiple times, but that breaks as soon as any code is changed. We could probably revert back if we save a copy of the `nunit_random_seed.tmp`, but this is very complicated to do in addition to debugging, and not very intuitive.
We are missing:
1. The test framework reporting the seed that it was using when the test failed
2. The ability to set the random seed
I suspect that although setting the seed cannot be done publicly, we can hack it by using .NET Reflection to set an internal field. There was some [discussion about setting it through an attribute](https://github.com/nunit/nunit/issues/1461), but it doesn't sound like that has been implemented.
I haven't looked into what it will take to extend NUnit to attach the random seed to the test message that is reported. One option (but not a very good one) would be to change the `Lucene.Net.TestFramework.Assert` class to add the random seed to all of test messages. The main thing we need is to read the seed value. The only place it seems to be exposed is in the `bin/nunit_random_seed.tmp` file, but it would be preferable to read it through a property within the test session.
### Testing Lucene.NET Itself
For end users, the above would solve all of the issues. However, for Lucene.NET we often need to generate the same conditions as Java to determine where the execution paths diverge. This is a problem because the NUnit Random class (probably) isn't the same implementation as the Java Random class. Also, in Java the random seed is a `long`, but in .NET, it is an `int`.
We have ported the Java Random class in `J2N`, named [Randomizer](https://github.com/NightOwl888/J2N/blob/master/src/J2N/Randomizer.cs). Ideally, we would use this implementation during testing. However, NUnit doesn't seem to have a way to inject a custom implementation of `Random` nor does it have a way to read the seed it uses to seed a new instance of `Randomizer` for the test framework or a way to override that setting manually during debugging.
### Ideal Solution
1. By default, the test framework uses a seed generated by the same hook that NUnit uses to generate its seed. Alternatively, we could use `System.DateTime.UtcNow.Ticks`.
2. The seed generated would be a `long`.
3. The generated seed would be used to seed the `J2N.Randomizer` class, which the getter of the `LuceneTestFramework.Random` property would provide and cache.
4. The test framework would ensure the seed is always output along with failure test messages, as a hexadecimal string.
5. The "System Properties" feature allows `tests:seed` to be set to a hexadecimal string, which if set would override the auto-generated random seed used in step 3.
Note that the ideal solution doesn't necessarily involve NUnit in the equation. With Java's Random class ported and "System Properties" solved in a way that doesn't involve hacking the system's environment variables when debugging, we are much closer to fixing this to make it compatible with Java Lucene so we can test apples to apples.
The main thing we are missing is writing the random seed out into the test message with hexadecimal formatting (the same string that is used in Lucene). Other than that, setting up the system property to override the automatically generated seed should be fairly straightforward.
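The reporting half of the ideal solution — always attaching the hex-formatted seed to failure messages (point 4 above) — can be sketched in Python. `assert_with_seed` is a hypothetical helper for illustration, not an existing NUnit or Lucene.NET API:

```python
import random

SEED = 0x1A2B3C4D5E6F  # in practice, supplied by the framework or the tests.seed property
rng = random.Random(SEED)

def assert_with_seed(condition: bool, message: str) -> None:
    # Append the seed as a hexadecimal string to every failure message,
    # so a failing run can be reproduced by overriding the seed with it.
    if not condition:
        raise AssertionError(f"{message} (random seed: {SEED:X})")

try:
    assert_with_seed(rng.randint(0, 9) > 100, "value out of expected range")
except AssertionError as err:
    # The failure message now carries the reproduction seed.
    assert "random seed: 1A2B3C4D5E6F" in str(err)
```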
|
1.0
|
|
test
|
| 1
|
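The seed-handling scheme described in the Lucene.NET record above (a 64-bit seed, reported as a hexadecimal string in failure messages, with a system-property override hook) can be sketched in Python. This is a minimal illustration only: the `TESTS_SEED` variable name and the tick-based fallback are assumptions, not the framework's actual API.

```python
import os
import time

def resolve_seed() -> int:
    """Use an explicit override if one was supplied, else auto-generate."""
    override = os.environ.get("TESTS_SEED")  # hypothetical override hook
    if override is not None:
        return int(override, 16)  # seeds are exchanged as hex strings
    # 64-bit value, analogous to seeding from DateTime.UtcNow.Ticks
    return time.time_ns() & 0xFFFFFFFFFFFFFFFF

def format_seed(seed: int) -> str:
    """Render the seed the way a failure message would report it."""
    return format(seed, "X")

seed = resolve_seed()
print(f"NOTE: reproduce with seed {format_seed(seed)}")
```

Round-tripping the hex string back through `int(s, 16)` is what makes a failed run reproducible: the same value re-seeds the same random sequence.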
197,687
| 6,962,728,103
|
IssuesEvent
|
2017-12-08 14:51:52
|
SoylentNews/rehash
|
https://api.github.com/repos/SoylentNews/rehash
|
closed
|
Implement user/submissions page
|
Feature Request Priority: Low
|
_From @soytakyon on May 12, 2015 20:29_
http://soylentnews.org/~username/submissions
Feature request: Show a list of user submissions, and nothing else, on these pages.
_Copied from original issue: SoylentNews/slashcode#450_
|
1.0
|
Implement user/submissions page - _From @soytakyon on May 12, 2015 20:29_
http://soylentnews.org/~username/submissions
Feature request: Show a list of user submissions, and nothing else, on these pages.
_Copied from original issue: SoylentNews/slashcode#450_
|
non_test
|
implement user submissions page from soytakyon on may feature request show a list of user submissions and nothing else on these pages copied from original issue soylentnews slashcode
| 0
|
29,728
| 4,534,904,521
|
IssuesEvent
|
2016-09-08 15:47:25
|
rust-lang/rust
|
https://api.github.com/repos/rust-lang/rust
|
opened
|
Tracking Issue for Incr. Comp. Regression Testing
|
A-incr-comp A-testsuite metabug
|
Incremental compilation needs good regression testing, which can be split into the following components:
- End-to-end tests for specific common cases, like already present to some extent in [test/incremental](https://github.com/rust-lang/rust/tree/master/src/test/incremental). These will use the `#[rustc_clean]`/`#[rustc_dirty]` and `#![rustc_partition_reused]` infrastructure.
- Tests for the Incremental Compilation Hash (ICH) that we compute for input data (i.e. HIR and imported Metadata). I'll open individual issues for each of the below soon.
- [ ] Test ICH for structs (there'll be a PR for this soon, serving as a blueprint for this kind of test)
- [ ] Test ICH for enums
- [ ] Test ICH for traits
- [ ] Test ICH for functions
- [ ] Test ICH for impls/methods
- [ ] Test ICH for constants and statics
- [x] Test ICH for command line arguments
- Dependency graph oriented tests that make sure that certain edges are present or not present under certain circumstances. These will use the `#[rustc_if_this_changed]`/`#[rustc_then_this_would_need]` infrastructure.
- Git history based tests that will walk the history of some selected crates, building each revision incrementally and from scratch and then comparing the two for equality. These tests will be based on [cargo incremental](https://github.com/nikomatsakis/cargo-incremental)
In order to make it easy to contribute in this area, we'll also want to
- Compile links to the appropriate documentation of the test framework and some of the concepts and inner workings of incremental compilation
- Provide well-documented examples of what test cases in various areas should look like
- Expand the above list with as concrete as possible descriptions of individual test cases we are interested in, and open individual GH issues for them.
|
1.0
|
Tracking Issue for Incr. Comp. Regression Testing - Incremental compilation needs good regression testing, which can be split into the following components:
- End-to-end tests for specific common cases, like already present to some extent in [test/incremental](https://github.com/rust-lang/rust/tree/master/src/test/incremental). These will use the `#[rustc_clean]`/`#[rustc_dirty]` and `#![rustc_partition_reused]` infrastructure.
- Tests for the Incremental Compilation Hash (ICH) that we compute for input data (i.e. HIR and imported Metadata). I'll open individual issues for each of the below soon.
- [ ] Test ICH for structs (there'll be a PR for this soon, serving as a blueprint for this kind of test)
- [ ] Test ICH for enums
- [ ] Test ICH for traits
- [ ] Test ICH for functions
- [ ] Test ICH for impls/methods
- [ ] Test ICH for constants and statics
- [x] Test ICH for command line arguments
- Dependency graph oriented tests that make sure that certain edges are present or not present under certain circumstances. These will use the `#[rustc_if_this_changed]`/`#[rustc_then_this_would_need]` infrastructure.
- Git history based tests that will walk the history of some selected crates, building each revision incrementally and from scratch and then comparing the two for equality. These tests will be based on [cargo incremental](https://github.com/nikomatsakis/cargo-incremental)
In order to make it easy to contribute in this area, we'll also want to
- Compile links to the appropriate documentation of the test framework and some of the concepts and inner workings of incremental compilation
- Provide well-documented examples of what test cases in various areas should look like
- Expand the above list with as concrete as possible descriptions of individual test cases we are interested in, and open individual GH issues for them.
|
test
|
tracking issue for incr comp regression testing incremental compilation needs good regression testing which can be split into the following components end to end tests for specific common cases like already present to some extent in these will use the and infrastructure tests for the incremental compilation hash ich that we compute for input data i e hir and imported metadata i ll open individual issues for each of the below soon test ich for structs there ll be an pr for this soon serving as a blueprint for this kind of test test ich for enums test ich for traits test ich for functions test ich for impls methods test ich for constants and statics test ich for command line arguments dependency graph oriented tests that make sure that certain edges are present or not present under certain circumstances these will use the infrastructure git history based tests that will walk the history of some selected crates building each revision incrementally and from scratch and then comparing the two for equality these tests will be based on in order to make it easy to contribute in this area we ll also want to compile links to the appropriate documentation of the test framework and some of the concepts and inner workings of incremental compilation provide well documented examples of what test cases in various areas should look like expand the above list with as concrete as possible descriptions of individual test cases we are interested in and open individual gh issues for them
| 1
|
81,516
| 7,784,695,347
|
IssuesEvent
|
2018-06-06 13:58:33
|
italia/spid
|
https://api.github.com/repos/italia/spid
|
closed
|
Problem with testenv-docker certificates
|
Ambiente di Test
|
Hello, I am experimenting with spid-rails and I am having a problem with the local IdP (https://github.com/italia/spid-testenv-docker), but since I do not know the origin of the problem I am writing here.
I downloaded the demo app https://github.com/rubynetti/rubynetti-rails and the docker testenv, inside which I created a service provider named **ciccio.lvh.me:3000/spid/metadata** and a new self-signed SSL certificate.
The redirect to the test provider's login page works, but after entering the credentials I get an error, and in the docker logs I find this message
```
Signature validation failed for the SAML Message : Failed to construct the X509CredentialImpl for the alias iccio.lvh.me:3000/spid/metadata.crt
```
What I notice is that the domain name is wrong: its first character has been removed (iccio instead of ciccio).
I tried with another domain name and the same thing happens: rubynetti.lvh.me appeared in the logs as ubynetti.lvh.me.
Is there something wrong in the configuration?
I attach the service provider metadata below:
```
<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" ID="_6d0835f5-a686-4aed-ae06-64743765d31f" entityID="http://ciccio.lvh.me:3000/spid/metadata">
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
<ds:Reference URI="#_6d0835f5-a686-4aed-ae06-64743765d31f">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="#default samlp saml ds xs xsi md" />
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
<ds:DigestValue>Uf1viqWrfv8hNddtaqXpP1UC/Hf2DtF5Nyem9MW4Z0A=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>AKsPyWVTUvkO1QsOaylgyjFQuH4l53YLlXiyrBOIP1ejwOu2e72H3rleRthcfdLDOkfEGVf1eN3WPtSZ4x1dJv9MX/C5fePzvSDahHo3mWRk17ifrdxHCo3g7N05J49iNGg7PckzDuADBEzeM4tbVC+RLLYqH4N8z6peExvzDdYR6fMHeeZ8wc6a08jMrNw6KRBjh7mxYOgB7yAshESxCAOZtRwRcl41eN8EgNhuMEg8sYPPq11jFUcf2rGUhpnNFU9Z+59PbbnH2ntle1L0SGh2rI+B5/l8NhJxMj1i1yJKE79cXo+D7sN440tdANgQ5rmGp3BKyiJrSVwKiL6CCg==</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDsDCCApigAwIBAgIJALtqtaHJZcwPMA0GCSqGSIb3DQEBCwUAMG0xCzAJBgNVBAYTAklUMRAwDgYDVQQIDAdQaXN0b2lhMREwDwYDVQQHDAhCdWdnaWFubzEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMRYwFAYDVQQDDA1jaWNjaW8ubHZoLm1lMB4XDTE4MDYwNjA4MTUzMloXDTE5MDYwNjA4MTUzMlowbTELMAkGA1UEBhMCSVQxEDAOBgNVBAgMB1Bpc3RvaWExETAPBgNVBAcMCEJ1Z2dpYW5vMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxFjAUBgNVBAMMDWNpY2Npby5sdmgubWUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6kHy8Ai3u4Wd8byI2HOHtvIry66Bjp8YnuunGzb3yao2Y0ctl3TAV0AkE7gHdMqBE6JnItDfoAnXDr0rSDMTBaXQLTbq1v68kkgpL7aFKNRJ6utrgdJOhnPZt6QBO+U3NEV188a7uVkVKeBiJo+o5yLWN/VWpZW4GYfHUDp+lEWseQQPs8EykGiaHhLJniIbUduemlOujGoZjUZnp4O8tgOvwtYPKEP4U8/D7n/ZXNxA5xqJ/9Rw6vl9QBCNXK2t8zm67TYOjIPmdfIk9FPSZLvjA8eFKbFt1WErgIsrsk/GZoFTpl6XuGG5ZLje9JDJ1ZBFgZp/75SiJPTPKfxPzAgMBAAGjUzBRMB0GA1UdDgQWBBScmUWWK55kPGTymq5Iv8/tLAWEHzAfBgNVHSMEGDAWgBScmUWWK55kPGTymq5Iv8/tLAWEHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBmL/am2G5YD95/6mvQX3it9q41b18Af5GWPHwWdHVq9jBrbR8P0GZjLrdy8gS9xqNOjyGsVX73Lcc+MM4NSUCLXU1p5r+k5Fe/7OofJygQYm3KnAlGreWxpawOg+T4HFsDxpPxGjtCFOh5lw4HKD/GdyGDJasj9G45bqw/McSRlgasefVq4uAJsbrLGaHrMvI6qp1i+YI98+Oar3uXP6mFrbgYir02CRs4vBoYxrmRxcZKbfGeTeYTGF4RIaz8wjYrJFXLNX6BD7QmDLKR1gn/4K52/65hGnEPm0m+qtkLDSX0E71UJpnfpKxtOCSyafGMlAess4+h87aHbWMZyLkK</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<md:SPSSODescriptor AuthnRequestsSigned="true" WantAssertionsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIDsDCCApigAwIBAgIJALtqtaHJZcwPMA0GCSqGSIb3DQEBCwUAMG0xCzAJBgNVBAYTAklUMRAwDgYDVQQIDAdQaXN0b2lhMREwDwYDVQQHDAhCdWdnaWFubzEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMRYwFAYDVQQDDA1jaWNjaW8ubHZoLm1lMB4XDTE4MDYwNjA4MTUzMloXDTE5MDYwNjA4MTUzMlowbTELMAkGA1UEBhMCSVQxEDAOBgNVBAgMB1Bpc3RvaWExETAPBgNVBAcMCEJ1Z2dpYW5vMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxFjAUBgNVBAMMDWNpY2Npby5sdmgubWUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6kHy8Ai3u4Wd8byI2HOHtvIry66Bjp8YnuunGzb3yao2Y0ctl3TAV0AkE7gHdMqBE6JnItDfoAnXDr0rSDMTBaXQLTbq1v68kkgpL7aFKNRJ6utrgdJOhnPZt6QBO+U3NEV188a7uVkVKeBiJo+o5yLWN/VWpZW4GYfHUDp+lEWseQQPs8EykGiaHhLJniIbUduemlOujGoZjUZnp4O8tgOvwtYPKEP4U8/D7n/ZXNxA5xqJ/9Rw6vl9QBCNXK2t8zm67TYOjIPmdfIk9FPSZLvjA8eFKbFt1WErgIsrsk/GZoFTpl6XuGG5ZLje9JDJ1ZBFgZp/75SiJPTPKfxPzAgMBAAGjUzBRMB0GA1UdDgQWBBScmUWWK55kPGTymq5Iv8/tLAWEHzAfBgNVHSMEGDAWgBScmUWWK55kPGTymq5Iv8/tLAWEHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBmL/am2G5YD95/6mvQX3it9q41b18Af5GWPHwWdHVq9jBrbR8P0GZjLrdy8gS9xqNOjyGsVX73Lcc+MM4NSUCLXU1p5r+k5Fe/7OofJygQYm3KnAlGreWxpawOg+T4HFsDxpPxGjtCFOh5lw4HKD/GdyGDJasj9G45bqw/McSRlgasefVq4uAJsbrLGaHrMvI6qp1i+YI98+Oar3uXP6mFrbgYir02CRs4vBoYxrmRxcZKbfGeTeYTGF4RIaz8wjYrJFXLNX6BD7QmDLKR1gn/4K52/65hGnEPm0m+qtkLDSX0E71UJpnfpKxtOCSyafGMlAess4+h87aHbWMZyLkK</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://ciccio.lvh.me:3000/spid/slo" ResponseLocation="http://ciccio.lvh.me:3000/spid/slo" />
<md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="http://ciccio.lvh.me:3000/spid/sso" index="0" isDefault="true" />
</md:SPSSODescriptor>
</md:EntityDescriptor>
```
|
1.0
|
Problem with testenv-docker certificates - Hello, I am experimenting with spid-rails and I am having a problem with the local IdP (https://github.com/italia/spid-testenv-docker), but since I do not know the origin of the problem I am writing here.
I downloaded the demo app https://github.com/rubynetti/rubynetti-rails and the docker testenv, inside which I created a service provider named **ciccio.lvh.me:3000/spid/metadata** and a new self-signed SSL certificate.
The redirect to the test provider's login page works, but after entering the credentials I get an error, and in the docker logs I find this message
```
Signature validation failed for the SAML Message : Failed to construct the X509CredentialImpl for the alias iccio.lvh.me:3000/spid/metadata.crt
```
What I notice is that the domain name is wrong: its first character has been removed (iccio instead of ciccio).
I tried with another domain name and the same thing happens: rubynetti.lvh.me appeared in the logs as ubynetti.lvh.me.
Is there something wrong in the configuration?
I attach the service provider metadata below:
```
<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" ID="_6d0835f5-a686-4aed-ae06-64743765d31f" entityID="http://ciccio.lvh.me:3000/spid/metadata">
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
<ds:Reference URI="#_6d0835f5-a686-4aed-ae06-64743765d31f">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="#default samlp saml ds xs xsi md" />
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
<ds:DigestValue>Uf1viqWrfv8hNddtaqXpP1UC/Hf2DtF5Nyem9MW4Z0A=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>AKsPyWVTUvkO1QsOaylgyjFQuH4l53YLlXiyrBOIP1ejwOu2e72H3rleRthcfdLDOkfEGVf1eN3WPtSZ4x1dJv9MX/C5fePzvSDahHo3mWRk17ifrdxHCo3g7N05J49iNGg7PckzDuADBEzeM4tbVC+RLLYqH4N8z6peExvzDdYR6fMHeeZ8wc6a08jMrNw6KRBjh7mxYOgB7yAshESxCAOZtRwRcl41eN8EgNhuMEg8sYPPq11jFUcf2rGUhpnNFU9Z+59PbbnH2ntle1L0SGh2rI+B5/l8NhJxMj1i1yJKE79cXo+D7sN440tdANgQ5rmGp3BKyiJrSVwKiL6CCg==</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDsDCCApigAwIBAgIJALtqtaHJZcwPMA0GCSqGSIb3DQEBCwUAMG0xCzAJBgNVBAYTAklUMRAwDgYDVQQIDAdQaXN0b2lhMREwDwYDVQQHDAhCdWdnaWFubzEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMRYwFAYDVQQDDA1jaWNjaW8ubHZoLm1lMB4XDTE4MDYwNjA4MTUzMloXDTE5MDYwNjA4MTUzMlowbTELMAkGA1UEBhMCSVQxEDAOBgNVBAgMB1Bpc3RvaWExETAPBgNVBAcMCEJ1Z2dpYW5vMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxFjAUBgNVBAMMDWNpY2Npby5sdmgubWUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6kHy8Ai3u4Wd8byI2HOHtvIry66Bjp8YnuunGzb3yao2Y0ctl3TAV0AkE7gHdMqBE6JnItDfoAnXDr0rSDMTBaXQLTbq1v68kkgpL7aFKNRJ6utrgdJOhnPZt6QBO+U3NEV188a7uVkVKeBiJo+o5yLWN/VWpZW4GYfHUDp+lEWseQQPs8EykGiaHhLJniIbUduemlOujGoZjUZnp4O8tgOvwtYPKEP4U8/D7n/ZXNxA5xqJ/9Rw6vl9QBCNXK2t8zm67TYOjIPmdfIk9FPSZLvjA8eFKbFt1WErgIsrsk/GZoFTpl6XuGG5ZLje9JDJ1ZBFgZp/75SiJPTPKfxPzAgMBAAGjUzBRMB0GA1UdDgQWBBScmUWWK55kPGTymq5Iv8/tLAWEHzAfBgNVHSMEGDAWgBScmUWWK55kPGTymq5Iv8/tLAWEHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBmL/am2G5YD95/6mvQX3it9q41b18Af5GWPHwWdHVq9jBrbR8P0GZjLrdy8gS9xqNOjyGsVX73Lcc+MM4NSUCLXU1p5r+k5Fe/7OofJygQYm3KnAlGreWxpawOg+T4HFsDxpPxGjtCFOh5lw4HKD/GdyGDJasj9G45bqw/McSRlgasefVq4uAJsbrLGaHrMvI6qp1i+YI98+Oar3uXP6mFrbgYir02CRs4vBoYxrmRxcZKbfGeTeYTGF4RIaz8wjYrJFXLNX6BD7QmDLKR1gn/4K52/65hGnEPm0m+qtkLDSX0E71UJpnfpKxtOCSyafGMlAess4+h87aHbWMZyLkK</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<md:SPSSODescriptor AuthnRequestsSigned="true" WantAssertionsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIDsDCCApigAwIBAgIJALtqtaHJZcwPMA0GCSqGSIb3DQEBCwUAMG0xCzAJBgNVBAYTAklUMRAwDgYDVQQIDAdQaXN0b2lhMREwDwYDVQQHDAhCdWdnaWFubzEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMRYwFAYDVQQDDA1jaWNjaW8ubHZoLm1lMB4XDTE4MDYwNjA4MTUzMloXDTE5MDYwNjA4MTUzMlowbTELMAkGA1UEBhMCSVQxEDAOBgNVBAgMB1Bpc3RvaWExETAPBgNVBAcMCEJ1Z2dpYW5vMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxFjAUBgNVBAMMDWNpY2Npby5sdmgubWUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6kHy8Ai3u4Wd8byI2HOHtvIry66Bjp8YnuunGzb3yao2Y0ctl3TAV0AkE7gHdMqBE6JnItDfoAnXDr0rSDMTBaXQLTbq1v68kkgpL7aFKNRJ6utrgdJOhnPZt6QBO+U3NEV188a7uVkVKeBiJo+o5yLWN/VWpZW4GYfHUDp+lEWseQQPs8EykGiaHhLJniIbUduemlOujGoZjUZnp4O8tgOvwtYPKEP4U8/D7n/ZXNxA5xqJ/9Rw6vl9QBCNXK2t8zm67TYOjIPmdfIk9FPSZLvjA8eFKbFt1WErgIsrsk/GZoFTpl6XuGG5ZLje9JDJ1ZBFgZp/75SiJPTPKfxPzAgMBAAGjUzBRMB0GA1UdDgQWBBScmUWWK55kPGTymq5Iv8/tLAWEHzAfBgNVHSMEGDAWgBScmUWWK55kPGTymq5Iv8/tLAWEHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBmL/am2G5YD95/6mvQX3it9q41b18Af5GWPHwWdHVq9jBrbR8P0GZjLrdy8gS9xqNOjyGsVX73Lcc+MM4NSUCLXU1p5r+k5Fe/7OofJygQYm3KnAlGreWxpawOg+T4HFsDxpPxGjtCFOh5lw4HKD/GdyGDJasj9G45bqw/McSRlgasefVq4uAJsbrLGaHrMvI6qp1i+YI98+Oar3uXP6mFrbgYir02CRs4vBoYxrmRxcZKbfGeTeYTGF4RIaz8wjYrJFXLNX6BD7QmDLKR1gn/4K52/65hGnEPm0m+qtkLDSX0E71UJpnfpKxtOCSyafGMlAess4+h87aHbWMZyLkK</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://ciccio.lvh.me:3000/spid/slo" ResponseLocation="http://ciccio.lvh.me:3000/spid/slo" />
<md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="http://ciccio.lvh.me:3000/spid/sso" index="0" isDefault="true" />
</md:SPSSODescriptor>
</md:EntityDescriptor>
```
|
test
|
problema certificati testenv docker salve sto facendo esperimenti su spid rails ed sto avendo un problema con l idp locale ma non sapendo quale sia l origine del problema scrivo qui ho scaricato l app di demo e il docker testenv dentro al quale ho creato un service provider con il nome ciccio lvh me spid metadata ed un nuovo certificato ssl self signed il redirect alla pagina di login del test provider funziona ma dopo aver inserito le credenziali ottengo un errore e nei log del docker trovo questo messaggio signature validation failed for the saml message failed to construct the for the alias iccio lvh me spid metadata crt la cosa che noto è che il nome dominio è sbagliato in quanto viene eliminato il primo carattere iccio anziché ciccio ho provato con un altro nome dominio e succede la stessa cosa rubynetti lvh me e nei log trovavo ubynetti lvh me c è qualcosa di sbagliato nella configurazione alleto il metadata del service provider md entitydescriptor xmlns md urn oasis names tc saml metadata id entityid ds signature xmlns ds ds transform algorithm ds keyinfo xmlns ds
| 1
|
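The symptom in the SPID record above (the certificate alias losing its first character: `iccio` instead of `ciccio`, `ubynetti` instead of `rubynetti`) is consistent with a classic off-by-one substring bug when stripping a URL prefix. The Python sketch below is purely illustrative of that failure mode; it is not the testenv's actual code.

```python
PREFIX = "http://"

def broken_alias(entity_id: str) -> str:
    # Off-by-one: len(PREFIX) + 1 also skips the first host character.
    return entity_id[len(PREFIX) + 1:]

def fixed_alias(entity_id: str) -> str:
    # Slicing at exactly len(PREFIX) keeps the full host name.
    return entity_id[len(PREFIX):]

print(broken_alias("http://ciccio.lvh.me:3000/spid/metadata"))  # iccio.lvh.me:3000/spid/metadata
print(fixed_alias("http://ciccio.lvh.me:3000/spid/metadata"))   # ciccio.lvh.me:3000/spid/metadata
```

That the error reproduces with any domain, always dropping exactly one leading character, is the telltale sign of a fixed-offset slicing mistake rather than a certificate problem.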
300,024
| 25,942,986,521
|
IssuesEvent
|
2022-12-16 20:31:01
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Generalization test for the Terceiro Setor - Repasses tag - Ouro Branco
|
generalization test development template - Betha (26) tag - Terceiro Setor subtag - Repasses
|
DoD: Perform the generalization test of the Terceiro Setor - Repasses tag validator for the Municipality of Ouro Branco.
|
1.0
|
Generalization test for the Terceiro Setor - Repasses tag - Ouro Branco - DoD: Perform the generalization test of the Terceiro Setor - Repasses tag validator for the Municipality of Ouro Branco.
|
test
|
teste de generalizacao para a tag terceiro setor repasses ouro branco dod realizar o teste de generalização do validador da tag terceiro setor repasses para o município de ouro branco
| 1
|
29,105
| 4,469,566,312
|
IssuesEvent
|
2016-08-25 13:30:39
|
IMA-WorldHealth/bhima-2.X
|
https://api.github.com/repos/IMA-WorldHealth/bhima-2.X
|
closed
|
Transfer in Cash Module needs to be tested
|
needs tests
|
This module needs tests for its balance value and for the registration of transfers in the database
|
1.0
|
Transfer in Cash Module needs to be tested - This module needs tests for its balance value and for the registration of transfers in the database
|
test
|
transfer in cash module needs to be tested this module needs test for it s balance value and for the registration of transfer in the database
| 1
|
316,665
| 27,174,309,949
|
IssuesEvent
|
2023-02-17 22:55:52
|
Uuvana-Studios/longvinter-windows-client
|
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
|
closed
|
The game cannot be played due to a bug.
|
Bug Not Tested
|
longvinter id : yanggunjo3 [kr] uuvana pve2
I can't even die because of the kill cool time in the bug, and I can't play the game because I'm constantly kicked out.

|
1.0
|
The game cannot be played due to a bug. - longvinter id : yanggunjo3 [kr] uuvana pve2
I can't even die because of the kill cool time in the bug, and I can't play the game because I'm constantly kicked out.

|
test
|
the game cannot be played due to a bug longvinter id uuvana i can t even die because of the kill cool time in the bug and i can t play the game because i m constantly kicked out
| 1
|
433
| 2,699,608,355
|
IssuesEvent
|
2015-04-03 18:25:46
|
rust-lang/rust
|
https://api.github.com/repos/rust-lang/rust
|
opened
|
Automated ICE issue triaging
|
A-infrastructure I-wishlist
|
Hopefully this nice-to-have issue belongs in this tracker.
It would be nice to have an automated triage bot that looks through issues tagged with `I-ICE` and runs the test case with the latest nightly. In case it compiles fine or errors out gracefully, it should mark it as `E-needstest` or perhaps `E-revisit` (as it may very well be still ICEing, just that some changes are necessary). It shouldn't be too hard to identify what the particular code sample is as in most cases it's the only standalone code snippet in the issue description, so no extra process overhead would be necessary for this to work well, I think.
This could be extended further to work with other types of issues but this is perhaps the most immediately obvious application that I can think of.
I'd love to work on this myself but sadly time is scarce so I'm putting up the idea here, at least. :) I know in the perfect world rustc wouldn't ICE so much that we'd need extra process but since that's a reality that is unlikely to change, this may be worth considering.
|
1.0
|
Automated ICE issue triaging - Hopefully this nice-to-have issue belongs in this tracker.
It would be nice to have an automated triage bot that looks through issues tagged with `I-ICE` and runs the test case with the latest nightly. In case it compiles fine or errors out gracefully, it should mark it as `E-needstest` or perhaps `E-revisit` (as it may very well be still ICEing, just that some changes are necessary). It shouldn't be too hard to identify what the particular code sample is as in most cases it's the only standalone code snippet in the issue description, so no extra process overhead would be necessary for this to work well, I think.
This could be extended further to work with other types of issues but this is perhaps the most immediately obvious application that I can think of.
I'd love to work on this myself but sadly time is scarce so I'm putting up the idea here, at least. :) I know in the perfect world rustc wouldn't ICE so much that we'd need extra process but since that's a reality that is unlikely to change, this may be worth considering.
|
non_test
|
automated ice issue traging hopefully this nice to have issue belongs in this tracker it would be nice to have an automated triage bot that looks through issues tagged with i ice and runs the test case with the latest nightly in case it compiles fine or errors out gracefully it should mark it as e needstest or perhaps e revisit as it may very well be still iceing just that some changes are necessary it shouldn t be too hard to identify what the particular code sample is as in most cases it s the only standalone code snippet in the issue description so no extra process overhead would be necessary for this to work well i think this could be extended further to work with other types of issues but this is perhaps the most immediately obvious application that i can think of i d love to work on this myself but sadly time is scarce so i m putting up the idea here at least i know in the perfect world rustc wouldn t ice so much that we d need extra process but since that s a reality that is unlikely to change this may be worth considering
| 0
|
159,663
| 13,769,272,276
|
IssuesEvent
|
2020-10-07 18:21:25
|
photosynthesis-team/piq
|
https://api.github.com/repos/photosynthesis-team/piq
|
closed
|
Update README with descriptions of competitive advantages
|
documentation
|
**Is your feature request related to a problem? Please describe.**
Potential users of the library need to know why anyone would bother to use it; hence, a precise description of such advantages is required.
**Describe the solution you'd like**
@zakajd has already given a great number of examples in comments to #148. Take them from here with minor corrections, maybe add more.
|
1.0
|
Update README with descriptions of competitive advantages - **Is your feature request related to a problem? Please describe.**
Potential users of the library need to know why anyone would bother to use it; hence, a precise description of such advantages is required.
**Describe the solution you'd like**
@zakajd has already given a great number of examples in comments to #148. Take them from here with minor corrections, maybe add more.
|
non_test
|
update readme with descriptions of competative advantages is your feature request related to a problem please describe potential user of the library need to know why would anyone bother to use this hence precise description of such advantages is required describe the solution you d like zakajd has already given a great number of examples in comments to take them from here with minor corrections maybe add more
| 0
|
185,658
| 14,366,207,213
|
IssuesEvent
|
2020-12-01 03:45:54
|
KhronosGroup/glTF-Blender-IO
|
https://api.github.com/repos/KhronosGroup/glTF-Blender-IO
|
closed
|
Email from CircleCI, any action needed?
|
tests
|
Hi there,
On November 1st, [Docker Hub](https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/) will begin [limiting](https://docs.docker.com/docker-hub/download-rate-limit) anonymous image pulls. We want to make sure you know how you might be impacted and what you can do to avoid interruptions to your workflow.
Adding Docker authentication to your pipeline config is the easiest way to avoid any service disruptions. If you use the Docker executor or pull Docker images when using the machine executor on CircleCI, we encourage you to authenticate. Because the anonymous API rate limits are based on IP addresses, they will impact CircleCI cloud customers. Authenticated users get higher per-user rate limits, regardless of IP.
We are currently working on a partnership with Docker to minimize the impact of this change for our users and will share more details as we get them.
For more information or to leave a question for us, please head over to Discuss or contact support.
Thanks and happy building,
- The CircleCI team
|
1.0
|
Email from CircleCI, any action needed? - Hi there,
On November 1st, [Docker Hub](https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/) will begin [limiting](https://docs.docker.com/docker-hub/download-rate-limit) anonymous image pulls. We want to make sure you know how you might be impacted and what you can do to avoid interruptions to your workflow.
Adding Docker authentication to your pipeline config is the easiest way to avoid any service disruptions. If you use the Docker executor or pull Docker images when using the machine executor on CircleCI, we encourage you to authenticate. Because the anonymous API rate limits are based on IP addresses, they will impact CircleCI cloud customers. Authenticated users get higher per-user rate limits, regardless of IP.
We are currently working on a partnership with Docker to minimize the impact of this change for our users and will share more details as we get them.
For more information or to leave a question for us, please head over to Discuss or contact support.
Thanks and happy building,
- The CircleCI team
|
test
|
email from circleci any action needed hi there on november will begin anonymous image pulls we want to make sure you know how you might be impacted and what you can do to avoid interruptions to your workflow adding docker authentication to your pipeline config is the easiest way to avoid any service disruptions if you use the docker executor or pull docker images when using the machine executor on circleci we encourage you to authenticate because the anonymous api rate limits are based on ip addresses they will impact circleci cloud customers authenticated users get higher per user rate limits regardless of ip we are currently working on a partnership with docker to minimize the impact of this change for our users and will share more details as we get them for more information or to leave a question for us please head over to discuss or contact support thanks and happy building the circleci team
| 1
|
16,446
| 3,521,883,734
|
IssuesEvent
|
2016-01-13 05:45:22
|
phonegap/phonegap-plugin-push
|
https://api.github.com/repos/phonegap/phonegap-plugin-push
|
closed
|
notification event not fired on cold start on Android 5
|
android retest
|
I have two Android test-devices, one is on Android 4.4.4 and the other one is running 5.1.1.
If the app is running in foreground then the notification even is fired ok.
~~If I send a notification to those devices the message is received and processed while the app is running in background. No problems there.~~
(CORRECTION: further testing shows that the notification event is also not fired when the app is backgrounded)
If the app is not running at all the notification is received by the device and shown in the statusbar. Clicking it opens the app ok. But only on the Android 4 device the notification event is subsequently delivered to the app. On the Android 5 device the event is not fired.
(I'm building with Cordova 5.4.1 and am using the 1.5.2 version of plugin)
|
1.0
|
notification event not fired on cold start on Android 5 - I have two Android test-devices, one is on Android 4.4.4 and the other one is running 5.1.1.
If the app is running in foreground then the notification even is fired ok.
~~If I send a notification to those devices the message is received and processed while the app is running in background. No problems there.~~
(CORRECTION: further testing shows that the notification event is also not fired when the app is backgrounded)
If the app is not running at all the notification is received by the device and shown in the statusbar. Clicking it opens the app ok. But only on the Android 4 device the notification event is subsequently delivered to the app. On the Android 5 device the event is not fired.
(I'm building with Cordova 5.4.1 and am using the 1.5.2 version of plugin)
|
test
|
notification event not fired on cold start on android i have two android test devices one is on android and the other one is running if the app is running in foreground then the notification even is fired ok if i send a notification to those devices the message is received and processed while the app is running in background no problems there correction further testing shows that the notification event is also not fired when the app is backgrounded if the app is not running at all the notification is received by the device and shown in the statusbar clicking it opens the app ok but only on the android device the notification event is subsequently delivered to the app on the android device the event is not fired i m building with cordova and am using the version of plugin
| 1
|
436,480
| 12,550,711,877
|
IssuesEvent
|
2020-06-06 12:10:41
|
googleapis/elixir-google-api
|
https://api.github.com/repos/googleapis/elixir-google-api
|
opened
|
Synthesis failed for Analytics
|
autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate Analytics. :broken_heart:
Here's the output from running `synth.py`:
```
el/entity_user_links.ex.
Writing Experiment to clients/analytics/lib/google_api/analytics/v3/model/experiment.ex.
Writing ExperimentParentLink to clients/analytics/lib/google_api/analytics/v3/model/experiment_parent_link.ex.
Writing ExperimentVariations to clients/analytics/lib/google_api/analytics/v3/model/experiment_variations.ex.
Writing Experiments to clients/analytics/lib/google_api/analytics/v3/model/experiments.ex.
Writing Filter to clients/analytics/lib/google_api/analytics/v3/model/filter.ex.
Writing FilterAdvancedDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_advanced_details.ex.
Writing FilterExpression to clients/analytics/lib/google_api/analytics/v3/model/filter_expression.ex.
Writing FilterLowercaseDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_lowercase_details.ex.
Writing FilterParentLink to clients/analytics/lib/google_api/analytics/v3/model/filter_parent_link.ex.
Writing FilterRef to clients/analytics/lib/google_api/analytics/v3/model/filter_ref.ex.
Writing FilterSearchAndReplaceDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_search_and_replace_details.ex.
Writing FilterUppercaseDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_uppercase_details.ex.
Writing Filters to clients/analytics/lib/google_api/analytics/v3/model/filters.ex.
Writing GaData to clients/analytics/lib/google_api/analytics/v3/model/ga_data.ex.
Writing GaDataColumnHeaders to clients/analytics/lib/google_api/analytics/v3/model/ga_data_column_headers.ex.
Writing GaDataDataTable to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table.ex.
Writing GaDataDataTableCols to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table_cols.ex.
Writing GaDataDataTableRows to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table_rows.ex.
Writing GaDataDataTableRowsC to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table_rows_c.ex.
Writing GaDataProfileInfo to clients/analytics/lib/google_api/analytics/v3/model/ga_data_profile_info.ex.
Writing GaDataQuery to clients/analytics/lib/google_api/analytics/v3/model/ga_data_query.ex.
Writing Goal to clients/analytics/lib/google_api/analytics/v3/model/goal.ex.
Writing GoalEventDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_event_details.ex.
Writing GoalEventDetailsEventConditions to clients/analytics/lib/google_api/analytics/v3/model/goal_event_details_event_conditions.ex.
Writing GoalParentLink to clients/analytics/lib/google_api/analytics/v3/model/goal_parent_link.ex.
Writing GoalUrlDestinationDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_url_destination_details.ex.
Writing GoalUrlDestinationDetailsSteps to clients/analytics/lib/google_api/analytics/v3/model/goal_url_destination_details_steps.ex.
Writing GoalVisitNumPagesDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_visit_num_pages_details.ex.
Writing GoalVisitTimeOnSiteDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_visit_time_on_site_details.ex.
Writing Goals to clients/analytics/lib/google_api/analytics/v3/model/goals.ex.
Writing HashClientIdRequest to clients/analytics/lib/google_api/analytics/v3/model/hash_client_id_request.ex.
Writing HashClientIdResponse to clients/analytics/lib/google_api/analytics/v3/model/hash_client_id_response.ex.
Writing IncludeConditions to clients/analytics/lib/google_api/analytics/v3/model/include_conditions.ex.
Writing LinkedForeignAccount to clients/analytics/lib/google_api/analytics/v3/model/linked_foreign_account.ex.
Writing McfData to clients/analytics/lib/google_api/analytics/v3/model/mcf_data.ex.
Writing McfDataColumnHeaders to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_column_headers.ex.
Writing McfDataProfileInfo to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_profile_info.ex.
Writing McfDataQuery to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_query.ex.
Writing McfDataRows to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_rows.ex.
Writing McfDataRowsConversionPathValue to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_rows_conversion_path_value.ex.
Writing Profile to clients/analytics/lib/google_api/analytics/v3/model/profile.ex.
Writing ProfileChildLink to clients/analytics/lib/google_api/analytics/v3/model/profile_child_link.ex.
Writing ProfileFilterLink to clients/analytics/lib/google_api/analytics/v3/model/profile_filter_link.ex.
Writing ProfileFilterLinks to clients/analytics/lib/google_api/analytics/v3/model/profile_filter_links.ex.
Writing ProfileParentLink to clients/analytics/lib/google_api/analytics/v3/model/profile_parent_link.ex.
Writing ProfilePermissions to clients/analytics/lib/google_api/analytics/v3/model/profile_permissions.ex.
Writing ProfileRef to clients/analytics/lib/google_api/analytics/v3/model/profile_ref.ex.
Writing ProfileSummary to clients/analytics/lib/google_api/analytics/v3/model/profile_summary.ex.
Writing Profiles to clients/analytics/lib/google_api/analytics/v3/model/profiles.ex.
Writing RealtimeData to clients/analytics/lib/google_api/analytics/v3/model/realtime_data.ex.
Writing RealtimeDataColumnHeaders to clients/analytics/lib/google_api/analytics/v3/model/realtime_data_column_headers.ex.
Writing RealtimeDataProfileInfo to clients/analytics/lib/google_api/analytics/v3/model/realtime_data_profile_info.ex.
Writing RealtimeDataQuery to clients/analytics/lib/google_api/analytics/v3/model/realtime_data_query.ex.
Writing RemarketingAudience to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience.ex.
Writing RemarketingAudienceAudienceDefinition to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience_audience_definition.ex.
Writing RemarketingAudienceStateBasedAudienceDefinition to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience_state_based_audience_definition.ex.
Writing RemarketingAudienceStateBasedAudienceDefinitionExcludeConditions to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience_state_based_audience_definition_exclude_conditions.ex.
Writing RemarketingAudiences to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audiences.ex.
Writing Segment to clients/analytics/lib/google_api/analytics/v3/model/segment.ex.
Writing Segments to clients/analytics/lib/google_api/analytics/v3/model/segments.ex.
Writing UnsampledReport to clients/analytics/lib/google_api/analytics/v3/model/unsampled_report.ex.
Writing UnsampledReportCloudStorageDownloadDetails to clients/analytics/lib/google_api/analytics/v3/model/unsampled_report_cloud_storage_download_details.ex.
Writing UnsampledReportDriveDownloadDetails to clients/analytics/lib/google_api/analytics/v3/model/unsampled_report_drive_download_details.ex.
Writing UnsampledReports to clients/analytics/lib/google_api/analytics/v3/model/unsampled_reports.ex.
Writing Upload to clients/analytics/lib/google_api/analytics/v3/model/upload.ex.
Writing Uploads to clients/analytics/lib/google_api/analytics/v3/model/uploads.ex.
Writing UserDeletionRequest to clients/analytics/lib/google_api/analytics/v3/model/user_deletion_request.ex.
Writing UserDeletionRequestId to clients/analytics/lib/google_api/analytics/v3/model/user_deletion_request_id.ex.
Writing UserRef to clients/analytics/lib/google_api/analytics/v3/model/user_ref.ex.
Writing WebPropertyRef to clients/analytics/lib/google_api/analytics/v3/model/web_property_ref.ex.
Writing WebPropertySummary to clients/analytics/lib/google_api/analytics/v3/model/web_property_summary.ex.
Writing Webproperties to clients/analytics/lib/google_api/analytics/v3/model/webproperties.ex.
Writing Webproperty to clients/analytics/lib/google_api/analytics/v3/model/webproperty.ex.
Writing WebpropertyChildLink to clients/analytics/lib/google_api/analytics/v3/model/webproperty_child_link.ex.
Writing WebpropertyParentLink to clients/analytics/lib/google_api/analytics/v3/model/webproperty_parent_link.ex.
Writing WebpropertyPermissions to clients/analytics/lib/google_api/analytics/v3/model/webproperty_permissions.ex.
Writing Data to clients/analytics/lib/google_api/analytics/v3/api/data.ex.
Writing Management to clients/analytics/lib/google_api/analytics/v3/api/management.ex.
Writing Metadata to clients/analytics/lib/google_api/analytics/v3/api/metadata.ex.
Writing Provisioning to clients/analytics/lib/google_api/analytics/v3/api/provisioning.ex.
Writing UserDeletion to clients/analytics/lib/google_api/analytics/v3/api/user_deletion.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
12:10:37.508 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
2020-06-06 05:10:40,564 synthtool [DEBUG] > Wrote metadata to clients/analytics/synth.metadata.
DEBUG:synthtool:Wrote metadata to clients/analytics/synth.metadata.
2020-06-06 05:10:40,594 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 555, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 121, in synthesize
with open(log_file_path, "rt") as fp:
IsADirectoryError: [Errno 21] Is a directory: '/tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api'
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/567e2ab8-5e4f-4fb0-8bae-9d0cc90aa1af/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
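The `IsADirectoryError` at the bottom of the traceback comes from calling `open(log_file_path, "rt")` on a path that is actually a directory (`logs/googleapis/elixir-google-api`). A minimal sketch of the failure mode and the obvious guard follows; the helper name is hypothetical and is not autosynth's actual code.

```python
# Illustration of the traceback's failure mode: open() raises
# IsADirectoryError when given a directory path, so check first.
# read_synth_log is a hypothetical helper, not part of synthtool.
import os
import tempfile


def read_synth_log(log_file_path):
    if os.path.isdir(log_file_path):
        # No per-repo log file was written; only the directory exists.
        return None
    with open(log_file_path, "rt") as fp:
        return fp.read()


with tempfile.TemporaryDirectory() as d:
    assert read_synth_log(d) is None            # directory -> handled, no crash
    p = os.path.join(d, "synth.log")
    with open(p, "w") as fp:
        fp.write("ok")
    assert read_synth_log(p) == "ok"            # regular file -> contents
```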
|
1.0
|
Synthesis failed for Analytics - Hello! Autosynth couldn't regenerate Analytics. :broken_heart:
Here's the output from running `synth.py`:
```
el/entity_user_links.ex.
Writing Experiment to clients/analytics/lib/google_api/analytics/v3/model/experiment.ex.
Writing ExperimentParentLink to clients/analytics/lib/google_api/analytics/v3/model/experiment_parent_link.ex.
Writing ExperimentVariations to clients/analytics/lib/google_api/analytics/v3/model/experiment_variations.ex.
Writing Experiments to clients/analytics/lib/google_api/analytics/v3/model/experiments.ex.
Writing Filter to clients/analytics/lib/google_api/analytics/v3/model/filter.ex.
Writing FilterAdvancedDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_advanced_details.ex.
Writing FilterExpression to clients/analytics/lib/google_api/analytics/v3/model/filter_expression.ex.
Writing FilterLowercaseDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_lowercase_details.ex.
Writing FilterParentLink to clients/analytics/lib/google_api/analytics/v3/model/filter_parent_link.ex.
Writing FilterRef to clients/analytics/lib/google_api/analytics/v3/model/filter_ref.ex.
Writing FilterSearchAndReplaceDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_search_and_replace_details.ex.
Writing FilterUppercaseDetails to clients/analytics/lib/google_api/analytics/v3/model/filter_uppercase_details.ex.
Writing Filters to clients/analytics/lib/google_api/analytics/v3/model/filters.ex.
Writing GaData to clients/analytics/lib/google_api/analytics/v3/model/ga_data.ex.
Writing GaDataColumnHeaders to clients/analytics/lib/google_api/analytics/v3/model/ga_data_column_headers.ex.
Writing GaDataDataTable to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table.ex.
Writing GaDataDataTableCols to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table_cols.ex.
Writing GaDataDataTableRows to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table_rows.ex.
Writing GaDataDataTableRowsC to clients/analytics/lib/google_api/analytics/v3/model/ga_data_data_table_rows_c.ex.
Writing GaDataProfileInfo to clients/analytics/lib/google_api/analytics/v3/model/ga_data_profile_info.ex.
Writing GaDataQuery to clients/analytics/lib/google_api/analytics/v3/model/ga_data_query.ex.
Writing Goal to clients/analytics/lib/google_api/analytics/v3/model/goal.ex.
Writing GoalEventDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_event_details.ex.
Writing GoalEventDetailsEventConditions to clients/analytics/lib/google_api/analytics/v3/model/goal_event_details_event_conditions.ex.
Writing GoalParentLink to clients/analytics/lib/google_api/analytics/v3/model/goal_parent_link.ex.
Writing GoalUrlDestinationDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_url_destination_details.ex.
Writing GoalUrlDestinationDetailsSteps to clients/analytics/lib/google_api/analytics/v3/model/goal_url_destination_details_steps.ex.
Writing GoalVisitNumPagesDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_visit_num_pages_details.ex.
Writing GoalVisitTimeOnSiteDetails to clients/analytics/lib/google_api/analytics/v3/model/goal_visit_time_on_site_details.ex.
Writing Goals to clients/analytics/lib/google_api/analytics/v3/model/goals.ex.
Writing HashClientIdRequest to clients/analytics/lib/google_api/analytics/v3/model/hash_client_id_request.ex.
Writing HashClientIdResponse to clients/analytics/lib/google_api/analytics/v3/model/hash_client_id_response.ex.
Writing IncludeConditions to clients/analytics/lib/google_api/analytics/v3/model/include_conditions.ex.
Writing LinkedForeignAccount to clients/analytics/lib/google_api/analytics/v3/model/linked_foreign_account.ex.
Writing McfData to clients/analytics/lib/google_api/analytics/v3/model/mcf_data.ex.
Writing McfDataColumnHeaders to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_column_headers.ex.
Writing McfDataProfileInfo to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_profile_info.ex.
Writing McfDataQuery to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_query.ex.
Writing McfDataRows to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_rows.ex.
Writing McfDataRowsConversionPathValue to clients/analytics/lib/google_api/analytics/v3/model/mcf_data_rows_conversion_path_value.ex.
Writing Profile to clients/analytics/lib/google_api/analytics/v3/model/profile.ex.
Writing ProfileChildLink to clients/analytics/lib/google_api/analytics/v3/model/profile_child_link.ex.
Writing ProfileFilterLink to clients/analytics/lib/google_api/analytics/v3/model/profile_filter_link.ex.
Writing ProfileFilterLinks to clients/analytics/lib/google_api/analytics/v3/model/profile_filter_links.ex.
Writing ProfileParentLink to clients/analytics/lib/google_api/analytics/v3/model/profile_parent_link.ex.
Writing ProfilePermissions to clients/analytics/lib/google_api/analytics/v3/model/profile_permissions.ex.
Writing ProfileRef to clients/analytics/lib/google_api/analytics/v3/model/profile_ref.ex.
Writing ProfileSummary to clients/analytics/lib/google_api/analytics/v3/model/profile_summary.ex.
Writing Profiles to clients/analytics/lib/google_api/analytics/v3/model/profiles.ex.
Writing RealtimeData to clients/analytics/lib/google_api/analytics/v3/model/realtime_data.ex.
Writing RealtimeDataColumnHeaders to clients/analytics/lib/google_api/analytics/v3/model/realtime_data_column_headers.ex.
Writing RealtimeDataProfileInfo to clients/analytics/lib/google_api/analytics/v3/model/realtime_data_profile_info.ex.
Writing RealtimeDataQuery to clients/analytics/lib/google_api/analytics/v3/model/realtime_data_query.ex.
Writing RemarketingAudience to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience.ex.
Writing RemarketingAudienceAudienceDefinition to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience_audience_definition.ex.
Writing RemarketingAudienceStateBasedAudienceDefinition to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience_state_based_audience_definition.ex.
Writing RemarketingAudienceStateBasedAudienceDefinitionExcludeConditions to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audience_state_based_audience_definition_exclude_conditions.ex.
Writing RemarketingAudiences to clients/analytics/lib/google_api/analytics/v3/model/remarketing_audiences.ex.
Writing Segment to clients/analytics/lib/google_api/analytics/v3/model/segment.ex.
Writing Segments to clients/analytics/lib/google_api/analytics/v3/model/segments.ex.
Writing UnsampledReport to clients/analytics/lib/google_api/analytics/v3/model/unsampled_report.ex.
Writing UnsampledReportCloudStorageDownloadDetails to clients/analytics/lib/google_api/analytics/v3/model/unsampled_report_cloud_storage_download_details.ex.
Writing UnsampledReportDriveDownloadDetails to clients/analytics/lib/google_api/analytics/v3/model/unsampled_report_drive_download_details.ex.
Writing UnsampledReports to clients/analytics/lib/google_api/analytics/v3/model/unsampled_reports.ex.
Writing Upload to clients/analytics/lib/google_api/analytics/v3/model/upload.ex.
Writing Uploads to clients/analytics/lib/google_api/analytics/v3/model/uploads.ex.
Writing UserDeletionRequest to clients/analytics/lib/google_api/analytics/v3/model/user_deletion_request.ex.
Writing UserDeletionRequestId to clients/analytics/lib/google_api/analytics/v3/model/user_deletion_request_id.ex.
Writing UserRef to clients/analytics/lib/google_api/analytics/v3/model/user_ref.ex.
Writing WebPropertyRef to clients/analytics/lib/google_api/analytics/v3/model/web_property_ref.ex.
Writing WebPropertySummary to clients/analytics/lib/google_api/analytics/v3/model/web_property_summary.ex.
Writing Webproperties to clients/analytics/lib/google_api/analytics/v3/model/webproperties.ex.
Writing Webproperty to clients/analytics/lib/google_api/analytics/v3/model/webproperty.ex.
Writing WebpropertyChildLink to clients/analytics/lib/google_api/analytics/v3/model/webproperty_child_link.ex.
Writing WebpropertyParentLink to clients/analytics/lib/google_api/analytics/v3/model/webproperty_parent_link.ex.
Writing WebpropertyPermissions to clients/analytics/lib/google_api/analytics/v3/model/webproperty_permissions.ex.
Writing Data to clients/analytics/lib/google_api/analytics/v3/api/data.ex.
Writing Management to clients/analytics/lib/google_api/analytics/v3/api/management.ex.
Writing Metadata to clients/analytics/lib/google_api/analytics/v3/api/metadata.ex.
Writing Provisioning to clients/analytics/lib/google_api/analytics/v3/api/provisioning.ex.
Writing UserDeletion to clients/analytics/lib/google_api/analytics/v3/api/user_deletion.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
12:10:37.508 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
2020-06-06 05:10:40,564 synthtool [DEBUG] > Wrote metadata to clients/analytics/synth.metadata.
DEBUG:synthtool:Wrote metadata to clients/analytics/synth.metadata.
2020-06-06 05:10:40,594 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 555, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 121, in synthesize
with open(log_file_path, "rt") as fp:
IsADirectoryError: [Errno 21] Is a directory: '/tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api'
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/567e2ab8-5e4f-4fb0-8bae-9d0cc90aa1af/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
|
non_test
|
synthesis failed for analytics hello autosynth couldn t regenerate analytics broken heart here s the output from running synth py el entity user links ex writing experiment to clients analytics lib google api analytics model experiment ex writing experimentparentlink to clients analytics lib google api analytics model experiment parent link ex writing experimentvariations to clients analytics lib google api analytics model experiment variations ex writing experiments to clients analytics lib google api analytics model experiments ex writing filter to clients analytics lib google api analytics model filter ex writing filteradvanceddetails to clients analytics lib google api analytics model filter advanced details ex writing filterexpression to clients analytics lib google api analytics model filter expression ex writing filterlowercasedetails to clients analytics lib google api analytics model filter lowercase details ex writing filterparentlink to clients analytics lib google api analytics model filter parent link ex writing filterref to clients analytics lib google api analytics model filter ref ex writing filtersearchandreplacedetails to clients analytics lib google api analytics model filter search and replace details ex writing filteruppercasedetails to clients analytics lib google api analytics model filter uppercase details ex writing filters to clients analytics lib google api analytics model filters ex writing gadata to clients analytics lib google api analytics model ga data ex writing gadatacolumnheaders to clients analytics lib google api analytics model ga data column headers ex writing gadatadatatable to clients analytics lib google api analytics model ga data data table ex writing gadatadatatablecols to clients analytics lib google api analytics model ga data data table cols ex writing gadatadatatablerows to clients analytics lib google api analytics model ga data data table rows ex writing gadatadatatablerowsc to clients analytics lib google api 
analytics model ga data data table rows c ex writing gadataprofileinfo to clients analytics lib google api analytics model ga data profile info ex writing gadataquery to clients analytics lib google api analytics model ga data query ex writing goal to clients analytics lib google api analytics model goal ex writing goaleventdetails to clients analytics lib google api analytics model goal event details ex writing goaleventdetailseventconditions to clients analytics lib google api analytics model goal event details event conditions ex writing goalparentlink to clients analytics lib google api analytics model goal parent link ex writing goalurldestinationdetails to clients analytics lib google api analytics model goal url destination details ex writing goalurldestinationdetailssteps to clients analytics lib google api analytics model goal url destination details steps ex writing goalvisitnumpagesdetails to clients analytics lib google api analytics model goal visit num pages details ex writing goalvisittimeonsitedetails to clients analytics lib google api analytics model goal visit time on site details ex writing goals to clients analytics lib google api analytics model goals ex writing hashclientidrequest to clients analytics lib google api analytics model hash client id request ex writing hashclientidresponse to clients analytics lib google api analytics model hash client id response ex writing includeconditions to clients analytics lib google api analytics model include conditions ex writing linkedforeignaccount to clients analytics lib google api analytics model linked foreign account ex writing mcfdata to clients analytics lib google api analytics model mcf data ex writing mcfdatacolumnheaders to clients analytics lib google api analytics model mcf data column headers ex writing mcfdataprofileinfo to clients analytics lib google api analytics model mcf data profile info ex writing mcfdataquery to clients analytics lib google api analytics model mcf data query ex 
writing mcfdatarows to clients analytics lib google api analytics model mcf data rows ex writing mcfdatarowsconversionpathvalue to clients analytics lib google api analytics model mcf data rows conversion path value ex writing profile to clients analytics lib google api analytics model profile ex writing profilechildlink to clients analytics lib google api analytics model profile child link ex writing profilefilterlink to clients analytics lib google api analytics model profile filter link ex writing profilefilterlinks to clients analytics lib google api analytics model profile filter links ex writing profileparentlink to clients analytics lib google api analytics model profile parent link ex writing profilepermissions to clients analytics lib google api analytics model profile permissions ex writing profileref to clients analytics lib google api analytics model profile ref ex writing profilesummary to clients analytics lib google api analytics model profile summary ex writing profiles to clients analytics lib google api analytics model profiles ex writing realtimedata to clients analytics lib google api analytics model realtime data ex writing realtimedatacolumnheaders to clients analytics lib google api analytics model realtime data column headers ex writing realtimedataprofileinfo to clients analytics lib google api analytics model realtime data profile info ex writing realtimedataquery to clients analytics lib google api analytics model realtime data query ex writing remarketingaudience to clients analytics lib google api analytics model remarketing audience ex writing remarketingaudienceaudiencedefinition to clients analytics lib google api analytics model remarketing audience audience definition ex writing remarketingaudiencestatebasedaudiencedefinition to clients analytics lib google api analytics model remarketing audience state based audience definition ex writing remarketingaudiencestatebasedaudiencedefinitionexcludeconditions to clients analytics lib 
google api analytics model remarketing audience state based audience definition exclude conditions ex writing remarketingaudiences to clients analytics lib google api analytics model remarketing audiences ex writing segment to clients analytics lib google api analytics model segment ex writing segments to clients analytics lib google api analytics model segments ex writing unsampledreport to clients analytics lib google api analytics model unsampled report ex writing unsampledreportcloudstoragedownloaddetails to clients analytics lib google api analytics model unsampled report cloud storage download details ex writing unsampledreportdrivedownloaddetails to clients analytics lib google api analytics model unsampled report drive download details ex writing unsampledreports to clients analytics lib google api analytics model unsampled reports ex writing upload to clients analytics lib google api analytics model upload ex writing uploads to clients analytics lib google api analytics model uploads ex writing userdeletionrequest to clients analytics lib google api analytics model user deletion request ex writing userdeletionrequestid to clients analytics lib google api analytics model user deletion request id ex writing userref to clients analytics lib google api analytics model user ref ex writing webpropertyref to clients analytics lib google api analytics model web property ref ex writing webpropertysummary to clients analytics lib google api analytics model web property summary ex writing webproperties to clients analytics lib google api analytics model webproperties ex writing webproperty to clients analytics lib google api analytics model webproperty ex writing webpropertychildlink to clients analytics lib google api analytics model webproperty child link ex writing webpropertyparentlink to clients analytics lib google api analytics model webproperty parent link ex writing webpropertypermissions to clients analytics lib google api analytics model webproperty 
permissions ex writing data to clients analytics lib google api analytics api data ex writing management to clients analytics lib google api analytics api management ex writing metadata to clients analytics lib google api analytics api metadata ex writing provisioning to clients analytics lib google api analytics api provisioning ex writing userdeletion to clients analytics lib google api analytics api user deletion ex writing connection ex writing metadata ex writing mix exs writing readme md writing license writing gitignore writing config config exs writing test test helper exs found only discovery revision and or formatting changes not significant enough for a pr fixing file permissions synthtool wrote metadata to clients analytics synth metadata debug synthtool wrote metadata to clients analytics synth metadata autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize with open log file path rt as fp isadirectoryerror is a directory tmpfs src github synthtool logs googleapis elixir google api google internal developers can see the full log
| 0
|
217,209
| 16,848,836,150
|
IssuesEvent
|
2021-06-20 04:12:49
|
hakehuang/infoflow
|
https://api.github.com/repos/hakehuang/infoflow
|
opened
|
tests-ci :kernel.memory_protection.userspace.domain_add_thread_drop_to_user : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
|
area: Tests
|
**Describe the bug**
kernel.memory_protection.userspace.domain_add_thread_drop_to_user test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite userspace
===================================================================
START - test_is_usermode
PASS - test_is_usermode in 0.1 seconds
===================================================================
START - test_write_control
PASS - test_write_control in 0.1 seconds
===================================================================
START - test_disable_mmu_mpu
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
1.0
|
tests-ci :kernel.memory_protection.userspace.domain_add_thread_drop_to_user : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
-
**Describe the bug**
kernel.memory_protection.userspace.domain_add_thread_drop_to_user test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite userspace
===================================================================
START - test_is_usermode
PASS - test_is_usermode in 0.1 seconds
===================================================================
START - test_write_control
PASS - test_write_control in 0.1 seconds
===================================================================
START - test_disable_mmu_mpu
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
test
|
tests ci kernel memory protection userspace domain add thread drop to user zephyr test timeout describe the bug kernel memory protection userspace domain add thread drop to user test is timeout on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p testcase root tests sub test kernel memory protection see error expected behavior test pass impact logs and console output booting zephyr os build zephyr running test suite userspace start test is usermode pass test is usermode in seconds start test write control pass test write control in seconds start test disable mmu mpu assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr
| 1
|
133,632
| 18,298,996,438
|
IssuesEvent
|
2021-10-05 23:51:57
|
bsbtd/Teste
|
https://api.github.com/repos/bsbtd/Teste
|
opened
|
CVE-2018-19361 (High) detected in jackson-databind-2.9.5.jar
|
security vulnerability
|
## CVE-2018-19361 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Teste/liferay-portal/modules/etl/talend/talend-runtime/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- components-api-0.25.3.jar (Root Library)
- daikon-0.27.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.8 might allow attackers to have unspecified impact by leveraging failure to block the openjpa class from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19361>CVE-2018-19361</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.9.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-19361 (High) detected in jackson-databind-2.9.5.jar - ## CVE-2018-19361 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Teste/liferay-portal/modules/etl/talend/talend-runtime/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- components-api-0.25.3.jar (Root Library)
- daikon-0.27.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.8 might allow attackers to have unspecified impact by leveraging failure to block the openjpa class from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19361>CVE-2018-19361</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.9.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file teste liferay portal modules etl talend talend runtime pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy components api jar root library daikon jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before might allow attackers to have unspecified impact by leveraging failure to block the openjpa class from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
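Each record above carries both a human-readable label (`test` / `non_test`) and a numeric `binary_label` (`1` / `0`). A minimal sketch of checking that the two stay consistent, assuming the dump originated from a CSV with those column names (the inline sample below is hypothetical and mirrors only the label columns, not the full schema):

```python
import csv
import io

# Hypothetical two-row sample mirroring the label columns of the dump;
# the real export has many more columns (id, repo, title, body, ...).
SAMPLE = """id,label,binary_label
16676209383,test,1
18298996438,non_test,0
"""

def labels_consistent(rows):
    """True when every row's binary_label agrees with its text label."""
    expected = {"test": "1", "non_test": "0"}
    return all(expected.get(r["label"]) == r["binary_label"] for r in rows)

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(labels_consistent(rows))  # True for the sample above
```

The same check can be pointed at the full export by swapping the `StringIO` for an open file handle.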
|
800,311
| 28,360,526,766
|
IssuesEvent
|
2023-04-12 10:23:22
|
apache/hudi
|
https://api.github.com/repos/apache/hudi
|
closed
|
[SUPPORT]Unable to read hudi table and got an IllegalArgumentException: For input string: "null"
|
schema-and-data-types priority:major spark-sql version-compatibility
|
**Describe the problem you faced**
I use java and spark 3.3 to read a hudi 0.13.0 table following the guide on the official website.
The guide says this will work, but I got an IllegalArgumentException: For input string: "null".
**To Reproduce**
Steps to reproduce the behavior:
1. generate one hudi COW table from a mysql table.
2. get access to the COW table through spark sql
3. the IllegalArgumentException: For input string: "null" shows.
4. I have already changed the datasource and the table structure; it has no relationship with this.
5. I use this command line and I am sure there is data in my parquet file.
./hadoop jar ~/parquet-tools-1.9.0.jar cat hdfs://192.168.5.128:9000/user/spark/hudi/1/2.parquet
**Expected behavior**
the data is shown.
**Environment Description**
* Hudi version :
0.12.2,0.13.0
* Spark version :
3.3.2
* Hive version :
none
* Hadoop version :
3.3.4
* Storage (HDFS/S3/GCS..) :
HDFS
* Running on Docker? (yes/no) :
no. My local laptop
**Additional context**
JDK 1.8
Add any other context about the problem here.
`Map<String, String> hudiConf = new HashMap<>();
hudiConf.put(HoodieWriteConfig.TBL_NAME.key(), "t_yklc_info");
Dataset<Row> demods = getActiveSession().read().options(hudiConf).format("org.apache.hudi").load("/user/spark/hudi/*/*");
demods.createOrReplaceTempView("lcinfo");
demods.printSchema();
logger.info(getActiveSession().conf().get(SQLConf.LEGACY_PARQUET_NANOS_AS_LONG().key()).toString());
logger.info(getActiveSession().conf().get(SQLConf.PARQUET_BINARY_AS_STRING().key()).toString());
logger.info(getActiveSession().conf().get(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key()).toString());
logger.info(getActiveSession().conf().get(SQLConf.CASE_SENSITIVE().key()).toString());
Dataset<Row> ds = getActiveSession().sql("select APP_NO from lcinfo where APP_NO = '1' and STAT_CYCLE = '2'");
ds.printSchema();
ds.show();`
**Stacktrace**
INFO 18:45:03.183 | org.apache.spark.sql.execution.datasources.FileScanRDD | Reading File path: hdfs://192.168.5.128:9000/user/spark/hudi/2/1.parquet, range: 0-3964741, partition values: [empty row]
ERROR 18:45:03.420 | org.apache.spark.executor.Executor | Exception in task 3.0 in stage 1.0 (TID 60)
java.lang.IllegalArgumentException: For input string: "null"
at scala.collection.immutable.StringLike.parseBoolean(StringLike.scala:330) ~[scala-library-2.12.15.jar:?]
at scala.collection.immutable.StringLike.toBoolean(StringLike.scala:289) ~[scala-library-2.12.15.jar:?]
at scala.collection.immutable.StringLike.toBoolean$(StringLike.scala:289) ~[scala-library-2.12.15.jar:?]
at scala.collection.immutable.StringOps.toBoolean(StringOps.scala:33) ~[scala-library-2.12.15.jar:?]
at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.<init>(ParquetSchemaConverter.scala:70) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.parquet.HoodieParquetFileFormatHelper$.buildImplicitSchemaChangeInfo(HoodieParquetFileFormatHelper.scala:30) ~[hudi-spark3.3-bundle_2.12-0.13.0.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.parquet.Spark32PlusHoodieParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(Spark32PlusHoodieParquetFileFormat.scala:231) ~[hudi-spark3.3-bundle_2.12-0.13.0.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:209) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:270) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:561) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source) ~[?:?]
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) ~[?:?]
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.scheduler.Task.run(Task.scala:136) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_362]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_362]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
|
1.0
|
[SUPPORT]Unable to read hudi table and got an IllegalArgumentException: For input string: "null" - **Describe the problem you faced**
I use java and spark 3.3 to read a hudi 0.13.0 table following the guide on the official website.
The guide says this will work, but I got an IllegalArgumentException: For input string: "null".
**To Reproduce**
Steps to reproduce the behavior:
1. generate one hudi COW table from a mysql table.
2. get access to the COW table through spark sql
3. the IllegalArgumentException: For input string: "null" shows.
4. I have already changed the datasource and the table structure; it has no relationship with this.
5. I use this command line and I am sure there is data in my parquet file.
./hadoop jar ~/parquet-tools-1.9.0.jar cat hdfs://192.168.5.128:9000/user/spark/hudi/1/2.parquet
**Expected behavior**
the data is shown.
**Environment Description**
* Hudi version :
0.12.2,0.13.0
* Spark version :
3.3.2
* Hive version :
none
* Hadoop version :
3.3.4
* Storage (HDFS/S3/GCS..) :
HDFS
* Running on Docker? (yes/no) :
no. My local laptop
**Additional context**
JDK 1.8
Add any other context about the problem here.
`Map<String, String> hudiConf = new HashMap<>();
hudiConf.put(HoodieWriteConfig.TBL_NAME.key(), "t_yklc_info");
Dataset<Row> demods = getActiveSession().read().options(hudiConf).format("org.apache.hudi").load("/user/spark/hudi/*/*");
demods.createOrReplaceTempView("lcinfo");
demods.printSchema();
logger.info(getActiveSession().conf().get(SQLConf.LEGACY_PARQUET_NANOS_AS_LONG().key()).toString());
logger.info(getActiveSession().conf().get(SQLConf.PARQUET_BINARY_AS_STRING().key()).toString());
logger.info(getActiveSession().conf().get(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key()).toString());
logger.info(getActiveSession().conf().get(SQLConf.CASE_SENSITIVE().key()).toString());
Dataset<Row> ds = getActiveSession().sql("select APP_NO from lcinfo where APP_NO = '1' and STAT_CYCLE = '2'");
ds.printSchema();
ds.show();`
**Stacktrace**
INFO 18:45:03.183 | org.apache.spark.sql.execution.datasources.FileScanRDD | Reading File path: hdfs://192.168.5.128:9000/user/spark/hudi/2/1.parquet, range: 0-3964741, partition values: [empty row]
ERROR 18:45:03.420 | org.apache.spark.executor.Executor | Exception in task 3.0 in stage 1.0 (TID 60)
java.lang.IllegalArgumentException: For input string: "null"
at scala.collection.immutable.StringLike.parseBoolean(StringLike.scala:330) ~[scala-library-2.12.15.jar:?]
at scala.collection.immutable.StringLike.toBoolean(StringLike.scala:289) ~[scala-library-2.12.15.jar:?]
at scala.collection.immutable.StringLike.toBoolean$(StringLike.scala:289) ~[scala-library-2.12.15.jar:?]
at scala.collection.immutable.StringOps.toBoolean(StringOps.scala:33) ~[scala-library-2.12.15.jar:?]
at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.<init>(ParquetSchemaConverter.scala:70) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.parquet.HoodieParquetFileFormatHelper$.buildImplicitSchemaChangeInfo(HoodieParquetFileFormatHelper.scala:30) ~[hudi-spark3.3-bundle_2.12-0.13.0.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.parquet.Spark32PlusHoodieParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(Spark32PlusHoodieParquetFileFormat.scala:231) ~[hudi-spark3.3-bundle_2.12-0.13.0.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:209) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:270) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:561) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source) ~[?:?]
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) ~[?:?]
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364) ~[spark-sql_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.scheduler.Task.run(Task.scala:136) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) ~[spark-core_2.12-3.3.2.jar:3.3.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_362]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_362]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
|
non_test
|
unable to read hudi table and got an illegalargumentexception for input string null describe the problem you faced i use java and spark to read hudi table following the guide on offical website the guide says this will work but i got an illegalargumentexception for input string null to reproduce steps to reproduce the behavior generate one hudi cow table from mysql table get access to the cow table through spark sql the illegalargumentexception for input string null shows i have already changed the datasource and the table structure it has no relationship with this i use this command line and i am sure there are datas in my parquet file hadoop jar parquet tools jar cat hdfs user spark hudi parquet expected behavior the data is shown environment description hudi version spark version hive version none hadoop version storage hdfs gcs hdfs running on docker yes no no my local laptop additional context jdk add any other context about the problem here map hudiconf new hashmap hudiconf put hoodiewriteconfig tbl name key t yklc info dataset demods getactivesession read options hudiconf format org apache hudi load user spark hudi demods createorreplacetempview lcinfo demods printschema logger info getactivesession conf get sqlconf legacy parquet nanos as long key tostring logger info getactivesession conf get sqlconf parquet binary as string key tostring logger info getactivesession conf get sqlconf parquet as timestamp key tostring logger info getactivesession conf get sqlconf case sensitive key tostring dataset ds getactivesession sql select app no from lcinfo where app no and stat cycle ds printschema ds show stacktrace info org apache spark sql execution datasources filescanrdd reading file path hdfs user spark hudi parquet range partition values error org apache spark executor executor exception in task in stage tid java lang illegalargumentexception for input string null at scala collection immutable stringlike parseboolean stringlike scala at scala collection 
immutable stringlike toboolean stringlike scala at scala collection immutable stringlike toboolean stringlike scala at scala collection immutable stringops toboolean stringops scala at org apache spark sql execution datasources parquet parquettosparkschemaconverter parquetschemaconverter scala at org apache spark sql execution datasources parquet hoodieparquetfileformathelper buildimplicitschemachangeinfo hoodieparquetfileformathelper scala at org apache spark sql execution datasources parquet anonfun buildreaderwithpartitionvalues scala at org apache spark sql execution datasources filescanrdd anon org apache spark sql execution datasources filescanrdd anon readcurrentfile filescanrdd scala at org apache spark sql execution datasources filescanrdd anon nextiterator filescanrdd scala at org apache spark sql execution datasources filescanrdd anon hasnext filescanrdd scala at org apache spark sql execution filesourcescanexec anon hasnext datasourcescanexec scala at org apache spark sql catalyst expressions generatedclass columnartorow nextbatch unknown source at org apache spark sql catalyst expressions generatedclass processnext unknown source at org apache spark sql execution bufferedrowiterator hasnext bufferedrowiterator java at org apache spark sql execution wholestagecodegenexec anon hasnext wholestagecodegenexec scala at org apache spark sql execution sparkplan anonfun getbytearrayrdd sparkplan scala at org apache spark rdd rdd anonfun mappartitionsinternal rdd scala at org apache spark rdd rdd anonfun mappartitionsinternal adapted rdd scala at org apache spark rdd mappartitionsrdd compute mappartitionsrdd scala at org apache spark rdd rdd computeorreadcheckpoint rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner anonfun run executor scala at org apache spark util utils trywithsafefinally 
utils scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 0
|
215,510
| 16,676,209,383
|
IssuesEvent
|
2021-06-07 16:29:55
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
NixOS Plasma test should check that KDED is running
|
0.kind: enhancement 2.status: stale 3.skill: good-first-bug 6.topic: qt/kde 6.topic: testing
|
The NixOS test for Plasma should check that KDED is running. We have had problems in the past with KDED silently crashing. This affects the availability of important services like global shortcut keys.
|
1.0
|
NixOS Plasma test should check that KDED is running - The NixOS test for Plasma should check that KDED is running. We have had problems in the past with KDED silently crashing. This affects the availability of important services like global shortcut keys.
|
test
|
nixos plasma test should check that kded is running the nixos test for plasma should check that kded is running we have had problems in the past with kded silently crashing this affects the availability of important services like global shortcut keys
| 1
|
224,565
| 17,755,834,364
|
IssuesEvent
|
2021-08-28 18:40:34
|
nrwl/nx
|
https://api.github.com/repos/nrwl/nx
|
closed
|
Incorrect behavior in names helper
|
type: bug blocked: retry with latest scope: misc
|
When using the `names` in generators and providing a `path` in the `name` field, for example: `foo/bar/baz` it'll take the whole string instead of the last bit.
```ts
const { className } = names('foo/bar/baz')
// FooBarBaz
```
The expected behavior is to return only the last bit which is the file name:
```ts
const { className } = names('foo/bar/baz')
// Baz
```
|
1.0
|
Incorrect behavior in names helper - When using the `names` in generators and providing a `path` in the `name` field, for example: `foo/bar/baz` it'll take the whole string instead of the last bit.
```ts
const { className } = names('foo/bar/baz')
// FooBarBaz
```
The expected behavior is to return only the last bit which is the file name:
```ts
const { className } = names('foo/bar/baz')
// Baz
```
|
test
|
incorrect behavior in names helper when using the names in generators and providing a path in the name field for example foo bar baz it ll take the whole string instead of the last bit ts const classname names foo bar baz foobarbaz the expected behavior is to return only the last bit which is the file name ts const classname names foo bar baz baz
| 1
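The record above describes the expected fix: take only the last path segment before building the class name. A sketch of that behavior in Python (a hypothetical re-implementation for illustration, not Nx's actual `names` helper):

```python
import re

def class_name(path):
    """Keep only the last '/'-separated segment, then PascalCase it,
    so 'foo/bar/baz' yields 'Baz' rather than 'FooBarBaz'."""
    base = path.split("/")[-1]
    parts = re.split(r"[-_\s]+", base)
    return "".join(p[:1].upper() + p[1:] for p in parts if p)

print(class_name("foo/bar/baz"))        # Baz
print(class_name("foo/bar/my-widget"))  # MyWidget
```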
|
299,212
| 9,205,232,441
|
IssuesEvent
|
2019-03-08 10:00:11
|
qissue-bot/QGIS
|
https://api.github.com/repos/qissue-bot/QGIS
|
closed
|
on last windows version georeferencer returns error(14) :unable to open database SRS.db
|
Category: C++ Plugins Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
|
---
Author Name: **pascal-ferrand-tiscali-fr -** (pascal-ferrand-tiscali-fr -)
Original Redmine Issue: 621, https://issues.qgis.org/issues/621
Original Assignee: Magnus Homann
---
When I want to use georeferencer on windows qgis-0.8.1-preview1.exe it's replied "error(14): unable to open database SRS.db" after that the good window is opened the map draws but zoom and others "buttons" don't appear
|
1.0
|
on last windows version georeferencer returns error(14) :unable to open database SRS.db - ---
Author Name: **pascal-ferrand-tiscali-fr -** (pascal-ferrand-tiscali-fr -)
Original Redmine Issue: 621, https://issues.qgis.org/issues/621
Original Assignee: Magnus Homann
---
When I want to use georeferencer on windows qgis-0.8.1-preview1.exe it's replied "error(14): unable to open database SRS.db" after that the good window is opened the map draws but zoom and others "buttons" don't appear
|
non_test
|
on last windows version georeferencer returns error unable to open database srs db author name pascal ferrand tiscali fr pascal ferrand tiscali fr original redmine issue original assignee magnus homann when i want to use georeferencer on windows qgis exe it s replied error unable to open database srs db after that the good window is opened the map draws but zoom and others buttons don t appear
| 0
|
91,180
| 26,288,421,901
|
IssuesEvent
|
2023-01-08 04:39:13
|
sonic-net/sonic-buildimage
|
https://api.github.com/repos/sonic-net/sonic-buildimage
|
closed
|
[Build] Failed to build isc dhcp
|
Build :hammer:
|
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/Azure/SONiC/wiki#report-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
#### Description
Build failed due to
```
16:51:17 make[1]: Entering directory '/sonic/src/isc-dhcp'
16:51:17 # Remove any stale files
16:51:17 rm -rf ./isc-dhcp-4.4.1
16:51:17 # Get isc-dhcp release, debian files
16:51:17 dget http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc
16:51:17 pushd ./isc-dhcp-4.4.1
16:51:17 # Create a git repository here for stg to apply patches
16:51:17 git init
16:51:17 git add -f *
16:51:17 git commit -qm "initial commit"
16:51:17 # Apply patches
16:51:17 stg init
16:51:17 stg import -s ../patch/series
16:51:17 # Build source and Debian packages
16:51:17 dpkg-buildpackage -rfakeroot -b -us -uc -j30 --admindir /sonic/dpkg/tmp.lJosSFVurg
16:51:17 popd
16:51:17 # Move the newly-built .deb packages to the destination directory
16:51:17 mv isc-dhcp-relay-dbgsym_4.4.1-2.3_amd64.deb isc-dhcp-relay_4.4.1-2.3_amd64.deb /sonic/target/debs/bullseye/
16:51:17 dget: retrieving http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc
16:51:17 get_url_version http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc failed
16:51:17 dget: curl isc-dhcp_4.4.1-2.3.dsc http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc failed
```
The root cause is that isc-dhcp_4.4.1-2.3.dsc is no longer available.
#### Steps to reproduce the issue:
Build sonic
#### Describe the results you received:
```
dget: curl isc-dhcp_4.4.1-2.3.dsc http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc failed
```
#### Describe the results you expected:
No error
#### Output of `show version`:
```
(paste your output here)
```
#### Output of `show techsupport`:
```
(paste your output here or download and attach the file here )
```
#### Additional information you deem important (e.g. issue happens only occasionally):
<!--
Also attach debug file produced by `sudo generate_dump`
-->
|
1.0
|
[Build] Failed to build isc dhcp - <!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/Azure/SONiC/wiki#report-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
#### Description
Build failed due to
```
16:51:17 make[1]: Entering directory '/sonic/src/isc-dhcp'
16:51:17 # Remove any stale files
16:51:17 rm -rf ./isc-dhcp-4.4.1
16:51:17 # Get isc-dhcp release, debian files
16:51:17 dget http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc
16:51:17 pushd ./isc-dhcp-4.4.1
16:51:17 # Create a git repository here for stg to apply patches
16:51:17 git init
16:51:17 git add -f *
16:51:17 git commit -qm "initial commit"
16:51:17 # Apply patches
16:51:17 stg init
16:51:17 stg import -s ../patch/series
16:51:17 # Build source and Debian packages
16:51:17 dpkg-buildpackage -rfakeroot -b -us -uc -j30 --admindir /sonic/dpkg/tmp.lJosSFVurg
16:51:17 popd
16:51:17 # Move the newly-built .deb packages to the destination directory
16:51:17 mv isc-dhcp-relay-dbgsym_4.4.1-2.3_amd64.deb isc-dhcp-relay_4.4.1-2.3_amd64.deb /sonic/target/debs/bullseye/
16:51:17 dget: retrieving http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc
16:51:17 get_url_version http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc failed
16:51:17 dget: curl isc-dhcp_4.4.1-2.3.dsc http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc failed
```
The root cause is that isc-dhcp_4.4.1-2.3.dsc is no longer available.
#### Steps to reproduce the issue:
Build sonic
#### Describe the results you received:
```
dget: curl isc-dhcp_4.4.1-2.3.dsc http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc failed
```
#### Describe the results you expected:
No error
#### Output of `show version`:
```
(paste your output here)
```
#### Output of `show techsupport`:
```
(paste your output here or download and attach the file here )
```
#### Additional information you deem important (e.g. issue happens only occasionally):
<!--
Also attach debug file produced by `sudo generate_dump`
-->
|
non_test
|
failed to build isc dhcp if you are reporting a new issue make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead if you suspect your issue is a bug please edit your issue description to include the bug report information shown below if you fail to provide this information within days we cannot debug your issue and will close it we will however reopen it if you later provide the information for more information about reporting issues see general support information the github issue tracker is for bug reports and feature requests general support can be found at the following locations sonic support forums bug report information use the commands below to provide key information from your environment you do not have to include this information if this is a feature request description build failed due to make entering directory sonic src isc dhcp remove any stale files rm rf isc dhcp get isc dhcp release debian files dget pushd isc dhcp create a git repository here for stg to apply patches git init git add f git commit qm initial commit apply patches stg init stg import s patch series build source and debian packages dpkg buildpackage rfakeroot b us uc admindir sonic dpkg tmp ljossfvurg popd move the newly built deb packages to the destination directory mv isc dhcp relay dbgsym deb isc dhcp relay deb sonic target debs bullseye dget retrieving get url version failed dget curl isc dhcp dsc failed the root cause is that isc dhcp dsc is no longer available steps to reproduce the issue build sonic describe the results you received dget curl isc dhcp dsc failed describe the results you expected no error output of show version paste your output here output of show techsupport paste your output here or download and attach the file here additional information you deem important e g issue happens only occasionally 
also attach debug file produced by sudo generate dump
| 0
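The build failure in the record above happens because `isc-dhcp_4.4.1-2.3.dsc` was removed from the live `deb.debian.org` pool; superseded Debian source versions normally remain retrievable from `snapshot.debian.org`. A minimal sketch of a fallback-URL helper a build script could try in order (the snapshot timestamp and mirror list are illustrative assumptions, not the fix SONiC actually shipped):

```python
# Build candidate download URLs for a Debian source package, preferring the
# primary mirror but falling back to snapshot.debian.org for versions that
# have been dropped from the main pool.  Mirror URLs are illustrative.

def candidate_dsc_urls(package: str, version: str, snapshot_ts: str) -> list[str]:
    pool_dir = package[0]  # the Debian pool is sharded by first letter
    path = f"pool/main/{pool_dir}/{package}/{package}_{version}.dsc"
    return [
        f"http://deb.debian.org/debian/{path}",
        f"https://snapshot.debian.org/archive/debian/{snapshot_ts}/{path}",
    ]
```

A caller would feed each URL to `dget`/`curl` until one succeeds, e.g. `candidate_dsc_urls("isc-dhcp", "4.4.1-2.3", "20210101T000000Z")`.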
|
335,298
| 30,021,570,439
|
IssuesEvent
|
2023-06-27 00:11:51
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Test failure: GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch
|
QA/No release-notes/exclude ci-concern bot/type/test bot/platform/windows bot/arch/x64 bot/channel/nightly bot/branch/master
|
Greetings human!
Bad news. `GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch` [failed on windows x64 nightly master](https://ci.brave.com/job/brave-browser-build-windows-x64-asan/530/testReport/junit/(root)/GooglePFTestFieldTrialOverride/windows_x64___test_browser_chromium___BaseGoogleSearchHasPFForPrefetch).
<details>
<summary>Stack trace</summary>
```
[ RUN ] GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch
[16528:17220:0623/232608.119:WARNING:chrome_main_delegate.cc(594)] This is Chrome version 115.1.55.10 (not a warning)
[16528:17220:0623/232608.131:WARNING:chrome_browser_cloud_management_controller.cc(87)] Could not create policy manager as CBCM is not enabled.
[16528:17220:0623/232608.258:ERROR:chrome_browser_cloud_management_controller.cc(162)] Cloud management controller initialization aborted as CBCM is not enabled.
[16528:17220:0623/232608.500:WARNING:external_provider_impl.cc(513)] Malformed extension dictionary for extension: odbfpeeihdkbihmopkbjmoonfanlbfcl. Key external_update_url has value "", which is not a valid URL.
..\..\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc(3252): error: Value of: base::Contains(suggest_generated_url, "pf=spp")
Actual: false
Expected: true
Stack trace:
Backtrace:
GooglePFTestFieldTrialOverride_BaseGoogleSearchHasPFForPrefetch_Test::RunTestOnMainThread [0x00007FF7ADADEF19+1321] (C:\jenkins\x64-nightly\src\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc:3252)
content::BrowserTestBase::ProxyRunTestOnMainThreadLoop [0x00007FF7C2620E3B+2043] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:901)
base::internal::Invoker<base::internal::BindState<void (content::BrowserTestBase::*)(),base::internal::UnretainedWrapper<content::BrowserTestBase,base::unretained_traits::MayNotDangle,0> >,void ()>::RunOnce [0x00007FF7C2626750+366] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
content::BrowserMainLoop::InterceptMainMessageLoopRun [0x00007FF7BAF902E2+424] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1039)
content::BrowserMainLoop::RunMainMessageLoop [0x00007FF7BAF904D0+222] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1051)
content::BrowserMainRunnerImpl::Run [0x00007FF7BAF975F4+42] (C:\jenkins\x64-nightly\src\content\browser\browser_main_runner_impl.cc:160)
content::BrowserMain [0x00007FF7BAF8846B+535] (C:\jenkins\x64-nightly\src\content\browser\browser_main.cc:34)
content::RunBrowserProcessMain [0x00007FF7BFC59116+650] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:707)
content::ContentMainRunnerImpl::RunBrowser [0x00007FF7BFC5D1D2+1210] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1284)
content::ContentMainRunnerImpl::Run [0x00007FF7BFC5C91C+2872] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1142)
content::RunContentProcess [0x00007FF7BFC56EEC+2910] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:326)
content::ContentMain [0x00007FF7BFC57BF6+604] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:343)
content::BrowserTestBase::SetUp [0x00007FF7C261E8CE+5912] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:580)
InProcessBrowserTest::SetUp [0x00007FF7C123BAC9+1107] (C:\jenkins\x64-nightly\src\chrome\test\base\in_process_browser_test.cc:492)
..\..\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc(3261): error: Value of: base::Contains(navigation_generated_url, "pf=npp")
Actual: false
Expected: true
Stack trace:
Backtrace:
GooglePFTestFieldTrialOverride_BaseGoogleSearchHasPFForPrefetch_Test::RunTestOnMainThread [0x00007FF7ADADF2A2+2226] (C:\jenkins\x64-nightly\src\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc:3261)
content::BrowserTestBase::ProxyRunTestOnMainThreadLoop [0x00007FF7C2620E3B+2043] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:901)
base::internal::Invoker<base::internal::BindState<void (content::BrowserTestBase::*)(),base::internal::UnretainedWrapper<content::BrowserTestBase,base::unretained_traits::MayNotDangle,0> >,void ()>::RunOnce [0x00007FF7C2626750+366] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
content::BrowserMainLoop::InterceptMainMessageLoopRun [0x00007FF7BAF902E2+424] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1039)
content::BrowserMainLoop::RunMainMessageLoop [0x00007FF7BAF904D0+222] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1051)
content::BrowserMainRunnerImpl::Run [0x00007FF7BAF975F4+42] (C:\jenkins\x64-nightly\src\content\browser\browser_main_runner_impl.cc:160)
content::BrowserMain [0x00007FF7BAF8846B+535] (C:\jenkins\x64-nightly\src\content\browser\browser_main.cc:34)
content::RunBrowserProcessMain [0x00007FF7BFC59116+650] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:707)
content::ContentMainRunnerImpl::RunBrowser [0x00007FF7BFC5D1D2+1210] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1284)
content::ContentMainRunnerImpl::Run [0x00007FF7BFC5C91C+2872] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1142)
content::RunContentProcess [0x00007FF7BFC56EEC+2910] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:326)
content::ContentMain [0x00007FF7BFC57BF6+604] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:343)
content::BrowserTestBase::SetUp [0x00007FF7C261E8CE+5912] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:580)
InProcessBrowserTest::SetUp [0x00007FF7C123BAC9+1107] (C:\jenkins\x64-nightly\src\chrome\test\base\in_process_browser_test.cc:492)
[16528:17220:0623/232612.419:WARNING:brave_stats_updater_params.cc(129)] Couldn't find the time of first run. This should only happen when running tests, but never in production code.
[ FAILED ] GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch, where TypeParam = and GetParam() = (4843 ms)
[ FAILED ] GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch
```
</details>
|
1.0
|
Test failure: GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch - Greetings human!
Bad news. `GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch` [failed on windows x64 nightly master](https://ci.brave.com/job/brave-browser-build-windows-x64-asan/530/testReport/junit/(root)/GooglePFTestFieldTrialOverride/windows_x64___test_browser_chromium___BaseGoogleSearchHasPFForPrefetch).
<details>
<summary>Stack trace</summary>
```
[ RUN ] GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch
[16528:17220:0623/232608.119:WARNING:chrome_main_delegate.cc(594)] This is Chrome version 115.1.55.10 (not a warning)
[16528:17220:0623/232608.131:WARNING:chrome_browser_cloud_management_controller.cc(87)] Could not create policy manager as CBCM is not enabled.
[16528:17220:0623/232608.258:ERROR:chrome_browser_cloud_management_controller.cc(162)] Cloud management controller initialization aborted as CBCM is not enabled.
[16528:17220:0623/232608.500:WARNING:external_provider_impl.cc(513)] Malformed extension dictionary for extension: odbfpeeihdkbihmopkbjmoonfanlbfcl. Key external_update_url has value "", which is not a valid URL.
..\..\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc(3252): error: Value of: base::Contains(suggest_generated_url, "pf=spp")
Actual: false
Expected: true
Stack trace:
Backtrace:
GooglePFTestFieldTrialOverride_BaseGoogleSearchHasPFForPrefetch_Test::RunTestOnMainThread [0x00007FF7ADADEF19+1321] (C:\jenkins\x64-nightly\src\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc:3252)
content::BrowserTestBase::ProxyRunTestOnMainThreadLoop [0x00007FF7C2620E3B+2043] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:901)
base::internal::Invoker<base::internal::BindState<void (content::BrowserTestBase::*)(),base::internal::UnretainedWrapper<content::BrowserTestBase,base::unretained_traits::MayNotDangle,0> >,void ()>::RunOnce [0x00007FF7C2626750+366] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
content::BrowserMainLoop::InterceptMainMessageLoopRun [0x00007FF7BAF902E2+424] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1039)
content::BrowserMainLoop::RunMainMessageLoop [0x00007FF7BAF904D0+222] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1051)
content::BrowserMainRunnerImpl::Run [0x00007FF7BAF975F4+42] (C:\jenkins\x64-nightly\src\content\browser\browser_main_runner_impl.cc:160)
content::BrowserMain [0x00007FF7BAF8846B+535] (C:\jenkins\x64-nightly\src\content\browser\browser_main.cc:34)
content::RunBrowserProcessMain [0x00007FF7BFC59116+650] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:707)
content::ContentMainRunnerImpl::RunBrowser [0x00007FF7BFC5D1D2+1210] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1284)
content::ContentMainRunnerImpl::Run [0x00007FF7BFC5C91C+2872] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1142)
content::RunContentProcess [0x00007FF7BFC56EEC+2910] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:326)
content::ContentMain [0x00007FF7BFC57BF6+604] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:343)
content::BrowserTestBase::SetUp [0x00007FF7C261E8CE+5912] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:580)
InProcessBrowserTest::SetUp [0x00007FF7C123BAC9+1107] (C:\jenkins\x64-nightly\src\chrome\test\base\in_process_browser_test.cc:492)
..\..\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc(3261): error: Value of: base::Contains(navigation_generated_url, "pf=npp")
Actual: false
Expected: true
Stack trace:
Backtrace:
GooglePFTestFieldTrialOverride_BaseGoogleSearchHasPFForPrefetch_Test::RunTestOnMainThread [0x00007FF7ADADF2A2+2226] (C:\jenkins\x64-nightly\src\chrome\browser\preloading\prefetch\search_prefetch\search_prefetch_service_browsertest.cc:3261)
content::BrowserTestBase::ProxyRunTestOnMainThreadLoop [0x00007FF7C2620E3B+2043] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:901)
base::internal::Invoker<base::internal::BindState<void (content::BrowserTestBase::*)(),base::internal::UnretainedWrapper<content::BrowserTestBase,base::unretained_traits::MayNotDangle,0> >,void ()>::RunOnce [0x00007FF7C2626750+366] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
content::BrowserMainLoop::InterceptMainMessageLoopRun [0x00007FF7BAF902E2+424] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1039)
content::BrowserMainLoop::RunMainMessageLoop [0x00007FF7BAF904D0+222] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1051)
content::BrowserMainRunnerImpl::Run [0x00007FF7BAF975F4+42] (C:\jenkins\x64-nightly\src\content\browser\browser_main_runner_impl.cc:160)
content::BrowserMain [0x00007FF7BAF8846B+535] (C:\jenkins\x64-nightly\src\content\browser\browser_main.cc:34)
content::RunBrowserProcessMain [0x00007FF7BFC59116+650] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:707)
content::ContentMainRunnerImpl::RunBrowser [0x00007FF7BFC5D1D2+1210] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1284)
content::ContentMainRunnerImpl::Run [0x00007FF7BFC5C91C+2872] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1142)
content::RunContentProcess [0x00007FF7BFC56EEC+2910] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:326)
content::ContentMain [0x00007FF7BFC57BF6+604] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:343)
content::BrowserTestBase::SetUp [0x00007FF7C261E8CE+5912] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:580)
InProcessBrowserTest::SetUp [0x00007FF7C123BAC9+1107] (C:\jenkins\x64-nightly\src\chrome\test\base\in_process_browser_test.cc:492)
[16528:17220:0623/232612.419:WARNING:brave_stats_updater_params.cc(129)] Couldn't find the time of first run. This should only happen when running tests, but never in production code.
[ FAILED ] GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch, where TypeParam = and GetParam() = (4843 ms)
[ FAILED ] GooglePFTestFieldTrialOverride.BaseGoogleSearchHasPFForPrefetch
```
</details>
|
test
|
test failure googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch greetings human bad news googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch stack trace googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch this is chrome version not a warning could not create policy manager as cbcm is not enabled cloud management controller initialization aborted as cbcm is not enabled malformed extension dictionary for extension odbfpeeihdkbihmopkbjmoonfanlbfcl key external update url has value which is not a valid url chrome browser preloading prefetch search prefetch search prefetch service browsertest cc error value of base contains suggest generated url pf spp actual false expected true stack trace backtrace googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch test runtestonmainthread c jenkins nightly src chrome browser preloading prefetch search prefetch search prefetch service browsertest cc content browsertestbase proxyruntestonmainthreadloop c jenkins nightly src content public test browser test base cc base internal invoker void runonce c jenkins nightly src base functional bind internal h content browsermainloop interceptmainmessagelooprun c jenkins nightly src content browser browser main loop cc content browsermainloop runmainmessageloop c jenkins nightly src content browser browser main loop cc content browsermainrunnerimpl run c jenkins nightly src content browser browser main runner impl cc content browsermain c jenkins nightly src content browser browser main cc content runbrowserprocessmain c jenkins nightly src content app content main runner impl cc content contentmainrunnerimpl runbrowser c jenkins nightly src content app content main runner impl cc content contentmainrunnerimpl run c jenkins nightly src content app content main runner impl cc content runcontentprocess c jenkins nightly src content app content main cc content contentmain c jenkins nightly src content app content main cc content browsertestbase 
setup c jenkins nightly src content public test browser test base cc inprocessbrowsertest setup c jenkins nightly src chrome test base in process browser test cc chrome browser preloading prefetch search prefetch search prefetch service browsertest cc error value of base contains navigation generated url pf npp actual false expected true stack trace backtrace googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch test runtestonmainthread c jenkins nightly src chrome browser preloading prefetch search prefetch search prefetch service browsertest cc content browsertestbase proxyruntestonmainthreadloop c jenkins nightly src content public test browser test base cc base internal invoker void runonce c jenkins nightly src base functional bind internal h content browsermainloop interceptmainmessagelooprun c jenkins nightly src content browser browser main loop cc content browsermainloop runmainmessageloop c jenkins nightly src content browser browser main loop cc content browsermainrunnerimpl run c jenkins nightly src content browser browser main runner impl cc content browsermain c jenkins nightly src content browser browser main cc content runbrowserprocessmain c jenkins nightly src content app content main runner impl cc content contentmainrunnerimpl runbrowser c jenkins nightly src content app content main runner impl cc content contentmainrunnerimpl run c jenkins nightly src content app content main runner impl cc content runcontentprocess c jenkins nightly src content app content main cc content contentmain c jenkins nightly src content app content main cc content browsertestbase setup c jenkins nightly src content public test browser test base cc inprocessbrowsertest setup c jenkins nightly src chrome test base in process browser test cc couldn t find the time of first run this should only happen when running tests but never in production code googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch where typeparam and getparam ms 
googlepftestfieldtrialoverride basegooglesearchhaspfforprefetch
| 1
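The failing assertions in the record above look for `pf=spp` / `pf=npp` by raw substring search (`base::Contains`). As a side note, a sketch of the more robust pattern — parsing the query string so a `pf` check cannot be satisfied by an unrelated parameter such as `xpf=spp` — using only the Python standard library:

```python
from urllib.parse import urlsplit, parse_qs

def query_param_equals(url: str, key: str, expected: str) -> bool:
    """True if any value of `key` in the URL's query string equals `expected`."""
    params = parse_qs(urlsplit(url).query)
    return expected in params.get(key, [])
```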
|
193,865
| 14,663,533,418
|
IssuesEvent
|
2020-12-29 09:55:08
|
xrobin/pROC
|
https://api.github.com/repos/xrobin/pROC
|
closed
|
Suggesting vdiffr but not using it conditionally
|
test
|
Received an email from CRAN about vdiffr not being properly conditional in tests.
Followup from Lionel Henry:
```
> A simple way to make vdiffr usage conditional is to define your own
> `expect_doppelganger()` as follows:
>
> ```r
> expect_doppelganger <- function(title, fig, path = NULL, ...) {
> testthat::skip_if_not_installed("vdiffr")
> vdiffr::expect_doppelganger(title, fig, path = path, ...)
> }
> ```
>
> You then call `expect_doppelganger()` without the `vdiffr::` prefix.
> See https://github.com/lionel-/ggstance/commit/eac216f6.
```
This needs to be fixed until January 12, 2021.
|
1.0
|
Suggesting vdiffr but not using it conditionally - Received an email from CRAN about vdiffr not being properly conditional in tests.
Followup from Lionel Henry:
```
> A simple way to make vdiffr usage conditional is to define your own
> `expect_doppelganger()` as follows:
>
> ```r
> expect_doppelganger <- function(title, fig, path = NULL, ...) {
> testthat::skip_if_not_installed("vdiffr")
> vdiffr::expect_doppelganger(title, fig, path = path, ...)
> }
> ```
>
> You then call `expect_doppelganger()` without the `vdiffr::` prefix.
> See https://github.com/lionel-/ggstance/commit/eac216f6.
```
This needs to be fixed until January 12, 2021.
|
test
|
suggesting vdiffr but not using it conditionally received an email from cran about vdiffr not being properly conditional in tests followup from lionel henry a simple way to make vdiffr usage conditional is to define your own expect doppelganger as follows r expect doppelganger function title fig path null testthat skip if not installed vdiffr vdiffr expect doppelganger title fig path path you then call expect doppelganger without the vdiffr prefix see this needs to be fixed until january
| 1
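The R wrapper quoted above makes a suggested package truly optional by skipping the test when it is absent. The same guard pattern in Python looks like the sketch below; `pytest.importorskip` is the usual shortcut, but this version uses only the standard library so it does not assume pytest is installed:

```python
import importlib.util
import unittest

def require_module(name: str):
    """Skip the calling test when an optional dependency is not installed."""
    if importlib.util.find_spec(name) is None:
        raise unittest.SkipTest(f"optional dependency {name!r} is not installed")
    return importlib.import_module(name)
```

Called at the top of a test, the raised `SkipTest` marks the test skipped rather than failed under `unittest`, mirroring `testthat::skip_if_not_installed`.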
|
49,455
| 6,027,034,353
|
IssuesEvent
|
2017-06-08 12:54:30
|
hacklabr/mapasculturais
|
https://api.github.com/repos/hacklabr/mapasculturais
|
closed
|
Problemas com links a partir da planilha no Excel
|
status:test-ready
|
Excel e outras aplicações do Office verificam o link clicado antes de levar o usuário para aquela página. Esse "comportamento" é para ver se o link é para outro documento office que seria aberto na aplicação, e não no navegador.
Detalhes em: http://stackoverflow.com/questions/2653626/why-are-cookies-unrecognized-when-a-link-is-clicked-from-an-external-source-i-e
O problema é que ao fazer essa verificação, ele não está levando em conta os cookies do navegador. Então se o link aponta para uma página que requer autenticação no Mapas, o que ele vai retornar é o link para a página de autenticação (pq o mapas redireciona aquela requisição).
Então o usuário tem a experiência de clicar em um link e cair na página de autenticação, mesmo já estando logado. E ainda por cima, perde o redirect_to, então ele simplesmente não vai para o link que deveria ir.
Isso foi percebido na planilha de inscritos em uma chamada, que traz um link para a Inscrição. Porém esse link fica inacessível a partir do excel.
SOLUÇÕES POSSÍVEIS:
1) Na planilha exportada, não colocar o link em uma tag A clicável. Colocar o link todo em uma coluna, que a pessoa tenha q copiar e colar no navegador.
2) Colocar uma verificação de user agent no Mapas, que verifica se a requisição tá vindo do Office, e aí não redireciona ele imediatamente pra autenticação, de maneira que ele leve o usuário pro lugar certo.
Essa abordagem está descrita aqui: https://github.com/spilliton/fix_microsoft_links/blob/master/lib/fix_microsoft_links.rb
O caminho 1 é mais simples. O caminho 2 tem que avaliar.
|
1.0
|
Problemas com links a partir da planilha no Excel - Excel e outras aplicações do Office verificam o link clicado antes de levar o usuário para aquela página. Esse "comportamento" é para ver se o link é para outro documento office que seria aberto na aplicação, e não no navegador.
Detalhes em: http://stackoverflow.com/questions/2653626/why-are-cookies-unrecognized-when-a-link-is-clicked-from-an-external-source-i-e
O problema é que ao fazer essa verificação, ele não está levando em conta os cookies do navegador. Então se o link aponta para uma página que requer autenticação no Mapas, o que ele vai retornar é o link para a página de autenticação (pq o mapas redireciona aquela requisição).
Então o usuário tem a experiência de clicar em um link e cair na página de autenticação, mesmo já estando logado. E ainda por cima, perde o redirect_to, então ele simplesmente não vai para o link que deveria ir.
Isso foi percebido na planilha de inscritos em uma chamada, que traz um link para a Inscrição. Porém esse link fica inacessível a partir do excel.
SOLUÇÕES POSSÍVEIS:
1) Na planilha exportada, não colocar o link em uma tag A clicável. Colocar o link todo em uma coluna, que a pessoa tenha q copiar e colar no navegador.
2) Colocar uma verificação de user agent no Mapas, que verifica se a requisição tá vindo do Office, e aí não redireciona ele imediatamente pra autenticação, de maneira que ele leve o usuário pro lugar certo.
Essa abordagem está descrita aqui: https://github.com/spilliton/fix_microsoft_links/blob/master/lib/fix_microsoft_links.rb
O caminho 1 é mais simples. O caminho 2 tem que avaliar.
|
test
|
problemas com links a partir da planilha no excel excel e outras aplicações do office verificam o link clicado antes de levar o usuário para aquela página esse comportamento é para ver se o link é para outro documento office que seria aberto na aplicação e não no navegador detalhes em o problema é que ao fazer essa verificação ele não está levando em conta os cookies do navegador então se o link aponta para uma página que requer autenticação no mapas o que ele vai retornar é o link para a página de autenticação pq o mapas redireciona aquela requisição então o usuário tem a experiência de clicar em um link e cair na página de autenticação mesmo já estando logado e ainda por cima perde o redirect to então ele simplesmente não vai para o link que deveria ir isso foi percebido na planilha de inscritos em uma chamada que traz um link para a inscrição porém esse link fica inacessível a partir do excel soluções possíveis na planilha exportada não colocar o link em uma tag a clicável colocar o link todo em uma coluna que a pessoa tenha q copiar e colar no navegador colocar uma verificação de user agent no mapas que verifica se a requisição tá vindo do office e aí não redireciona ele imediatamente pra autenticação de maneira que ele leve o usuário pro lugar certo essa abordagem está descrita aqui o caminho é mais simples o caminho tem que avaliar
| 1
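The second fix proposed in the record above (following the linked `fix_microsoft_links` approach) detects Office's cookie-less link-validation request by User-Agent and answers it with a harmless 200 instead of the authentication redirect, so the browser request that follows keeps its session. A minimal sketch of the detection step only; the User-Agent substrings are assumptions modeled loosely on that gem, not a verified list:

```python
# Office products probe a clicked link with their own HTTP client (without the
# browser's cookies) before handing it to the browser.  Detecting the probe by
# User-Agent lets the server return an empty 200 instead of redirecting it to
# the login page.  The marker substrings below are illustrative assumptions.
OFFICE_UA_MARKERS = ("word", "excel", "powerpoint", "ms-office", "microsoft office")

def is_office_probe(user_agent: str) -> bool:
    """Heuristically decide whether a request comes from Office's link checker."""
    ua = (user_agent or "").lower()
    return any(marker in ua for marker in OFFICE_UA_MARKERS)

def response_for(user_agent: str, redirect_to_login):
    """Answer Office probes with an empty 200; everything else proceeds normally."""
    if is_office_probe(user_agent):
        return (200, "")           # Office then opens the link in the real browser
    return redirect_to_login()     # normal flow, e.g. (302, "/autenticacao")
```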
|
351,585
| 25,032,990,910
|
IssuesEvent
|
2022-11-04 13:54:55
|
wagtail/wagtail
|
https://api.github.com/repos/wagtail/wagtail
|
closed
|
Changes to some instructions in the tutorial.md file
|
Documentation Outreachy
|
Link to pull request with suggested changes: #9329
|
1.0
|
Changes to some instructions in the tutorial.md file - Link to pull request with suggested changes: #9329
|
non_test
|
changes to some instructions in the tutorial md file link to pull request with suggested changes
| 0
|
575,936
| 17,066,672,365
|
IssuesEvent
|
2021-07-07 08:14:20
|
threefoldfoundation/www_threefold_io
|
https://api.github.com/repos/threefoldfoundation/www_threefold_io
|
opened
|
Add Google Console
|
priority_critical
|
Add google console html file to the threefold.io
It's still not working.
File provided in telegram.
|
1.0
|
Add Google Console - Add google console html file to the threefold.io
It's still not working.
File provided in telegram.
|
non_test
|
add google console add google console html file to the threefold io it s still not working file provided in telegram
| 0
|
173,709
| 13,438,699,230
|
IssuesEvent
|
2020-09-07 18:49:58
|
aerogear/offix
|
https://api.github.com/repos/aerogear/offix
|
opened
|
Apollo Hooks vs Offix DataStore hooks
|
alpha-testing datasync
|
## Feature Request
```
From the call:
Offix datastore hooks are confusing when you used to Apollo hooks.
Hooks will get remote changes and local changes as well but it is not simple to tell if change that was received from subscription is remote or local. It will be nice if there is a way to force wait for network connection/replication to succeed
```
|
1.0
|
Apollo Hooks vs Offix DataStore hooks - ## Feature Request
```
From the call:
Offix datastore hooks are confusing when you used to Apollo hooks.
Hooks will get remote changes and local changes as well but it is not simple to tell if change that was received from subscription is remote or local. It will be nice if there is a way to force wait for network connection/replication to succeed
```
|
test
|
apollo hooks vs offix datastore hooks feature request from the call offix datastore hooks are confusing when you used to apollo hooks hooks will get remote changes and local changes as well but it is not simple to tell if change that was received from subscription is remote or local it will be nice if there is a way to force wait for network connection replication to succeed
| 1
|
55,983
| 6,497,586,072
|
IssuesEvent
|
2017-08-22 14:29:03
|
GemsTracker/gemstracker-library
|
https://api.github.com/repos/GemsTracker/gemstracker-library
|
closed
|
Redo for a survey that is no longer valid
|
feature request testable
|
When a survey is no longer valid and you want to redo it, it would be nice if the valid until date was changed to today so you can still answer the survey without changing the date manually. This is confusing for end users.
Possible options:
- just add a descriptive warning
- change and notify
- warn and allow to change
|
1.0
|
Redo for a survey that is no longer valid - When a survey is no longer valid and you want to redo it, it would be nice if the valid until date was changed to today so you can still answer the survey without changing the date manually. This is confusing for end users.
Possible options:
- just add a descriptive warning
- change and notify
- warn and allow to change
|
test
|
redo for a survey that is no longer valid when a survey is no longer valid and you want to redo it it would be nice if the valid until date was changed to today so you can still answer the survey without changing the date manually this is confusing for end users possible options just add a descriptive warning change and notify warn and allow to change
| 1
|