Column schema (name, dtype, observed range or class count):

Unnamed: 0     int64          0 to 832k
id             float64        2.49B to 32.1B
type           stringclasses  1 value
created_at     stringlengths  19 to 19
repo           stringlengths  7 to 112
repo_url       stringlengths  36 to 141
action         stringclasses  3 values
title          stringlengths  1 to 744
labels         stringlengths  4 to 574
body           stringlengths  9 to 211k
index          stringclasses  10 values
text_combine   stringlengths  96 to 211k
label          stringclasses  2 values
text           stringlengths  96 to 188k
binary_label   int64          0 to 1
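
A quick way to sanity-check a local copy of this dataset against the schema above is to load it and compare dtypes and class counts. Below is a minimal sketch, assuming the rows were exported to a Parquet file; the file name is hypothetical, since the dump does not say where the data lives:

```python
import pandas as pd

# Hypothetical path: the dump does not name its source file.
df = pd.read_parquet("github_issues.parquet")

# Compare against the column listing above.
print(df.dtypes)                    # expect int64 / float64 / object columns
print(df["type"].unique())          # expect a single class: IssuesEvent
print(df["action"].unique())        # expect 3 classes; opened and closed appear in the rows below
print(df["label"].value_counts())   # expect 2 classes: process, non_process

# In the example rows, binary_label mirrors label:
# process -> 1, non_process -> 0.
assert ((df["label"] == "process") == (df["binary_label"] == 1)).all()
```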

Unnamed: 0: 166,673
id: 26,393,805,163
type: IssuesEvent
created_at: 2023-01-12 17:35:34
repo: coreos/afterburn
repo_url: https://api.github.com/repos/coreos/afterburn
action: closed
title: Add Daemon Mode
labels: kind/design status/on-hold
body:
# Feature Request # Create a daemon mode for afterburn for clouds where having a reactive/scheduled pattern makes sense instead of oneshot runs. ## Environment ## TBD ## Desired Feature ## Allow specific providers to have a daemon mode which runs consistently and reacts to specific changes. ## Other Information ## @darkmuggle has done some initial plumbing work.
index: 1.0
text_combine:
Add Daemon Mode - # Feature Request # Create a daemon mode for afterburn for clouds where having a reactive/scheduled pattern makes sense instead of oneshot runs. ## Environment ## TBD ## Desired Feature ## Allow specific providers to have a daemon mode which runs consistently and reacts to specific changes. ## Other Information ## @darkmuggle has done some initial plumbing work.
label: non_process
text:
add daemon mode feature request create a daemon mode for afterburn for clouds where having a reactive scheduled pattern makes sense instead of oneshot runs environment tbd desired feature allow specific providers to have a daemon mode which runs consistently and reacts to specific changes other information darkmuggle has done some initial plumbing work
binary_label: 0

Unnamed: 0: 678,246
id: 23,190,843,802
type: IssuesEvent
created_at: 2022-08-01 12:32:16
repo: SAP/xsk
repo_url: https://api.github.com/repos/SAP/xsk
action: closed
title: [IDE] Database perspective to switch to result after execution
labels: bug wontfix priority-low IDE shadow incomplete
body:
**Describe the bug** After executing an SQL statement, the `Result` view is not put in focus. > What version of the XSK are you using? 0.9.3 **To Reproduce** Steps to reproduce the behavior: 1. Go to 'Database Perspective' 2. Switch view to something else than 'Result' 3. Execute some SQL statement (ctrl+x/cmd+x) 4. See that the `Result` view is not put in focus **Expected behavior** There's indication what happened after executing the statement. The easiest would be to see the `Result view`
index: 1.0
text_combine:
[IDE] Database perspective to switch to result after execution - **Describe the bug** After executing an SQL statement, the `Result` view is not put in focus. > What version of the XSK are you using? 0.9.3 **To Reproduce** Steps to reproduce the behavior: 1. Go to 'Database Perspective' 2. Switch view to something else than 'Result' 3. Execute some SQL statement (ctrl+x/cmd+x) 4. See that the `Result` view is not put in focus **Expected behavior** There's indication what happened after executing the statement. The easiest would be to see the `Result view`
label: non_process
text:
database perspective to switch to result after execution describe the bug after executing an sql statement the result view is not put in focus what version of the xsk are you using to reproduce steps to reproduce the behavior go to database perspective switch view to something else than result execute some sql statement ctrl x cmd x see that the result view is not put in focus expected behavior there s indication what happened after executing the statement the easiest would be to see the result view
binary_label: 0

Unnamed: 0: 6,448
id: 9,546,277,562
type: IssuesEvent
created_at: 2019-05-01 19:28:03
repo: openopps/openopps-platform
repo_url: https://api.github.com/repos/openopps/openopps-platform
action: closed
title: Internship email: Application received
labels: Apply Process Approved Email Requirements Ready State Dept.
body:
Who: Student What: Email to tell the student that their application has been received Why: As a student I want to know that my application has been received A/C Subject:– Thank you for your application to the U.S. Department of State Student Internship Program (Unpaid) Trigger: Student applies to an internship opportunity Audience: Applicant/Student - Insert the following content into the U.S. Department of State Student Internship Program (Unpaid) community email template #3397 Content: Hello [User name], Thank you for submitting your application to the U.S. Department of State Student Internship Program (Unpaid) through Open Opportunities. You applied to the following internship opportunities: [Insert internship opportunity 1] (Link to opportunity detail page) [Insert internship opportunity 2] (Link to opportunity detail page) [Insert internship opportunity 3] (Link to opportunity detail page) View your application in your dashboard [insert link to student’s dashboard]. What happens next? (this is a header ) The U.S. Department of State Student Internship Program will: 1. Review all applications. (bold) They will begin to review applications after the application period closes on [insert application closing date]. 2. Set up interviews. (bold) They will contact you if you’re selected for an interview. However, some offices don’t conduct interviews. 3. Send offer emails. (bold) They will send you an email with a conditional offer, if you’re selected for an internship. What happens if I’m not selected? (this is a header) After all internships are filled, you will receive an email if you’re not selected. [Learn more about the U.S. Department of State Student Internship Program (Unpaid) selection process](https://careers.state.gov/intern/student-internships/selection-process/). Thanks, The Open Opportunities Team -The email will come from noreply@openopps.usajobs.gov  - This email will be responsive - Design teams definition of responsive : Responsive, in any context, be it email or browser, does mean responding to the viewport size (which is roughly equivalent to screen size; it's the width of the browser or email reader). These particular emails can be responsive because they include HTML, with a fallback to text for email readers that don't support HTML.
index: 1.0
text_combine:
Internship email: Application received - Who: Student What: Email to tell the student that their application has been received Why: As a student I want to know that my application has been received A/C Subject:– Thank you for your application to the U.S. Department of State Student Internship Program (Unpaid) Trigger: Student applies to an internship opportunity Audience: Applicant/Student - Insert the following content into the U.S. Department of State Student Internship Program (Unpaid) community email template #3397 Content: Hello [User name], Thank you for submitting your application to the U.S. Department of State Student Internship Program (Unpaid) through Open Opportunities. You applied to the following internship opportunities: [Insert internship opportunity 1] (Link to opportunity detail page) [Insert internship opportunity 2] (Link to opportunity detail page) [Insert internship opportunity 3] (Link to opportunity detail page) View your application in your dashboard [insert link to student’s dashboard]. What happens next? (this is a header ) The U.S. Department of State Student Internship Program will: 1. Review all applications. (bold) They will begin to review applications after the application period closes on [insert application closing date]. 2. Set up interviews. (bold) They will contact you if you’re selected for an interview. However, some offices don’t conduct interviews. 3. Send offer emails. (bold) They will send you an email with a conditional offer, if you’re selected for an internship. What happens if I’m not selected? (this is a header) After all internships are filled, you will receive an email if you’re not selected. [Learn more about the U.S. Department of State Student Internship Program (Unpaid) selection process](https://careers.state.gov/intern/student-internships/selection-process/). Thanks, The Open Opportunities Team -The email will come from noreply@openopps.usajobs.gov  - This email will be responsive - Design teams definition of responsive : Responsive, in any context, be it email or browser, does mean responding to the viewport size (which is roughly equivalent to screen size; it's the width of the browser or email reader). These particular emails can be responsive because they include HTML, with a fallback to text for email readers that don't support HTML.
label: process
text:
internship email application received who student what email to tell the student that their application has been received why as a student i want to know that my application has been received a c subject – thank you for your application to the u s department of state student internship program unpaid trigger student applies to an internship opportunity audience applicant student insert the following content into the u s department of state student internship program unpaid community email template content hello thank you for submitting your application to the u s department of state student internship program unpaid through open opportunities you applied to the following internship opportunities link to opportunity detail page link to opportunity detail page link to opportunity detail page view your application in your dashboard what happens next this is a header the u s department of state student internship program will review all applications bold they will begin to review applications after the application period closes on set up interviews bold they will contact you if you’re selected for an interview however some offices don’t conduct interviews send offer emails bold they will send you an email with a conditional offer if you’re selected for an internship what happens if i’m not selected this is a header after all internships are filled you will receive an email if you’re not selected thanks the open opportunities team the email will come from noreply openopps usajobs gov  this email will be responsive design teams definition of responsive responsive in any context be it email or browser does mean responding to the viewport size which is roughly equivalent to screen size it s the width of the browser or email reader these particular emails can be responsive because they include html with a fallback to text for email readers that don t support html
binary_label: 1

Unnamed: 0: 23,606
id: 6,444,848,536
type: IssuesEvent
created_at: 2017-08-12 18:00:20
repo: joomla/joomla-cms
repo_url: https://api.github.com/repos/joomla/joomla-cms
action: closed
title: [3.8 Beta] Fatal error: Undefined class constant 'MAJOR_VERSION'
labels: No Code Attached Yet
body:
### Steps to reproduce the issue Open Url: .../administrator/index.php?option=com_languages&view=installed ### Expected result The view should display installed languages. ### Actual result Fatal error: Undefined class constant 'MAJOR_VERSION' in ...\administrator\components\com_languages\views\installed\tmpl\default.php on line 103 ### System information (as much as possible) ### Additional comments
index: 1.0
text_combine:
[3.8 Beta] Fatal error: Undefined class constant 'MAJOR_VERSION' - ### Steps to reproduce the issue Open Url: .../administrator/index.php?option=com_languages&view=installed ### Expected result The view should display installed languages. ### Actual result Fatal error: Undefined class constant 'MAJOR_VERSION' in ...\administrator\components\com_languages\views\installed\tmpl\default.php on line 103 ### System information (as much as possible) ### Additional comments
label: non_process
text:
fatal error undefined class constant major version steps to reproduce the issue open url administrator index php option com languages view installed expected result the view should display installed languages actual result fatal error undefined class constant major version in administrator components com languages views installed tmpl default php on line system information as much as possible additional comments
binary_label: 0

Unnamed: 0: 3,434
id: 6,533,730,484
type: IssuesEvent
created_at: 2017-08-31 07:52:48
repo: zero-os/0-stor
repo_url: https://api.github.com/repos/zero-os/0-stor
action: closed
title: Use protobuf for metadata encoding
labels: process_wontfix type_investigation
body:
**See next comment,a bug in the benchmark make the number here false** I did some comparison between capnp and protobuf v3 to see if it really make sense to use capnp as encoding system for the metadata. It appears that protobuf is more efficient in space even if we use capnp in packed mode. Regarding the speed of encoding, capnp is better if we don't packed. But once we use the packed encoding, protobuf is faster Code for the tests : https://github.com/zero-os/0-stor/blob/af3da5ce2a386cceda2f6ed1ae7a80fdd6a4e0ef/client/meta/meta_test.go Here are the numbers: ``` size capnp: 10360 size capnp-packed: 9537 size protobuf: 8382 ``` encoding speed: ``` BenchmarkEncoding/capnp-4 200000 12133 ns/op BenchmarkEncoding/capnp-packed-4 50000 26413 ns/op BenchmarkEncoding/protobuf-4 100000 16176 ns/op ``` Regarding these numbers I propose to switch from capnp to protobuf for the metadata. The code generated by protobuf is much cleaner then capnp. Also we use grpc which already leverage protobuf. So we remove a completely dependency by removing capnp
index: 1.0
text_combine:
Use protobuf for metadata encoding - **See next comment,a bug in the benchmark make the number here false** I did some comparison between capnp and protobuf v3 to see if it really make sense to use capnp as encoding system for the metadata. It appears that protobuf is more efficient in space even if we use capnp in packed mode. Regarding the speed of encoding, capnp is better if we don't packed. But once we use the packed encoding, protobuf is faster Code for the tests : https://github.com/zero-os/0-stor/blob/af3da5ce2a386cceda2f6ed1ae7a80fdd6a4e0ef/client/meta/meta_test.go Here are the numbers: ``` size capnp: 10360 size capnp-packed: 9537 size protobuf: 8382 ``` encoding speed: ``` BenchmarkEncoding/capnp-4 200000 12133 ns/op BenchmarkEncoding/capnp-packed-4 50000 26413 ns/op BenchmarkEncoding/protobuf-4 100000 16176 ns/op ``` Regarding these numbers I propose to switch from capnp to protobuf for the metadata. The code generated by protobuf is much cleaner then capnp. Also we use grpc which already leverage protobuf. So we remove a completely dependency by removing capnp
label: process
text:
use protobuf for metadata encoding see next comment a bug in the benchmark make the number here false i did some comparison between capnp and protobuf to see if it really make sense to use capnp as encoding system for the metadata it appears that protobuf is more efficient in space even if we use capnp in packed mode regarding the speed of encoding capnp is better if we don t packed but once we use the packed encoding protobuf is faster code for the tests here are the numbers size capnp size capnp packed size protobuf encoding speed benchmarkencoding capnp ns op benchmarkencoding capnp packed ns op benchmarkencoding protobuf ns op regarding these numbers i propose to switch from capnp to protobuf for the metadata the code generated by protobuf is much cleaner then capnp also we use grpc which already leverage protobuf so we remove a completely dependency by removing capnp
binary_label: 1

Unnamed: 0: 445
id: 2,873,899,059
type: IssuesEvent
created_at: 2015-06-08 19:33:32
repo: K0zka/kerub
repo_url: https://api.github.com/repos/K0zka/kerub
action: closed
title: tolerate the dmiencode output in bochs emulator
labels: bug component:data processing priority: normal
body:
Not a tipical production environment, but it is annoying at testing, no harware information is extracted
index: 1.0
text_combine:
tolerate the dmiencode output in bochs emulator - Not a tipical production environment, but it is annoying at testing, no harware information is extracted
label: process
text:
tolerate the dmiencode output in bochs emulator not a tipical production environment but it is annoying at testing no harware information is extracted
binary_label: 1

Unnamed: 0: 68,615
id: 3,291,552,511
type: IssuesEvent
created_at: 2015-10-30 09:48:24
repo: YetiForceCompany/YetiForceCRM
repo_url: https://api.github.com/repos/YetiForceCompany/YetiForceCRM
action: closed
title: [question] funtion of Convert lead mapping
labels: Label::Logic Priority::#1 Low Type::Discussion
body:
What is the idea with converting leads to contacts -function, when it obviously is not working? Or have I not understood the clue. ![conv](https://cloud.githubusercontent.com/assets/10330264/9326841/78a35936-45a5-11e5-977e-d46781a31983.png)
index: 1.0
text_combine:
[question] funtion of Convert lead mapping - What is the idea with converting leads to contacts -function, when it obviously is not working? Or have I not understood the clue. ![conv](https://cloud.githubusercontent.com/assets/10330264/9326841/78a35936-45a5-11e5-977e-d46781a31983.png)
label: non_process
text:
funtion of convert lead mapping what is the idea with converting leads to contacts function when it obviously is not working or have i not understood the clue
binary_label: 0

Unnamed: 0: 146,166
id: 11,728,314,938
type: IssuesEvent
created_at: 2020-03-10 17:20:34
repo: department-of-veterans-affairs/caseflow
repo_url: https://api.github.com/repos/department-of-veterans-affairs/caseflow
action: opened
title: [Flaky Test] TaskPager.filtered_tasks when there are a variety of task assigned to the current organization when filter includes TranslationTasks and FoiaTasks returns all translation and FOIA tasks assigned to the current organization
labels: Eng: Flaky Test Type: Tech-Improvement
body:
## Background/context/resources ``` TaskPager.filtered_tasks when there are a variety of task assigned to the current organization when filter includes TranslationTasks and FoiaTasks returns all translation and FOIA tasks assigned to the current organization - spec.models.task_pager_spec spec/models/task_pager_spec.rb Failure/Error: expect(subject.map(&:type).uniq).to match_array([TranslationTask.name, FoiaTask.name]) expected collection contained: ["FoiaTask", "TranslationTask"] actual collection contained: [] the missing elements were: ["FoiaTask", "TranslationTask"] ./spec/models/task_pager_spec.rb:436:in `block (5 levels) in <top (required)>' ``` - Circle CI Error: https://app.circleci.com/pipelines/github/department-of-veterans-affairs/caseflow/28547/workflows/0a6bd38c-a803-469b-8ce2-3b98c4cdfd96/jobs/105616/tests - Has the test already been skipped in the code? - [ ] Skipped - [x] Not Skipped - Related Flakes + <!-- list any suspected related flaky test GH issues / CI links --> ## Approach <!-- Has our agreed upon default approach for tackling flaky tests. --> Time box this investigation and fix. Remember that if a test has been skipped for a decent amount of time, it may no longer map to the exact code. If you reach the end of your time box and don't feel like the solution is in sight: - [ ] document the work you've done, including dead ends and research - [ ] skip the test in the code - [ ] file a follow on ticket - [ ] close this issue
index: 1.0
text_combine:
[Flaky Test] TaskPager.filtered_tasks when there are a variety of task assigned to the current organization when filter includes TranslationTasks and FoiaTasks returns all translation and FOIA tasks assigned to the current organization - ## Background/context/resources ``` TaskPager.filtered_tasks when there are a variety of task assigned to the current organization when filter includes TranslationTasks and FoiaTasks returns all translation and FOIA tasks assigned to the current organization - spec.models.task_pager_spec spec/models/task_pager_spec.rb Failure/Error: expect(subject.map(&:type).uniq).to match_array([TranslationTask.name, FoiaTask.name]) expected collection contained: ["FoiaTask", "TranslationTask"] actual collection contained: [] the missing elements were: ["FoiaTask", "TranslationTask"] ./spec/models/task_pager_spec.rb:436:in `block (5 levels) in <top (required)>' ``` - Circle CI Error: https://app.circleci.com/pipelines/github/department-of-veterans-affairs/caseflow/28547/workflows/0a6bd38c-a803-469b-8ce2-3b98c4cdfd96/jobs/105616/tests - Has the test already been skipped in the code? - [ ] Skipped - [x] Not Skipped - Related Flakes + <!-- list any suspected related flaky test GH issues / CI links --> ## Approach <!-- Has our agreed upon default approach for tackling flaky tests. --> Time box this investigation and fix. Remember that if a test has been skipped for a decent amount of time, it may no longer map to the exact code. If you reach the end of your time box and don't feel like the solution is in sight: - [ ] document the work you've done, including dead ends and research - [ ] skip the test in the code - [ ] file a follow on ticket - [ ] close this issue
label: non_process
text:
taskpager filtered tasks when there are a variety of task assigned to the current organization when filter includes translationtasks and foiatasks returns all translation and foia tasks assigned to the current organization background context resources taskpager filtered tasks when there are a variety of task assigned to the current organization when filter includes translationtasks and foiatasks returns all translation and foia tasks assigned to the current organization spec models task pager spec spec models task pager spec rb failure error expect subject map type uniq to match array expected collection contained actual collection contained the missing elements were spec models task pager spec rb in block levels in circle ci error has the test already been skipped in the code skipped not skipped related flakes approach time box this investigation and fix remember that if a test has been skipped for a decent amount of time it may no longer map to the exact code if you reach the end of your time box and don t feel like the solution is in sight document the work you ve done including dead ends and research skip the test in the code file a follow on ticket close this issue
binary_label: 0

Unnamed: 0: 10,682
id: 13,463,754,550
type: IssuesEvent
created_at: 2020-09-09 18:06:43
repo: MicrosoftDocs/azure-devops-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
action: closed
title: lower() expression does not work correctly with predefined variables
labels: Pri2 devops-cicd-process/tech devops/prod product-feedback
body:
According to https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-variables-by-using-expressions, it should be possible to pass predefined variables to an expression for evaluation. This does not appear to work correctly for the `lower()` expression. Given the following trivial pipeline: ``` trigger: none pool: vmImage: 'windows-latest' variables: a_static_variable: ALL_CAPS foo: $[lower('STRINGLITERAL')] bar: $[lower(variables['a_static_variable'])] quux: $[lower(variables['Build.Repository.Name'])] frink: $(Build.Repository.Name) bong: $[lower(variables['frink'])] steps: - script: | set echo foo is $(foo) echo bar is $(bar) echo quux is $(quux) echo frink is $(frink) echo bong is $(bong) ``` The following output results: ``` BAR=all_caps BONG=Completely.Useless.Test.Repository FOO=stringliteral FRINK=Completely.Useless.Test.Repository QUUX= foo is stringliteral bar is all_caps quux is frink is Completely.Useless.Test.Repository bong is Completely.Useless.Test.Repository ``` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6 * Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18 * Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops) * Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
index: 1.0
text_combine:
lower() expression does not work correctly with predefined variables - According to https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-variables-by-using-expressions, it should be possible to pass predefined variables to an expression for evaluation. This does not appear to work correctly for the `lower()` expression. Given the following trivial pipeline: ``` trigger: none pool: vmImage: 'windows-latest' variables: a_static_variable: ALL_CAPS foo: $[lower('STRINGLITERAL')] bar: $[lower(variables['a_static_variable'])] quux: $[lower(variables['Build.Repository.Name'])] frink: $(Build.Repository.Name) bong: $[lower(variables['frink'])] steps: - script: | set echo foo is $(foo) echo bar is $(bar) echo quux is $(quux) echo frink is $(frink) echo bong is $(bong) ``` The following output results: ``` BAR=all_caps BONG=Completely.Useless.Test.Repository FOO=stringliteral FRINK=Completely.Useless.Test.Repository QUUX= foo is stringliteral bar is all_caps quux is frink is Completely.Useless.Test.Repository bong is Completely.Useless.Test.Repository ``` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6 * Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18 * Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops) * Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
label: process
text:
lower expression does not work correctly with predefined variables according to it should be possible to pass predefined variables to an expression for evaluation this does not appear to work correctly for the lower expression given the following trivial pipeline trigger none pool vmimage windows latest variables a static variable all caps foo bar quux frink build repository name bong steps script set echo foo is foo echo bar is bar echo quux is quux echo frink is frink echo bong is bong the following output results bar all caps bong completely useless test repository foo stringliteral frink completely useless test repository quux foo is stringliteral bar is all caps quux is frink is completely useless test repository bong is completely useless test repository document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
binary_label: 1

Unnamed: 0: 19,368
id: 25,497,069,672
type: IssuesEvent
created_at: 2022-11-27 20:09:52
repo: david-palm/DeepSudoku
repo_url: https://api.github.com/repos/david-palm/DeepSudoku
action: closed
title: Find intersections
labels: feature integral feature image processing
body:
The coordinates of the intersections of the lines are used to cut the individual cells.
index: 1.0
text_combine:
Find intersections - The coordinates of the intersections of the lines are used to cut the individual cells.
label: process
text:
find intersections the coordinates of the intersections of the lines are used to cut the individual cells
binary_label: 1

Unnamed: 0: 21,526
id: 29,809,624,350
type: IssuesEvent
created_at: 2023-06-16 14:10:24
repo: UnitTestBot/UTBotJava
repo_url: https://api.github.com/repos/UnitTestBot/UTBotJava
action: opened
title: No Docker Compose file found exception in Spring Unit tests generation on macOs
labels: ctg-bug comp-instrumented-process comp-spring
body:
**Description** `IllegalStateException: No Docker Compose file found` in instrumented process logs from SpringAnalyzerProcess. Spring Unit tests were being generated. **To Reproduce** 1. Install [UnitTestBot plugin built from main](https://github.com/UnitTestBot/UTBotJava/actions/runs/5277843850) in IntelliJ IDEA 2. Open spring-petclinic project 3. Set JDK 17 4. Generate tests for Owner entity-class: select `PetClinicApplication`, `Unit tests` and leave defaults for all other settings. **Expected behavior** No instrumented process failures are expected. **Actual behavior** `IllegalStateException: No Docker Compose file found` in instrumented process logs from SpringAnalyzerProcess. **Screenshots, logs** ~~~java ///region Test suites for executable org.springframework.samples.petclinic.owner.Owner.toString ///region Errors report for toString public void testToString_errors() { // Couldn't generate some tests. List of errors: // // 1 occurrences of: // <Throwable with empty message> } ///endregion ///endregion ~~~ ~~~java 14:34:53.154 | INFO | EngineProcessMain | ----------------------------------------------------------------------- 14:34:53.157 | INFO | EngineProcessMain | -------------------NEW ENGINE PROCESS STARTED-------------------------- 14:34:53.157 | INFO | EngineProcessMain | ----------------------------------------------------------------------- 14:34:55.260 | INFO | SpringAnalyzerProcess | Spring Analyzer process started with PID = 87409 14:34:56.247 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ----------------------------------------------------------------------- 14:34:56.248 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ------------------NEW SPRING ANALYZER PROCESS STARTED------------------ 14:34:56.248 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ----------------------------------------------------------------------- 14:34:56.255 | INFO | SpringAnalyzerProcess | RdCategory: SourceFinder | Using java Spring configuration 14:34:56.258 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Current Java version is: 17.0.6 14:34:56.260 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Current Spring version is: 6.0.9 14:34:56.261 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Current Spring Boot version is: 3.1.0 14:34:56.301 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Instantiating with org.utbot.spring.instantiator.SpringBootApplicationInstantiator@1d8a8c80 14:34:58.175 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplication | Starting application using Java 17.0.6 with PID 87409 (started by alenalisevych in /private/var/folders/3b/hdmt5d356wb48ryz0tg9b13c0000gn/T/UTBot/spring-analyzer2454592816807074907) 14:34:58.176 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplication | The following 1 profile is active: "default" 14:34:58.930 | ERROR | SpringAnalyzerProcess | RdCategory: SpringApplication | Application run failed | java.lang.IllegalStateException: No Docker Compose file found in directory '/private/var/folders/3b/hdmt5d356wb48ryz0tg9b13c0000gn/T/UTBot/spring-analyzer2454592816807074907/.' 
at org.springframework.util.Assert.state(Assert.java:97) at org.springframework.boot.docker.compose.lifecycle.DockerComposeLifecycleManager.getComposeFile(DockerComposeLifecycleManager.java:135) at org.springframework.boot.docker.compose.lifecycle.DockerComposeLifecycleManager.start(DockerComposeLifecycleManager.java:103) at org.springframework.boot.docker.compose.lifecycle.DockerComposeListener.onApplicationEvent(DockerComposeListener.java:53) at org.springframework.boot.docker.compose.lifecycle.DockerComposeListener.onApplicationEvent(DockerComposeListener.java:35) ~~~ **Environment** OS - macOS Ventura 13.2.1 (22D68) IntelliJ IDEA version - 2023.1.2 CE Project - spring-petclinic JDK - 17
index: 1.0
text_combine:
No Docker Compose file found exception in Spring Unit tests generation on macOs - **Description** `IllegalStateException: No Docker Compose file found` in instrumented process logs from SpringAnalyzerProcess. Spring Unit tests were being generated. **To Reproduce** 1. Install [UnitTestBot plugin built from main](https://github.com/UnitTestBot/UTBotJava/actions/runs/5277843850) in IntelliJ IDEA 2. Open spring-petclinic project 3. Set JDK 17 4. Generate tests for Owner entity-class: select `PetClinicApplication`, `Unit tests` and leave defaults for all other settings. **Expected behavior** No instrumented process failures are expected. **Actual behavior** `IllegalStateException: No Docker Compose file found` in instrumented process logs from SpringAnalyzerProcess. **Screenshots, logs** ~~~java ///region Test suites for executable org.springframework.samples.petclinic.owner.Owner.toString ///region Errors report for toString public void testToString_errors() { // Couldn't generate some tests. List of errors: // // 1 occurrences of: // <Throwable with empty message> } ///endregion ///endregion ~~~ ~~~java 14:34:53.154 | INFO | EngineProcessMain | ----------------------------------------------------------------------- 14:34:53.157 | INFO | EngineProcessMain | -------------------NEW ENGINE PROCESS STARTED-------------------------- 14:34:53.157 | INFO | EngineProcessMain | ----------------------------------------------------------------------- 14:34:55.260 | INFO | SpringAnalyzerProcess | Spring Analyzer process started with PID = 87409 14:34:56.247 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ----------------------------------------------------------------------- 14:34:56.248 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ------------------NEW SPRING ANALYZER PROCESS STARTED------------------ 14:34:56.248 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ----------------------------------------------------------------------- 14:34:56.255 | INFO | SpringAnalyzerProcess | RdCategory: SourceFinder | Using java Spring configuration 14:34:56.258 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Current Java version is: 17.0.6 14:34:56.260 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Current Spring version is: 6.0.9 14:34:56.261 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Current Spring Boot version is: 3.1.0 14:34:56.301 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplicationInstantiatorFacade | Instantiating with org.utbot.spring.instantiator.SpringBootApplicationInstantiator@1d8a8c80 14:34:58.175 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplication | Starting application using Java 17.0.6 with PID 87409 (started by alenalisevych in /private/var/folders/3b/hdmt5d356wb48ryz0tg9b13c0000gn/T/UTBot/spring-analyzer2454592816807074907) 14:34:58.176 | INFO | SpringAnalyzerProcess | RdCategory: SpringApplication | The following 1 profile is active: "default" 14:34:58.930 | ERROR | SpringAnalyzerProcess | RdCategory: SpringApplication | Application run failed | java.lang.IllegalStateException: No Docker Compose file found in directory '/private/var/folders/3b/hdmt5d356wb48ryz0tg9b13c0000gn/T/UTBot/spring-analyzer2454592816807074907/.' 
at org.springframework.util.Assert.state(Assert.java:97) at org.springframework.boot.docker.compose.lifecycle.DockerComposeLifecycleManager.getComposeFile(DockerComposeLifecycleManager.java:135) at org.springframework.boot.docker.compose.lifecycle.DockerComposeLifecycleManager.start(DockerComposeLifecycleManager.java:103) at org.springframework.boot.docker.compose.lifecycle.DockerComposeListener.onApplicationEvent(DockerComposeListener.java:53) at org.springframework.boot.docker.compose.lifecycle.DockerComposeListener.onApplicationEvent(DockerComposeListener.java:35) ~~~ **Environment** OS - macOS Ventura 13.2.1 (22D68) IntelliJ IDEA version - 2023.1.2 CE Project - spring-petclinic JDK - 17
label: process
text:
no docker compose file found exception in spring unit tests generation on macos description illegalstateexception no docker compose file found in instrumented process logs from springanalyzerprocess spring unit tests were being generated to reproduce install in intellij idea open spring petclinic project set jdk generate tests for owner entity class select petclinicapplication unit tests and leave defaults for all other settings expected behavior no instrumented process failures are expected actual behavior illegalstateexception no docker compose file found in instrumented process logs from springanalyzerprocess screenshots logs java region test suites for executable org springframework samples petclinic owner owner tostring region errors report for tostring public void testtostring errors couldn t generate some tests list of errors occurrences of endregion endregion java info engineprocessmain info engineprocessmain new engine process started info engineprocessmain info springanalyzerprocess spring analyzer process started with pid info springanalyzerprocess rdcategory springanalyzerprocessmain info springanalyzerprocess rdcategory springanalyzerprocessmain new spring analyzer process started info springanalyzerprocess rdcategory springanalyzerprocessmain info springanalyzerprocess rdcategory sourcefinder using java spring configuration info springanalyzerprocess rdcategory springapplicationinstantiatorfacade current java version is info springanalyzerprocess rdcategory springapplicationinstantiatorfacade current spring version is info springanalyzerprocess rdcategory springapplicationinstantiatorfacade current spring boot version is info springanalyzerprocess rdcategory springapplicationinstantiatorfacade instantiating with org utbot spring instantiator springbootapplicationinstantiator info springanalyzerprocess rdcategory springapplication starting application using java with pid started by alenalisevych in private var folders t utbot spring info springanalyzerprocess rdcategory springapplication the following profile is active default error springanalyzerprocess rdcategory springapplication application run failed java lang illegalstateexception no docker compose file found in directory private var folders t utbot spring at org springframework util assert state assert java at org springframework boot docker compose lifecycle dockercomposelifecyclemanager getcomposefile dockercomposelifecyclemanager java at org springframework boot docker compose lifecycle dockercomposelifecyclemanager start dockercomposelifecyclemanager java at org springframework boot docker compose lifecycle dockercomposelistener onapplicationevent dockercomposelistener java at org springframework boot docker compose lifecycle dockercomposelistener onapplicationevent dockercomposelistener java environment os macos ventura intellij idea version ce project spring petclinic jdk
binary_label: 1

Unnamed: 0: 10,730
id: 12,692,984,214
type: IssuesEvent
created_at: 2020-06-22 01:33:22
repo: storybookjs/storybook
repo_url: https://api.github.com/repos/storybookjs/storybook
action: closed
title: Addon Docs and Controls not working with Yarn 2.
labels: addon: controls addon: docs compatibility with other tools question / support yarn / npm
body:
**Describe the bug** Hi. I have a monorepo setup with Yarn 2 and the latest storybook and addons. The Doc page shows an error when I navigate to it, and the controls section shows not setup, despite following the examples shown in the Docs. **To Reproduce** Steps to reproduce the behavior: 1. Create New Repo with Yarn 2 2. Try to setup Addon Docs and Controls 3. Observe errors in screenshots below **Expected behavior** Docs page and Controls tabs work as specified in docs **Screenshots** <img width="1056" alt="Screen Shot 2020-06-18 at 5 59 01 PM" src="https://user-images.githubusercontent.com/1761197/85085862-f361cf00-b18d-11ea-99ba-e7210115ebd5.png"> <img width="898" alt="Screen Shot 2020-06-18 at 5 58 00 PM" src="https://user-images.githubusercontent.com/1761197/85085865-f5c42900-b18d-11ea-901e-903500e45fb5.png"> **Code snippets** If applicable, add code samples to help explain your problem. ***main.js file*** ```const { readFileSync } = require('fs'); const { resolve, join } = require('path'); const lessToJS = require('less-vars-to-js'); const UNIVERSAL_PATH = join(__dirname, '../../universal/src'); if (typeof require !== 'undefined') { // eslint-disable-next-line @typescript-eslint/no-unused-vars require.extensions['.less'] = (file) => {}; } const themeVariables = lessToJS( readFileSync( resolve(__dirname, `${UNIVERSAL_PATH}/theme/antdThemeVariables.less`), 'utf8' ) ); module.exports = { // stories: [`${SRC_PATH}/**/*.stories.tsx`], stories: ['../src/**/*.stories.tsx'], addons: [ '@storybook/addon-links', '@storybook/addon-viewport', '@storybook/addon-backgrounds', { name: '@storybook/addon-docs', options: { configureJSX: true, }, }, '@storybook/addon-controls', '@storybook/addon-actions', ], webpackFinal: async (config, { configType }) => { // `configType` has a value of 'DEVELOPMENT' or 'PRODUCTION' // You can change the configuration based on that. // 'PRODUCTION' is used when building the static version of storybook. // Make whatever fine-grained changes you need const isDEV = configType === 'DEVELOPMENT'; const isPROD = configType !== 'DEVELOPMENT'; // config.resolve.extensions.push('.ts', '.tsx'); config.module.rules.push({ test: /\.css$/, use: [ { loader: 'postcss-loader', options: { plugins: process.env.NODE_ENV === 'production' ? 
[ require('postcss-import'), // use with https://github.com/postcss/postcss-url in prod require('tailwindcss')('../universal/tailwind.config.js'), require('postcss-flexbugs-fixes'), require('postcss-preset-env')({ autoprefixer: { flexbox: 'no-2009', grid: 'autoplace', }, stage: 3, features: { 'custom-properties': false, }, }), ] : [ require('tailwindcss')('../universal/tailwind.config.js'), require('postcss-preset-env'), ], }, }, ], include: [ resolve(__dirname, '../'), resolve(__dirname, `${UNIVERSAL_PATH}/`), ], }); config.module.rules.push({ test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'], include: [ resolve(__dirname, '../'), resolve(__dirname, `${UNIVERSAL_PATH}/styles/main.scss`), ], }); config.module.rules.push({ test: /\.less$/, use: [ { loader: 'style-loader', }, { loader: 'css-loader', // translates CSS into CommonJS }, { loader: 'less-loader', // compiles Less to CSS options: { javascriptEnabled: true, modifyVars: themeVariables, }, }, ], }); // Return the altered config return config; }, };``` ***preview.js file*** ```export const parameters = { actions: { argTypesRegex: '^on.*' }, };``` **System:** Using Yarn 2, Node 12, Storybook versions: ``` "@storybook/addon-actions": "^6.0.0-beta.31", "@storybook/addon-backgrounds": "^6.0.0-beta.31", "@storybook/addon-controls": "^6.0.0-beta.31", "@storybook/addon-docs": "^6.0.0-beta.31", "@storybook/addon-essentials": "^6.0.0-beta.31", "@storybook/addon-knobs": "^6.0.0-beta.31", "@storybook/addon-links": "^6.0.0-beta.31", "@storybook/addon-storyshots": "^6.0.0-beta.31", "@storybook/addon-viewport": "^6.0.0-beta.31", "@storybook/addons": "^6.0.0-beta.31", "@storybook/client-api": "^6.0.0-beta.31", "@storybook/client-logger": "^6.0.0-beta.31", "@storybook/react": "^6.0.0-beta.31", ```
index: True
text_combine:
Addon Docs and Controls not working with Yarn 2. - **Describe the bug** Hi. I have a monorepo setup with Yarn 2 and the latest storybook and addons. The Doc page shows an error when I navigate to it, and the controls section shows not setup, despite following the examples shown in the Docs. **To Reproduce** Steps to reproduce the behavior: 1. Create New Repo with Yarn 2 2. Try to setup Addon Docs and Controls 3. Observe errors in screenshots below **Expected behavior** Docs page and Controls tabs work as specified in docs **Screenshots** <img width="1056" alt="Screen Shot 2020-06-18 at 5 59 01 PM" src="https://user-images.githubusercontent.com/1761197/85085862-f361cf00-b18d-11ea-99ba-e7210115ebd5.png"> <img width="898" alt="Screen Shot 2020-06-18 at 5 58 00 PM" src="https://user-images.githubusercontent.com/1761197/85085865-f5c42900-b18d-11ea-901e-903500e45fb5.png"> **Code snippets** If applicable, add code samples to help explain your problem. ***main.js file*** ```const { readFileSync } = require('fs'); const { resolve, join } = require('path'); const lessToJS = require('less-vars-to-js'); const UNIVERSAL_PATH = join(__dirname, '../../universal/src'); if (typeof require !== 'undefined') { // eslint-disable-next-line @typescript-eslint/no-unused-vars require.extensions['.less'] = (file) => {}; } const themeVariables = lessToJS( readFileSync( resolve(__dirname, `${UNIVERSAL_PATH}/theme/antdThemeVariables.less`), 'utf8' ) ); module.exports = { // stories: [`${SRC_PATH}/**/*.stories.tsx`], stories: ['../src/**/*.stories.tsx'], addons: [ '@storybook/addon-links', '@storybook/addon-viewport', '@storybook/addon-backgrounds', { name: '@storybook/addon-docs', options: { configureJSX: true, }, }, '@storybook/addon-controls', '@storybook/addon-actions', ], webpackFinal: async (config, { configType }) => { // `configType` has a value of 'DEVELOPMENT' or 'PRODUCTION' // You can change the configuration based on that. // 'PRODUCTION' is used when building the static version of storybook. // Make whatever fine-grained changes you need const isDEV = configType === 'DEVELOPMENT'; const isPROD = configType !== 'DEVELOPMENT'; // config.resolve.extensions.push('.ts', '.tsx'); config.module.rules.push({ test: /\.css$/, use: [ { loader: 'postcss-loader', options: { plugins: process.env.NODE_ENV === 'production' ? 
[ require('postcss-import'), // use with https://github.com/postcss/postcss-url in prod require('tailwindcss')('../universal/tailwind.config.js'), require('postcss-flexbugs-fixes'), require('postcss-preset-env')({ autoprefixer: { flexbox: 'no-2009', grid: 'autoplace', }, stage: 3, features: { 'custom-properties': false, }, }), ] : [ require('tailwindcss')('../universal/tailwind.config.js'), require('postcss-preset-env'), ], }, }, ], include: [ resolve(__dirname, '../'), resolve(__dirname, `${UNIVERSAL_PATH}/`), ], }); config.module.rules.push({ test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'], include: [ resolve(__dirname, '../'), resolve(__dirname, `${UNIVERSAL_PATH}/styles/main.scss`), ], }); config.module.rules.push({ test: /\.less$/, use: [ { loader: 'style-loader', }, { loader: 'css-loader', // translates CSS into CommonJS }, { loader: 'less-loader', // compiles Less to CSS options: { javascriptEnabled: true, modifyVars: themeVariables, }, }, ], }); // Return the altered config return config; }, };``` ***preview.js file*** ```export const parameters = { actions: { argTypesRegex: '^on.*' }, };``` **System:** Using Yarn 2, Node 12, Storybook versions: ``` "@storybook/addon-actions": "^6.0.0-beta.31", "@storybook/addon-backgrounds": "^6.0.0-beta.31", "@storybook/addon-controls": "^6.0.0-beta.31", "@storybook/addon-docs": "^6.0.0-beta.31", "@storybook/addon-essentials": "^6.0.0-beta.31", "@storybook/addon-knobs": "^6.0.0-beta.31", "@storybook/addon-links": "^6.0.0-beta.31", "@storybook/addon-storyshots": "^6.0.0-beta.31", "@storybook/addon-viewport": "^6.0.0-beta.31", "@storybook/addons": "^6.0.0-beta.31", "@storybook/client-api": "^6.0.0-beta.31", "@storybook/client-logger": "^6.0.0-beta.31", "@storybook/react": "^6.0.0-beta.31", ```
label: non_process
text:
addon docs and controls not working with yarn describe the bug hi i have a monorepo setup with yarn and the latest storybook and addons the doc page shows an error when i navigate to it and the controls section shows not setup despite following the examples shown in the docs to reproduce steps to reproduce the behavior create new repo with yarn try to setup addon docs and controls observe errors in screenshots below expected behavior docs page and controls tabs work as specified in docs screenshots img width alt screen shot at pm src img width alt screen shot at pm src code snippets if applicable add code samples to help explain your problem main js file const readfilesync require fs const resolve join require path const lesstojs require less vars to js const universal path join dirname universal src if typeof require undefined eslint disable next line typescript eslint no unused vars require extensions file const themevariables lesstojs readfilesync resolve dirname universal path theme antdthemevariables less module exports stories stories addons storybook addon links storybook addon viewport storybook addon backgrounds name storybook addon docs options configurejsx true storybook addon controls storybook addon actions webpackfinal async config configtype configtype has a value of development or production you can change the configuration based on that production is used when building the static version of storybook make whatever fine grained changes you need const isdev configtype development const isprod configtype development config resolve extensions push ts tsx config module rules push test css use loader postcss loader options plugins process env node env production require postcss import use with in prod require tailwindcss universal tailwind config js require postcss flexbugs fixes require postcss preset env autoprefixer flexbox no grid autoplace stage features custom properties false require tailwindcss universal tailwind config js require postcss preset env include resolve dirname resolve dirname universal path config module rules push test scss use include resolve dirname resolve dirname universal path styles main scss config module rules push test less use loader style loader loader css loader translates css into commonjs loader less loader compiles less to css options javascriptenabled true modifyvars themevariables return the altered config return config preview js file export const parameters actions argtypesregex on system using yarn node storybook versions storybook addon actions beta storybook addon backgrounds beta storybook addon controls beta storybook addon docs beta storybook addon essentials beta storybook addon knobs beta storybook addon links beta storybook addon storyshots beta storybook addon viewport beta storybook addons beta storybook client api beta storybook client logger beta storybook react beta
binary_label: 0

Unnamed: 0: 16,864
id: 23,214,620,439
type: IssuesEvent
created_at: 2022-08-02 13:10:25
repo: isXander/Debugify
repo_url: https://api.github.com/repos/isXander/Debugify
action: closed
title: [Mod Incompatibility] [Forge 1.18.2] Pressing e with jei installed on the jei search bar force closes your inventory
labels: bug mod incompatibility
body:
Reproduce: 1. Download JEI mod 2. CLick on the jei search bar 3. type the letter E 4. observe that the inventory closes despite its search bar being selected Since JEI is pretty much in every single modpack to ever exist, I think fixing this bug would benefit a lot of players :) I've no clue which bug out of the 60 that this mod fixes is causing this, though. Sorry about that.
index: True
text_combine:
[Mod Incompatibility] [Forge 1.18.2] Pressing e with jei installed on the jei search bar force closes your inventory - Reproduce: 1. Download JEI mod 2. CLick on the jei search bar 3. type the letter E 4. observe that the inventory closes despite its search bar being selected Since JEI is pretty much in every single modpack to ever exist, I think fixing this bug would benefit a lot of players :) I've no clue which bug out of the 60 that this mod fixes is causing this, though. Sorry about that.
label: non_process
text:
pressing e with jei installed on the jei search bar force closes your inventory reproduce download jei mod click on the jei search bar type the letter e observe that the inventory closes despite its search bar being selected since jei is pretty much in every single modpack to ever exist i think fixing this bug would benefit a lot of players i ve no clue which bug out of the that this mod fixes is causing this though sorry about that
binary_label: 0

Unnamed: 0: 41,218
id: 16,669,802,196
type: IssuesEvent
created_at: 2021-06-07 09:25:51
repo: kyma-project/kyma
repo_url: https://api.github.com/repos/kyma-project/kyma
action: closed
title: Don't have permission to patch
labels: area/security area/service-catalog stale
body:
Hi Kyma team, I notice that I don't have "patch" or edit permission in a namespace I created on the "servicesinstances" object. e.g. I cannot do kubectl edit serviceinstance1 serviceinstances -n custom_namespace. User yy@zzz.com" cannot patch resource "serviceinstances" in API group "servicecatalog.k8s.io" in the namespace "custom_namespace" Regards, Sundar
index: 1.0
text_combine:
Don't have permission to patch - Hi Kyma team, I notice that I don't have "patch" or edit permission in a namespace I created on the "servicesinstances" object. e.g. I cannot do kubectl edit serviceinstance1 serviceinstances -n custom_namespace. User yy@zzz.com" cannot patch resource "serviceinstances" in API group "servicecatalog.k8s.io" in the namespace "custom_namespace" Regards, Sundar
label: non_process
text:
don t have permission to patch hi kyma team i notice that i don t have patch or edit permission in a namespace i created on the servicesinstances object e g i cannot do kubectl edit serviceinstances n custom namespace user yy zzz com cannot patch resource serviceinstances in api group servicecatalog io in the namespace custom namespace regards sundar
binary_label: 0

Unnamed: 0: 123,286
id: 17,772,202,493
type: IssuesEvent
created_at: 2021-08-30 14:51:00
repo: kapseliboi/bitmidi.com
repo_url: https://api.github.com/repos/kapseliboi/bitmidi.com
action: opened
title: CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz
labels: security vulnerability
body:
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p> <p>Path to dependency file: bitmidi.com/package.json</p> <p>Path to vulnerable library: bitmidi.com/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - core-7.12.9.tgz (Root Library) - :x: **lodash-4.17.20.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/bitmidi.com/commit/8bfbb9b5b1cc23e87e14304a565f6b849e11bdb2">8bfbb9b5b1cc23e87e14304a565f6b849e11bdb2</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash-4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p> <p>Path to dependency file: bitmidi.com/package.json</p> <p>Path to vulnerable library: bitmidi.com/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - core-7.12.9.tgz (Root Library) - :x: **lodash-4.17.20.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/bitmidi.com/commit/8bfbb9b5b1cc23e87e14304a565f6b849e11bdb2">8bfbb9b5b1cc23e87e14304a565f6b849e11bdb2</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash-4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file bitmidi com package json path to vulnerable library bitmidi com node modules lodash package json dependency hierarchy core tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
0
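The fix resolution in the record above (upgrade to lodash 4.17.21) is straightforward when lodash is a direct dependency, but here it arrives transitively through a root library. A minimal sketch of forcing the patched release with npm's `overrides` field (supported in npm 8.3+; package names other than lodash are illustrative assumptions):

```json
{
  "name": "example-app",
  "dependencies": {
    "@babel/core": "^7.12.9"
  },
  "overrides": {
    "lodash": "^4.17.21"
  }
}
```

Yarn users would use the equivalent `resolutions` field; either way, `npm ls lodash` (or `yarn why lodash`) confirms which version actually resolves.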
11,624
14,484,686,857
IssuesEvent
2020-12-10 16:39:08
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Consuming output variables of deployment jobs
Pri2 devops-cicd-process/tech devops/prod doc-bug
The syntax for consuming output variables of other jobs (`$[ dependencies.A.outputs['ProduceVar.MyVar'] ]`) does not work if `A` is a deployment job. Could you please provide some explanation on how to use such variables? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Consuming output variables of deployment jobs - The syntax for consuming output variables of other jobs (`$[ dependencies.A.outputs['ProduceVar.MyVar'] ]`) does not work if `A` is a deployment job. Could you please provide some explanation on how to use such variables? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
consuming output variables of deployment jobs the syntax for consuming output variables of other jobs does not work if a is a deployment job could you please provide some explanation on how to use such variables document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
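On the question in the record above: Azure Pipelines documents a slightly different outputs path for deployment jobs, where the job name is repeated inside the outputs key (for the `runOnce` strategy). A hedged YAML sketch, assuming a deployment job `A` with a step named `ProduceVar` that sets `MyVar` (environment name and values are illustrative):

```yaml
jobs:
- deployment: A
  environment: staging          # illustrative environment name
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "##vso[task.setvariable variable=MyVar;isOutput=true]someValue"
          name: ProduceVar
- job: B
  dependsOn: A
  variables:
    # note the extra 'A.' segment compared to ordinary jobs
    myVarFromA: $[ dependencies.A.outputs['A.ProduceVar.MyVar'] ]
  steps:
  - script: echo "$(myVarFromA)"
```

Deployment strategies other than `runOnce` reportedly use further variations of this key, so the official expressions documentation should be checked for those cases.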
128,469
17,538,853,458
IssuesEvent
2021-08-12 09:35:27
emory-libraries/blacklight-catalog
https://api.github.com/repos/emory-libraries/blacklight-catalog
opened
Placeholder - Revise wireframes for search results
UI Design
Based on outcomes from meeting with ExLibris and the proposed plan for requesting, I am putting in this placeholder ticket to revise the wireframes for the search results. It is expected that the search results will incorporate availability and requesting information, and that the wireframes will incorporate the ArticlesPlus Gem, which would be listed as a bento box after the first three results are shown, similar to the gem implemented by Stanford University ![StanfordArticlesPlusGem.png](https://images.zenhubusercontent.com/5c4a1fe1e9b5fb46bcedb9fc/bfa8a32e-5bd3-4943-a452-05d0c8ab1eb1) The details can be finalized during the next wireframes meeting. The details of the actions will be updated in this ticket along with the requirements to create a new ticket for the app development team to implement the changes.
1.0
Placeholder - Revise wireframes for search results - Based on outcomes from meeting with ExLibris and the proposed plan for requesting, I am putting in this placeholder ticket to revise the wireframes for the search results. It is expected that the search results will incorporate availability and requesting information, and that the wireframes will incorporate the ArticlesPlus Gem, which would be listed as a bento box after the first three results are shown, similar to the gem implemented by Stanford University ![StanfordArticlesPlusGem.png](https://images.zenhubusercontent.com/5c4a1fe1e9b5fb46bcedb9fc/bfa8a32e-5bd3-4943-a452-05d0c8ab1eb1) The details can be finalized during the next wireframes meeting. The details of the actions will be updated in this ticket along with the requirements to create a new ticket for the app development team to implement the changes.
non_process
placeholder revise wireframes for search results based on outcomes from meeting with exlibris and the proposed plan for requesting i am putting in this placeholder ticket to revise the wireframes for the search results it is expected that the search results will incorporate availability and requesting information and that the wireframes will incorporate the articlesplus gem which would be listed as a bento box after the first three results are shown similar to the gem implemented by stanford university the details can be finalized during the next wireframes meeting the details of the actions will be updated in this ticket along with the requirements to create a new ticket for the app development team to implement the changes
0
130,411
18,071,677,573
IssuesEvent
2021-09-21 04:08:28
SAP/fundamental-ngx
https://api.github.com/repos/SAP/fundamental-ngx
opened
Bug: On Standard List - the "Filter and Sort List" example is showing abnormal behavior.
bug core design_team
#### Is this a bug, enhancement, or feature request? Bug #### Briefly describe your proposal. ![image](https://user-images.githubusercontent.com/71797052/134109937-9bf79ef3-ff73-4e62-a739-df729b9fbf63.png) #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) v0.32.1-rc44 #### If this is a bug, please provide steps for reproducing it. Go to https://fundamental-ngx.netlify.app/#/core/list Scroll down to "Filter and Sort List" Delete one item from the list Then start selecting each item and see abnormal behavior
1.0
Bug: On Standard List - the "Filter and Sort List" example is showing abnormal behavior. - #### Is this a bug, enhancement, or feature request? Bug #### Briefly describe your proposal. ![image](https://user-images.githubusercontent.com/71797052/134109937-9bf79ef3-ff73-4e62-a739-df729b9fbf63.png) #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) v0.32.1-rc44 #### If this is a bug, please provide steps for reproducing it. Go to https://fundamental-ngx.netlify.app/#/core/list Scroll down to "Filter and Sort List" Delete one item from the list Then start selecting each item and see abnormal behavior
non_process
bug on standard list the filter and sort list example is showing abnormal behavior is this a bug enhancement or feature request bug briefly describe your proposal which versions of angular and fundamental library for angular are affected if this is a feature request use current version if this is a bug please provide steps for reproducing it go to scroll down to filter and sort list delete one item from the list then start selecting each item and see abnormal behavior
0
587,036
17,602,951,265
IssuesEvent
2021-08-17 13:56:41
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
closed
v11: roles inherited from switch groups are not visible on switches
Type: Bug Priority: High
**Describe the bug** It's not possible to see which ID has been assigned to a role when you browse a switch that inherits values from a switch group. **To Reproduce** Steps to reproduce the behavior: 1. Create a switch group and set `guest=15` 2. Create a switch that is a member of the switch group created at step 1. 3. Browse your switch (Roles tab) => You don't see the value of `15` for the `guest` role **Expected behavior** See the value of the `guest` role inherited from the switch group **Additional context** When I click on a switch, I didn't see any GET API calls to `/api/v1/config/IP_OF_SWITCH`, only OPTIONS calls. If I make a manual API call to `/api/v1/config/IP_OF_SWITCH`, the information is correctly returned by the API.
1.0
v11: roles inherited from switch groups are not visible on switches - **Describe the bug** It's not possible to see which ID has been assigned to a role when you browse a switch that inherits values from a switch group. **To Reproduce** Steps to reproduce the behavior: 1. Create a switch group and set `guest=15` 2. Create a switch that is a member of the switch group created at step 1. 3. Browse your switch (Roles tab) => You don't see the value of `15` for the `guest` role **Expected behavior** See the value of the `guest` role inherited from the switch group **Additional context** When I click on a switch, I didn't see any GET API calls to `/api/v1/config/IP_OF_SWITCH`, only OPTIONS calls. If I make a manual API call to `/api/v1/config/IP_OF_SWITCH`, the information is correctly returned by the API.
non_process
roles inherited from switch groups are not visible on switches describe the bug it s not possible to see which id has been assigned to a role when you browse a switch that inherits values from a switch group to reproduce steps to reproduce the behavior create a switch group and set guest create a switch that is a member of the switch group created at step browse your switch roles tab you don t see the value of for the guest role expected behavior see the value of the guest role inherited from the switch group additional context when i click on a switch i didn t see any get api calls to api config ip of switch only options calls if i make a manual api call to api config ip of switch the information is correctly returned by the api
0
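For reference, the manual check described in the record above can be reproduced with a plain HTTP call. This is a sketch only: the host, port, and authentication header are assumptions here (PacketFence's API documentation is authoritative), and the endpoint path is kept exactly as quoted in the report:

```bash
# hypothetical host and token; substitute the switch IP for IP_OF_SWITCH
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://pf.example.org:9999/api/v1/config/IP_OF_SWITCH"
```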
19,139
25,202,344,942
IssuesEvent
2022-11-13 09:01:37
googleapis/google-cloud-node
https://api.github.com/repos/googleapis/google-cloud-node
closed
Your .repo-metadata.json files have a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json files: Result of scan 📈: * release_level must be equal to one of the allowed values in packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json * api_shortname field missing from packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-api-apikeys/.repo-metadata.json * api_shortname field missing from packages/google-api-apikeys/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-batch/.repo-metadata.json * api_shortname field missing from packages/google-cloud-batch/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-bigquery-analyticshub/.repo-metadata.json * api_shortname field missing from packages/google-cloud-bigquery-analyticshub/.repo-metadata.json * api_shortname field missing from packages/google-cloud-bigquery-dataexchange/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-bigquery-datapolicies/.repo-metadata.json * api_shortname field missing from packages/google-cloud-bigquery-datapolicies/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-contentwarehouse/.repo-metadata.json * api_shortname field missing from packages/google-cloud-contentwarehouse/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-discoveryengine/.repo-metadata.json * api_shortname field missing from packages/google-cloud-discoveryengine/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-gkemulticloud/.repo-metadata.json * api_shortname field missing from packages/google-cloud-gkemulticloud/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-security-publicca/.repo-metadata.json * api_shortname field missing from packages/google-cloud-security-publicca/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-iam/.repo-metadata.json * api_shortname field missing from packages/google-iam/.repo-metadata.json * release_level must be equal to one of the allowed values in 
packages/google-maps-addressvalidation/.repo-metadata.json * api_shortname field missing from packages/google-maps-addressvalidation/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-maps-routing/.repo-metadata.json * api_shortname field missing from packages/google-maps-routing/.repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files: Result of scan 📈: * release_level must be equal to one of the allowed values in packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json * api_shortname field missing from packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-api-apikeys/.repo-metadata.json * api_shortname field missing from packages/google-api-apikeys/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-batch/.repo-metadata.json * api_shortname field missing from packages/google-cloud-batch/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json * api_shortname field missing from packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-bigquery-analyticshub/.repo-metadata.json * api_shortname field missing from packages/google-cloud-bigquery-analyticshub/.repo-metadata.json * api_shortname field missing from packages/google-cloud-bigquery-dataexchange/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-bigquery-datapolicies/.repo-metadata.json * api_shortname field missing from packages/google-cloud-bigquery-datapolicies/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-contentwarehouse/.repo-metadata.json * api_shortname field missing from packages/google-cloud-contentwarehouse/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-discoveryengine/.repo-metadata.json * api_shortname field missing from packages/google-cloud-discoveryengine/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-gkemulticloud/.repo-metadata.json * api_shortname field missing from packages/google-cloud-gkemulticloud/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-cloud-security-publicca/.repo-metadata.json * api_shortname field missing from packages/google-cloud-security-publicca/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-iam/.repo-metadata.json * api_shortname field missing from packages/google-iam/.repo-metadata.json * release_level must be equal 
to one of the allowed values in packages/google-maps-addressvalidation/.repo-metadata.json * api_shortname field missing from packages/google-maps-addressvalidation/.repo-metadata.json * release_level must be equal to one of the allowed values in packages/google-maps-routing/.repo-metadata.json * api_shortname field missing from packages/google-maps-routing/.repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 release level must be equal to one of the allowed values in packages gapic node templating templates bootstrap templates repo metadata json api shortname field missing from packages gapic node templating templates bootstrap templates repo metadata json release level must be equal to one of the allowed values in packages google api apikeys repo metadata json api shortname field missing from packages google api apikeys repo metadata json release level must be equal to one of the allowed values in packages google cloud batch repo metadata json api shortname field missing from packages google cloud batch repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp appconnections repo metadata json api shortname field missing from packages google cloud beyondcorp appconnections repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp appconnectors repo metadata json api shortname field missing from packages google cloud beyondcorp appconnectors repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp appgateways repo metadata json api shortname field missing from packages google cloud beyondcorp appgateways repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp clientconnectorservices repo metadata json api shortname field missing from packages google cloud beyondcorp clientconnectorservices repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp clientgateways repo metadata json api shortname field missing from packages google cloud beyondcorp clientgateways repo metadata json release level must be equal to one of the allowed values in packages google cloud bigquery analyticshub repo metadata json api shortname field missing from packages google cloud bigquery analyticshub repo metadata json api shortname field missing from packages google cloud bigquery dataexchange repo metadata json release level must be equal to one of the allowed values in packages google cloud bigquery datapolicies repo metadata json api shortname field missing from packages google cloud bigquery datapolicies repo metadata json release level must be equal to one of the allowed values in packages google cloud contentwarehouse repo metadata json api shortname field missing from packages google cloud contentwarehouse repo metadata json release level must be equal to one of the allowed values in packages google cloud discoveryengine repo metadata json api shortname field missing from packages google cloud discoveryengine repo metadata json release level must be equal to one of the allowed values in packages google cloud gkemulticloud repo metadata json api shortname field missing from packages google cloud gkemulticloud repo metadata json release level must be equal to one of the allowed values in packages google cloud security publicca repo metadata json api shortname field missing from packages google cloud security publicca repo metadata json release level must be equal to one of the allowed values in packages google iam repo metadata json api shortname field missing from packages google iam repo metadata json release level must be equal to one of the allowed values in packages google maps addressvalidation repo metadata json api 
shortname field missing from packages google maps addressvalidation repo metadata json release level must be equal to one of the allowed values in packages google maps routing repo metadata json api shortname field missing from packages google maps routing repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
1
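Both lint findings above map to single fields in each `.repo-metadata.json`. A hedged sketch of a conforming fragment for one of the flagged packages (values are illustrative; the schema linked in the record is authoritative, and at the time the allowed `release_level` values were `stable` and `preview`):

```json
{
  "name": "batch",
  "api_shortname": "batch",
  "release_level": "preview"
}
```

Per the record's own guidance, `api_shortname` should match the subdomain of the API's `hostName` (for example, `batch` for `batch.googleapis.com`).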
144
2,577,421,927
IssuesEvent
2015-02-12 16:56:13
Graylog2/graylog2-server
https://api.github.com/repos/Graylog2/graylog2-server
closed
RFC5424 structured data parsing
bug processing
Based on Jochen's recommendation https://groups.google.com/forum/#!searchin/graylog2/5424/graylog2/KO91vcZIOXo/vcEPEwAT6e4J I would like to create an issue: I have an application which creates logs in RFC5424 and sends them to my central rsyslog server. Logs are resent to graylog2 via the syslog protocol. Graylog2 runs a local TCP syslog INPUT on 10514. The rsyslog forwarding configuration looks like: ``` $template GRAYLOGRFC5424,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %STRUCTURED-DATA% %msg%\n" *.* @@172.100.100.100:10514;GRAYLOGRFC5424 ``` All logs are received by graylog2 without any issue, but they are not parsed properly. They are parsed like a basic RFC5424 message, and the structured data is handled as part of "message". The full message looks like: ``` <190>1 2015-01-06T20:56:33.287Z app-1 app - [mdc@18060 ip="::ffff:132.123.15.30" logger="{c.corp.Handler}" session="4ot7" user="cybermedi@yahoo.com" user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/7.1.2 Safari/537.85.11"] User page 13 requested ``` I receive message fields like: <b>application_name</b> app <b>facility</b> local7 <b>level</b> Info [6] <b>message</b> [mdc@18060 ip="::ffff:132.123.15.30" logger="{c.corp.Handler}" session="4ot7" user="cybermedi@yahoo.com" user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/7.1.2 Safari/537.85.11"] User page 13 requested <b>source</b> app-1 but there should be other ones based on the structured data, like <i>ip, logger, session, user, user-agent</i>, and the message should be just <i>User page 13 requested</i>
1.0
RFC5424 structured data parsing - Based on Jochen's recommendation https://groups.google.com/forum/#!searchin/graylog2/5424/graylog2/KO91vcZIOXo/vcEPEwAT6e4J I would like to create an issue: I have an application which creates logs in RFC5424 and sends them to my central rsyslog server. Logs are resent to graylog2 via the syslog protocol. Graylog2 runs a local TCP syslog INPUT on 10514. The rsyslog forwarding configuration looks like: ``` $template GRAYLOGRFC5424,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %STRUCTURED-DATA% %msg%\n" *.* @@172.100.100.100:10514;GRAYLOGRFC5424 ``` All logs are received by graylog2 without any issue, but they are not parsed properly. They are parsed like a basic RFC5424 message, and the structured data is handled as part of "message". The full message looks like: ``` <190>1 2015-01-06T20:56:33.287Z app-1 app - [mdc@18060 ip="::ffff:132.123.15.30" logger="{c.corp.Handler}" session="4ot7" user="cybermedi@yahoo.com" user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/7.1.2 Safari/537.85.11"] User page 13 requested ``` I receive message fields like: <b>application_name</b> app <b>facility</b> local7 <b>level</b> Info [6] <b>message</b> [mdc@18060 ip="::ffff:132.123.15.30" logger="{c.corp.Handler}" session="4ot7" user="cybermedi@yahoo.com" user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/7.1.2 Safari/537.85.11"] User page 13 requested <b>source</b> app-1 but there should be other ones based on the structured data, like <i>ip, logger, session, user, user-agent</i>, and the message should be just <i>User page 13 requested</i>
process
structured data parsing based on jochen s recommendation i would like to create an issue i have an application which creates logs in and sends them to my central rsyslog server logs are resent to via the syslog protocol runs a local tcp syslog input on the rsyslog forwarding configuration looks like template protocol version timestamp date hostname app name procid structured data msg n all logs are received by without any issue but they are not parsed properly they are parsed like a basic message and the structured data is handled as part of message the full message looks like app app user page requested i receive message fields like application name app facility level info message user page requested source app but there should be other ones based on the structured data like ip logger session user user agent and the message should be just user page requested
1
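The core of the report above is that the `[mdc@18060 key="value" ...]` block is RFC 5424 STRUCTURED-DATA and should become individual fields rather than staying in `message`. A minimal sketch of that extraction in Python (not Graylog's implementation; the regex is simplified and ignores escaped quotes and multiple SD elements, which a full RFC 5424 parser must handle):

```python
import re

# one SD-PARAM: a name followed by a double-quoted value
SD_PARAM = re.compile(r'([\w.-]+)="([^"]*)"')

def split_structured_data(message):
    """Split a '[sdid@pen k="v" ...] rest' prefix into (fields, remainder)."""
    match = re.match(r'\[(\S+?) ([^\]]*)\]\s*(.*)', message, re.DOTALL)
    if not match:
        return {}, message
    _sd_id, params, rest = match.groups()
    return dict(SD_PARAM.findall(params)), rest

fields, msg = split_structured_data(
    '[mdc@18060 ip="::ffff:132.123.15.30" session="4ot7"] User page 13 requested'
)
print(fields)  # {'ip': '::ffff:132.123.15.30', 'session': '4ot7'}
print(msg)     # User page 13 requested
```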
3,689
6,554,456,761
IssuesEvent
2017-09-06 06:01:07
gizmecano/opencart-2-fr
https://api.github.com/repos/gizmecano/opencart-2-fr
closed
Check compatibility with versions released in the 2.0.x series
compatibility
Check compatibility with versions released in the 2.0.x series: - [x] [2.0.2.0](https://github.com/opencart/opencart/releases/tag/2.0.2.0) - Rename `default` files (#5) - Add new files in `admin` folder (#6) - Add new files in `catalog` folder (#7) - Revise current files in `admin` folder (#8) - Revise current files in `catalog` folder (#9) - [x] [2.0.3.0](https://github.com/opencart/opencart/releases/tag/2.0.3.0) - Add 4 new files in `admin` folder (#12) - Add 1 new file in `catalog` folder (#13) - Revise 8 files in `admin` folder (#14) - Revise 1 file in `catalog` folder (#15) - [x] [2.0.3.1](https://github.com/opencart/opencart/releases/tag/2.0.3.1) - Add new file in `catalog` folder (#19) - Revise current file in `admin` folder (#20)
True
Check compatibility with versions released in the 2.0.x series - Check compatibility with versions released in the 2.0.x series: - [x] [2.0.2.0](https://github.com/opencart/opencart/releases/tag/2.0.2.0) - Rename `default` files (#5) - Add new files in `admin` folder (#6) - Add new files in `catalog` folder (#7) - Revise current files in `admin` folder (#8) - Revise current files in `catalog` folder (#9) - [x] [2.0.3.0](https://github.com/opencart/opencart/releases/tag/2.0.3.0) - Add 4 new files in `admin` folder (#12) - Add 1 new file in `catalog` folder (#13) - Revise 8 files in `admin` folder (#14) - Revise 1 file in `catalog` folder (#15) - [x] [2.0.3.1](https://github.com/opencart/opencart/releases/tag/2.0.3.1) - Add new file in `catalog` folder (#19) - Revise current file in `admin` folder (#20)
non_process
check compatibility with versions released in the x series check compatibility with versions released in the x series rename default files add new files in admin folder add new files in catalog folder revise current files in admin folder revise current files in catalog folder add new files in admin folder add new file in catalog folder revise files in admin folder revise file in catalog folder add new file in catalog folder revise current file in admin folder
0
21,617
30,022,524,875
IssuesEvent
2023-06-27 01:34:51
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Undefined linker symbols related to indirect use of objc_library targets and cc_test
P4 type: support / not a bug (process) team-Rules-CPP stale
### Description of the problem / feature request: I get linker errors (undefined symbols) when trying to indirectly test with gTest a c++ object that depends on an objc_library. Depending on the target directly, not using gTest, or excluding the Objective C details all work properly (but don't solve my specific problem). ### Feature requests: what underlying problem are you trying to solve with this feature? Build the Juce library on OSX (and test downstream targets), which uses Objective C libraries and involves a lot of complicated header configuration. More info here: https://github.com/chetgnegy/bazel_juce/blob/master/ThirdParty/Juce/juce.bzl ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. Pull this repo: https://github.com/chetgnegy/bazel_issue_9_4_2020 Verifies the issue is related to wrapped objc_libraries: blaze clean && blaze run -c opt MinimalExample:MyLibBrokenTest # Linker error. blaze clean && blaze run -c opt MinimalExample:MyLibWorkingBinary # Correctly outputs 42. blaze clean && blaze run -c opt MinimalExample:MyLibWorkingTest. # Correctly outputs 42. Substitute TARGET_UNDER_TEST in the BUILD file and run the commands again to verify that the issue is related to objc_library. ### What operating system are you running Bazel on? Catalina. This is my first time trying to build my codebase since getting a new machine. I'll note that I didn't have this issue on an older version of ~bazel 0.29.1 on OSX Mountain Lion, even though I know how useless that info might be. So it could have been introduced more recently. It could also depend on the gtest version. I no longer have that machine to do a side-by-side test. ### What's the output of `bazel info release`? release 3.5.0 though the issue exists as far back as 0.29.1 ### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel. bazel-3.5.0-installer-darwin-x86_64.sh ### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ? https://github.com/chetgnegy/bazel_issue_9_4_2020.git 691f2dc9e4dfb4eeb7527d3cd0a7967350807e73 691f2dc9e4dfb4eeb7527d3cd0a7967350807e73 ### Have you found anything relevant by searching the web? I haven't. This feels like a very specific issue. ### Any other information, logs, or outputs that you want to share? None. The repo should tell you everything I know.
1.0
Undefined linker symbols related to indirect use of objc_library targets and cc_test - ### Description of the problem / feature request: I get linker errors (undefined symbols) when trying to indirectly test with gTest a c++ object that depends on an objc_library. Depending on the target directly, not using gTest, or excluding the Objective C details all work properly (but don't solve my specific problem). ### Feature requests: what underlying problem are you trying to solve with this feature? Build the Juce library on OSX (and test downstream targets), which uses Objective C libraries and involves a lot of complicated header configuration. More info here: https://github.com/chetgnegy/bazel_juce/blob/master/ThirdParty/Juce/juce.bzl ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. Pull this repo: https://github.com/chetgnegy/bazel_issue_9_4_2020 Verifies the issue is related to wrapped objc_libraries: blaze clean && blaze run -c opt MinimalExample:MyLibBrokenTest # Linker error. blaze clean && blaze run -c opt MinimalExample:MyLibWorkingBinary # Correctly outputs 42. blaze clean && blaze run -c opt MinimalExample:MyLibWorkingTest. # Correctly outputs 42. Substitute TARGET_UNDER_TEST in the BUILD file and run the commands again to verify that the issue is related to objc_library. ### What operating system are you running Bazel on? Catalina. This is my first time trying to build my codebase since getting a new machine. I'll note that I didn't have this issue on an older version of ~bazel 0.29.1 on OSX Mountain Lion, even though I know how useless that info might be. So it could have been introduced more recently. It could also depend on the gtest version. I no longer have that machine to do a side-by-side test. ### What's the output of `bazel info release`? release 3.5.0 though the issue exists as far back as 0.29.1 ### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel. bazel-3.5.0-installer-darwin-x86_64.sh ### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ? https://github.com/chetgnegy/bazel_issue_9_4_2020.git 691f2dc9e4dfb4eeb7527d3cd0a7967350807e73 691f2dc9e4dfb4eeb7527d3cd0a7967350807e73 ### Have you found anything relevant by searching the web? I haven't. This feels like a very specific issue. ### Any other information, logs, or outputs that you want to share? None. The repo should tell you everything I know.
process
undefined linker symbols related to indirect use of objc library targets and cc test description of the problem feature request i get linker errors undefined symbols when trying to indirectly test with gtest a c object that depends on an objc library depending on the target directly not using gtest or excluding the objective c details all work properly but don t solve my specific problem feature requests what underlying problem are you trying to solve with this feature build the juce library on osx and test downstream targets which uses objective c libraries and involves a lot of complicated header configuration more info here bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible pull this repo verifies the issue is related to wrapped objc libraries blaze clean blaze run c opt minimalexample mylibbrokentest linker error blaze clean blaze run c opt minimalexample mylibworkingbinary correctly outputs blaze clean blaze run c opt minimalexample mylibworkingtest correctly outputs substitute target under test in the build file and run the commands again to verify that the issue is related to objc library what operating system are you running bazel on catalina this is my first time trying to build my codebase since getting a new machine i ll note that i didn t have this issue on an older version of bazel on osx mountain lion even though i know how useless that info might be so it could have been introduced more recently it could also depend on the gtest version i no longer have that machine to do a side by side test what s the output of bazel info release release though the issue exists as far back as if bazel info release returns development version or non git tell us how you built bazel bazel installer darwin sh what s the output of git remote get url origin git rev parse master git rev parse head have you found anything relevant by searching the web i haven t this feels like a very specific issue any other information logs or outputs that you want to share none the repo should tell you everything i know
1
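For readers without the repo handy, the dependency shape the report describes is small; a restated Starlark sketch (target and file names are illustrative, following the MinimalExample naming in the commands above):

```python
# BUILD file: Starlark sketch of the failing shape
objc_library(
    name = "objc_impl",
    srcs = ["impl.mm"],
)

cc_library(
    name = "my_lib",
    hdrs = ["my_lib.h"],
    deps = [":objc_impl"],  # C++ wrapper around the Objective-C library
)

cc_test(
    name = "MyLibBrokenTest",  # gTest target: fails with undefined linker symbols
    srcs = ["my_lib_test.cc"],
    deps = [
        ":my_lib",
        "@com_google_googletest//:gtest_main",
    ],
)

cc_binary(
    name = "MyLibWorkingBinary",  # same library dependency, links fine
    srcs = ["main.cc"],
    deps = [":my_lib"],
)
```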
22,373
31,142,280,849
IssuesEvent
2023-08-16 01:43:50
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Flaky test: Different value of snapshot "e2e plugins fails when there is an async error inside an event handler"
stage: backlog process: flaky test topic: flake ❄️ stage: flake stale
### Link to dashboard or CircleCI failure https://app.circleci.com/pipelines/github/cypress-io/cypress/42294/workflows/9555cf90-6d34-4b5e-b561-d698d4f63f11/jobs/1756834/tests#failed-test-0 ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/system-tests/test/plugins_spec.js#L29 ### Analysis <img width="974" alt="Screen Shot 2022-08-22 at 8 51 23 AM" src="https://user-images.githubusercontent.com/26726429/185964306-5f9151db-673b-40c6-9d3d-29ec41676b23.png"> ### Cypress Version 10.5.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
1.0
Flaky test: Different value of snapshot "e2e plugins fails when there is an async error inside an event handler" - ### Link to dashboard or CircleCI failure https://app.circleci.com/pipelines/github/cypress-io/cypress/42294/workflows/9555cf90-6d34-4b5e-b561-d698d4f63f11/jobs/1756834/tests#failed-test-0 ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/system-tests/test/plugins_spec.js#L29 ### Analysis <img width="974" alt="Screen Shot 2022-08-22 at 8 51 23 AM" src="https://user-images.githubusercontent.com/26726429/185964306-5f9151db-673b-40c6-9d3d-29ec41676b23.png"> ### Cypress Version 10.5.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
process
flaky test different value of snapshot plugins fails when there is an async error inside an event handler link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at am src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
1
17,754
23,670,346,480
IssuesEvent
2022-08-27 08:38:20
bjorkgard/public-secretary
https://api.github.com/repos/bjorkgard/public-secretary
closed
Widget: Popular exports
Widget User in process
List the most common exports a user makes and create a quick link to them
1.0
Widget: Popular exports - List the most common exports a user makes and create a quick link to them
process
widget popular exports list the most common exports a user makes and create a quick link to them
1
101,356
4,113,427,950
IssuesEvent
2016-06-07 14:06:36
rathena/rathena
https://api.github.com/repos/rathena/rathena
closed
Knuckle Arrow Bug
bug:skill mode:prerenewal mode:renewal priority:low server:map status:confirmed
I'm using the latest revision in renewal mode with the 20130807 client, and I found this bug with Knuckle Arrow. In my test, when I use Knuckle Arrow on a player (PvP), my character's position stacks with the targeted player, similar to the Body Relocation (snap) behavior. I tested on iRO and it shouldn't do this. Edit: server git hash daa9e01
1.0
Knuckle Arrow Bug - I'm using the latest revision in renewal mode with the 20130807 client, and I found this bug with Knuckle Arrow. In my test, when I use Knuckle Arrow on a player (PvP), my character's position stacks with the targeted player, similar to the Body Relocation (snap) behavior. I tested on iRO and it shouldn't do this. Edit: server git hash daa9e01
non_process
knuckle arrow bug i m using the latest revision in renewal mode with the client and i found this bug with knuckle arrow in my test when i use knuckle arrow on a player pvp my character s position stacks with the targeted player similar to the body relocation snap behavior i tested on iro and it shouldn t do this edit server git hash
0
14,707
17,892,727,224
IssuesEvent
2021-09-08 03:04:20
medic/cht-core
https://api.github.com/repos/medic/cht-core
closed
Release 3.12.0
Type: Internal process
# Planning - Product Manager - [x] Create a repo milestone and add this issue to it. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining. - [x] Add all the issues to be worked on to the milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes. - [ ] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo - [x] Assign an engineer as Release Manager for this release. - [ ] Assign product team members to complete end-user documentation improvements for this release. # Development - Release Manager When development is ready to begin one of the engineers should be nominated as a Release Manager. They will be responsible for making sure the following tasks are completed though not necessarily completing them. - [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`. - [x] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them. - [x] Go through all features and improvements scheduled for this release and raise cht-docs issues for product education to be written where appropriate. If in doubt, check with the product manager. - [x] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release manager is to update this every week until the version is released. - [x] Announce the kickoff of development for the release on the [CHT forum](https://forum.communityhealthtoolkit.org), under the "Product - Releases" category. # Releasing - Release Manager Once all issues have passed acceptance testing and have been merged into `master` release testing can begin. - [x] Create a new release branch from `master` named `<major>.<minor>.x` in `cht-core`. Post a message to #development using this template: ``` @core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks! ``` - [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. - [x] [Import translations keys](https://docs.communityhealthtoolkit.org/core/overview/translations/#adding-new-keys) into POE and notify the #translations Slack channel translate new and updated values, for example: ``` @channel I've just updated the translations in POE. These keys have been added: "<added-list>", and these keys have been updated: "<updated-list>" ``` - [x] Create a new document in the [release-notes folder](https://github.com/medic/cht-core/tree/master/release-notes) in `master`. 
Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions. Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes/index.js) to export the issues into our changelog format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient. - [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta. - [x] [Export the translations](https://docs.communityhealthtoolkit.org/core/overview/translations/#exporting-changes-from-poeditor-to-github), delete empty translation files and commit to `master`. Cherry-pick the commit into the release branch. - [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release. - [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>` - [x] Upgrade the `demo-cht.dev` instance to this version. - [x] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/overview/supported-software/) and update the EOL date and status of previous releases. # Communicating - Product Manager - [ ] Ask the Product Designer to create release artwork - [ ] Create a DRAFT blog post on medic.org Wordpress site promoting the release using on the release notes and artwork above. Once it's ready ask the Comms Officer to review and publish it. - [x] Announce the release in #products using this template: ``` @channel *We're excited to announce the release of {{version}}* New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs. Read the release notes for full details: {{url}} Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://docs.communityhealthtoolkit.org/core/overview/supported-software/ See what's scheduled for the next releases: https://github.com/medic/cht-core/milestones ``` - [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category. You can use the previous message and omit `@channel`. - [ ] Announce the release of new documentation for features and improvements in the release on the [CHT Forum](https://forum.communityhealthtoolkit.org/c/product/documentation/28), under the "Product - Documentation" category. 
- [ ] Schedule a Release communication call to educate stakeholders on product and documentation improvements - [ ] Mark this issue "done" and close the milestone.
1.0
Release 3.12.0 - # Planning - Product Manager - [x] Create a repo milestone and add this issue to it. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining. - [x] Add all the issues to be worked on to the milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes. - [ ] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo - [x] Assign an engineer as Release Manager for this release. - [ ] Assign product team members to complete end-user documentation improvements for this release. # Development - Release Manager When development is ready to begin one of the engineers should be nominated as a Release Manager. They will be responsible for making sure the following tasks are completed though not necessarily completing them. - [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`. - [x] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them. - [x] Go through all features and improvements scheduled for this release and raise cht-docs issues for product education to be written where appropriate. If in doubt, check with the product manager. - [x] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release manager is to update this every week until the version is released. - [x] Announce the kickoff of development for the release on the [CHT forum](https://forum.communityhealthtoolkit.org), under the "Product - Releases" category. # Releasing - Release Manager Once all issues have passed acceptance testing and have been merged into `master` release testing can begin. - [x] Create a new release branch from `master` named `<major>.<minor>.x` in `cht-core`. Post a message to #development using this template: ``` @core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks! ``` - [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. - [x] [Import translations keys](https://docs.communityhealthtoolkit.org/core/overview/translations/#adding-new-keys) into POE and notify the #translations Slack channel translate new and updated values, for example: ``` @channel I've just updated the translations in POE. These keys have been added: "<added-list>", and these keys have been updated: "<updated-list>" ``` - [x] Create a new document in the [release-notes folder](https://github.com/medic/cht-core/tree/master/release-notes) in `master`. 
Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions. Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes/index.js) to export the issues into our changelog format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient. - [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta. - [x] [Export the translations](https://docs.communityhealthtoolkit.org/core/overview/translations/#exporting-changes-from-poeditor-to-github), delete empty translation files and commit to `master`. Cherry-pick the commit into the release branch. - [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release. - [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>` - [x] Upgrade the `demo-cht.dev` instance to this version. - [x] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/overview/supported-software/) and update the EOL date and status of previous releases. # Communicating - Product Manager - [ ] Ask the Product Designer to create release artwork - [ ] Create a DRAFT blog post on medic.org Wordpress site promoting the release using on the release notes and artwork above. Once it's ready ask the Comms Officer to review and publish it. - [x] Announce the release in #products using this template: ``` @channel *We're excited to announce the release of {{version}}* New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs. Read the release notes for full details: {{url}} Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://docs.communityhealthtoolkit.org/core/overview/supported-software/ See what's scheduled for the next releases: https://github.com/medic/cht-core/milestones ``` - [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category. You can use the previous message and omit `@channel`. - [ ] Announce the release of new documentation for features and improvements in the release on the [CHT Forum](https://forum.communityhealthtoolkit.org/c/product/documentation/28), under the "Product - Documentation" category. 
- [ ] Schedule a Release communication call to educate stakeholders on product and documentation improvements - [ ] Mark this issue "done" and close the milestone.
process
release planning product manager create a repo milestone and add this issue to it we use so if there are breaking changes increment the major otherwise if there are new features increment the minor otherwise increment the service pack breaking changes in our case relate to updated software requirements egs couchdb node minimum browser versions broken backwards compatibility in an api or a major visual update that requires user retraining add all the issues to be worked on to the milestone ideally each minor release will have one or two features a handful of improvements and plenty of bug fixes identify any features and improvements in the release that need end user documentation beyond eng team documentation improvements and create corresponding issues in the cht docs repo assign an engineer as release manager for this release assign product team members to complete end user documentation improvements for this release development release manager when development is ready to begin one of the engineers should be nominated as a release manager they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version raise a new issue called update dependencies for with a description that links to this should be done early in the release cycle so find a volunteer to take this on and assign it to them go through all features and improvements scheduled for this release and raise cht docs issues for product education to be written where appropriate if in doubt check with the product manager write an update in the weekly product team call agenda summarising development and acceptance testing progress and identifying any blockers the release manager is to update this every week until the version is released announce the kickoff of development for the release on the under the product releases category releasing release manager once all issues have passed acceptance testing and have been merged into master release testing can begin create a new release branch from master named x in cht core post a message to development using this template core devs i ve just created the x release branch please be aware that any further changes intended for this release will have to be merged to master then backported thanks build a beta named beta by pushing a git tag and when ci completes successfully notify the qa team that it s ready for release testing into poe and notify the translations slack channel translate new and updated values for example channel i ve just updated the translations in poe these keys have been added and these keys have been updated create a new document in the in master ensure all issues are in the gh milestone that they re correctly labelled in particular they have the right type ui ux if they change the ui and breaking change if appropriate and have human readable descriptions use to export the issues into our changelog format manually document any known migration steps and known issues provide description screenshots videos and anything else to help communicate particularly important changes document any required or recommended upgrades to our other products eg medic conf medic gateway medic android assign the pr to a the director of technology and b an sre to review and confirm the documentation on upgrade instructions and breaking changes is sufficient until release testing passes make sure regressions 
are fixed in master cherry pick them into the release branch and release another beta delete empty translation files and commit to master cherry pick the commit into the release branch create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic upgrade the demo cht dev instance to this version add the release to the and update the eol date and status of previous releases communicating product manager ask the product designer to create release artwork create a draft blog post on medic org wordpress site promoting the release using on the release notes and artwork above once it s ready ask the comms officer to review and publish it announce the release in products using this template channel we re excited to announce the release of version new features include key features we ve also implemented loads of other improvements and fixed a heap of bugs read the release notes for full details url following our support policy versions versions are no longer supported projects running these versions should start planning to upgrade in the near future for more details read our software support documentation see what s scheduled for the next releases announce the release on the under the product releases category you can use the previous message and omit channel announce the release of new documentation for features and improvements in the release on the under the product documentation category schedule a release communication call to educate stakeholders on product and documentation improvements mark this issue done and close the milestone
1
216,287
7,302,901,745
IssuesEvent
2018-02-27 11:12:36
python/mypy
https://api.github.com/repos/python/mypy
closed
UnboundLocalError handling
feature needs discussion priority-1-normal
Mypy currently doesn't find any problem with the following code: ```py3 def f(x: int) -> str: if x > 0: result = "larger" elif x < 0: result = "smaller" return result ``` It *will* find a related but different type problem if you add an extra line: ```py3 def f(x: int) -> str: result = None if x > 0: result = "larger" elif x < 0: result = "smaller" return result # Incompatible return value type (got "Optional[str]", expected "str") ``` AFAICT, a local defined in a branch could be viewed as a Union of its proper type and Unbound. In which case the first example could raise a Mypy error about result being potentially unbound (as it is for x=0). Thoughts?
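For reference, a minimal sketch of the pattern that already type-checks cleanly today: making the branches exhaustive binds the variable on every path, so mypy has nothing to flag. The `else` value is an illustrative assumption, not part of the report above.

```py3
def f(x: int) -> str:
    if x > 0:
        result = "larger"
    elif x < 0:
        result = "smaller"
    else:
        # Exhaustive branching: result is bound on every path,
        # so no Unbound/Optional component can leak into the return.
        result = "equal"
    return result
```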
1.0
UnboundLocalError handling - Mypy currently doesn't find any problem with the following code: ```py3 def f(x: int) -> str: if x > 0: result = "larger" elif x < 0: result = "smaller" return result ``` It *will* find a related but different type problem if you add an extra line: ```py3 def f(x: int) -> str: result = None if x > 0: result = "larger" elif x < 0: result = "smaller" return result # Incompatible return value type (got "Optional[str]", expected "str") ``` AFAICT, a local defined in a branch could be viewed as a Union of its proper type and Unbound. In which case the first example could raise a Mypy error about result being potentially unbound (as it is for x=0). Thoughts?
non_process
unboundlocalerror handling mypy currently doesn t find any problem with the following code def f x int str if x result larger elif x result smaller return result it will find a related but different type problem if you add an extra line def f x int str result none if x result larger elif x result smaller return result incompatible return value type got optional expected str afaict a local defined in a branch could be viewed as a union of its proper type and unbound in which case the first example could raise a mypy error about result being potentially unbound as it is for x thoughts
0
7,834
11,011,707,166
IssuesEvent
2019-12-04 16:46:56
90301/TextReplace
https://api.github.com/repos/90301/TextReplace
closed
Plus Base () operation
Log Processor
Plus base () is basically a 2nd program that concatenates output to the previous programs. EX: ``` wordSearch(ID) plusBase() wordSearch(Number) Output example: polId polNumber ```
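A rough sketch of the intended dataflow as read from the example above; the function names and the space-joined output are inferred from this issue, not taken from the repo's actual code.

```python
def word_search(term: str) -> str:
    # Hypothetical stand-in for the wordSearch() operation:
    # resolve a search term to the token it matched.
    return {"ID": "polId", "Number": "polNumber"}[term]

def plus_base(*outputs: str) -> str:
    # plusBase() concatenates the outputs of the surrounding programs.
    return " ".join(outputs)

print(plus_base(word_search("ID"), word_search("Number")))  # polId polNumber
```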
1.0
Plus Base () operation - Plus base () is basically a 2nd program that concatenates output to the previous programs. EX: ``` wordSearch(ID) plusBase() wordSearch(Number) Output example: polId polNumber ```
process
plus base operation plus base is basically a program that concatenates output to the previous programs ex wordsearch id plusbase wordsearch number output example polid polnumber
1
2,309
5,126,147,713
IssuesEvent
2017-01-10 00:32:44
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
How to reach me (veqryn)
Process
I should have written this earlier, but better late than never. Due to the high activity in the triplea github repo (good job all!) and the fact that I had it 'watched', the inbox folder where all triplea stuff goes was getting 10s of messages per day. I haven't been active in a year or two now, and couldn't keep up with all the messages, so I basically put them off and never ended up reading most. I've now changed that to the following: I'm only sent notifications when someone does `@veqryn` or when I directly post in an issue/pr. So if you want to reach me or involve me, please @veqryn and I will at least read that thread up to your message (`@veqryn` again later if you want me to read any new stuff). Or you can just email me directly. I've also silenced gitter, since I'm not involved in the day-to-day, so I don't believe I will receive any messages from it. (I am not sure if I will get any messages from other repos besides the main "triplea" one, as I do not have them watched, so please email me if I don't respond within a couple days.)
1.0
How to reach me (veqryn) - I should have written this earlier, but better late than never. Due to the high activity in the triplea github repo (good job all!) and the fact that I had it 'watched', the inbox folder where all triplea stuff goes was getting 10s of messages per day. I haven't been active in a year or two now, and couldn't keep up with all the messages, so I basically put them off and never ended up reading most. I've now changed that to the following: I'm only sent notifications when someone does `@veqryn` or when I directly post in an issue/pr. So if you want to reach me or involve me, please @veqryn and I will at least read that thread up to your message (`@veqryn` again later if you want me to read any new stuff). Or you can just email me directly. I've also silenced gitter, since I'm not involved in the day-to-day, so I don't believe I will receive any messages from it. (I am not sure if I will get any messages from other repos besides the main "triplea" one, as I do not have them watched, so please email me if I don't respond within a couple days.)
process
how to reach me veqryn i should have written this earlier but better late than never due to the high activity in the triplea github repo good job all and the fact that i had it watched the inbox folder where all triplea stuff goes was getting of messages per day i haven t been active in a year or two now and couldn t keep up with all the messages so i basically put them off and never ended up reading most i ve now changed that to the following i m only sent notifications when someone does veqryn or when i directly post in an issue pr so if you want to reach me or involve me please veqryn and i will at least read that thread up to your message veqryn again later if you want me to read any new stuff or you can just email me directly i ve also silenced gitter since i m not involved in the day to day so i don t believe i will receive any messages from it i am not sure if i will get any messages from other repos besides the main triplea one as i do not have them watched so please email me if i don t respond within a couple days
1
68,044
21,442,810,779
IssuesEvent
2022-04-25 00:22:31
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Visual hierarchy in Settings is not distinct enough
T-Defect X-Needs-Design
Particularly on Dark theme, the headings & sections for settings jumble together badly. It needs more whitespace - possibly vertical between the sections; possibly indenting the contents. <img width="575" alt="Screenshot 2020-02-21 at 14 07 31" src="https://user-images.githubusercontent.com/1294269/75041287-31870480-54b4-11ea-8c8d-4aa33e6d3a7f.png">
1.0
Visual hierarchy in Settings is not distinct enough - Particularly on Dark theme, the headings & sections for settings jumble together badly. It needs more whitespace - possibly vertical between the sections; possibly indenting the contents. <img width="575" alt="Screenshot 2020-02-21 at 14 07 31" src="https://user-images.githubusercontent.com/1294269/75041287-31870480-54b4-11ea-8c8d-4aa33e6d3a7f.png">
non_process
visual hierarchy in settings is not distinct enough particularly on dark theme the headings sections for settings jumble together badly it needs more whitespace possibly vertical between the sections possibly indenting the contents img width alt screenshot at src
0
144,004
5,533,952,674
IssuesEvent
2017-03-21 14:28:49
robotology/yarp
https://api.github.com/repos/robotology/yarp
opened
yarp::os::Subscriber segfault when using callback
Component: ROS Integration Priority: Normal Type: Bug
The following code leads to a segfault (the first time data are received) ``` m_dataReader = new DataReaderClass; m_dataReader->useCallback(); m_dataReader->topic("/ros_topic"); ``` while the following is ok: ``` m_dataReader = new DataReaderClass; m_dataReader->topic("/ros_topic"); m_dataReader->useCallback(); ``` Being DataReaderClass defined as: ``` class DataReaderClass: public yarp::os::Subscriber<ros_msg_type> ``` The issue could be related to the fact that `topic()` automatically creates a connection? (The connection is typically created later when using a standard buffered port with callback, instead).
1.0
yarp::os::Subscriber segfault when using callback - The following code leads to a segfault (the first time data are received) ``` m_dataReader = new DataReaderClass; m_dataReader->useCallback(); m_dataReader->topic("/ros_topic"); ``` while the following is ok: ``` m_dataReader = new DataReaderClass; m_dataReader->topic("/ros_topic"); m_dataReader->useCallback(); ``` Being DataReaderClass defined as: ``` class DataReaderClass: public yarp::os::Subscriber<ros_msg_type> ``` The issue could be related to the fact that `topic()` automatically creates a connection? (The connection is typically created later when using a standard buffered port with callback, instead).
non_process
yarp os subscriber segfault when using callback the following code leads to a segfault the first time data are received m datareader new datareaderclass m datareader usecallback m datareader topic ros topic while the following is ok m datareader new datareaderclass m datareader topic ros topic m datareader usecallback being datareaderclass defined as class datareaderclass public yarp os subscriber the issue could be related to the fact that topic automatically creates a connection the connection is typically created later when using a standard buffered port with callback instead
0
374,250
26,108,298,094
IssuesEvent
2022-12-27 15:59:12
lugenx/ecohabit
https://api.github.com/repos/lugenx/ecohabit
closed
Rename "how-to-contribute.md" file to "CONTRIBUTING.md" and move it to the root folder.
documentation help wanted good first issue
This will make GitHub recognize the file and show it to users wherever it's needed most.
1.0
Rename "how-to-contribute.md" file to "CONTRIBUTING.md" and move it to the root folder. - This will make GitHub recognize the file and show the users wherever it's needed most.
non_process
rename how to contribute md file to contributing md and move it to the root folder this will make github recognize the file and show it to users wherever it s needed most
0
1,934
4,762,560,500
IssuesEvent
2016-10-25 11:57:12
CERNDocumentServer/cds
https://api.github.com/repos/CERNDocumentServer/cds
closed
webhooks: AVC tasks refactoring
avc_processing
Update code structure based on newest Invenio-Webhooks approach.
1.0
webhooks: AVC tasks refactoring - Update code structure based on newest Invenio-Webhooks approach.
process
webhooks avc tasks refactoring update code structure based on newest invenio webhooks approach
1
184,583
21,784,914,191
IssuesEvent
2022-05-14 01:47:15
n-devs/freebitco.in-mobile
https://api.github.com/repos/n-devs/freebitco.in-mobile
closed
WS-2019-0493 (High) detected in handlebars-4.1.2.tgz - autoclosed
security vulnerability
## WS-2019-0493 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /freebitco.in-mobile/package.json</p> <p>Path to vulnerable library: freebitco.in-mobile/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-24.7.1.tgz - jest-cli-24.8.0.tgz - core-24.8.0.tgz - reporters-24.8.0.tgz - istanbul-reports-2.2.6.tgz - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system. <p>Publish Date: 2019-11-14 <p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p> <p>Release Date: 2019-11-14</p> <p>Fix Resolution: handlebars - 3.0.8,4.5.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0493 (High) detected in handlebars-4.1.2.tgz - autoclosed - ## WS-2019-0493 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /freebitco.in-mobile/package.json</p> <p>Path to vulnerable library: freebitco.in-mobile/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-24.7.1.tgz - jest-cli-24.8.0.tgz - core-24.8.0.tgz - reporters-24.8.0.tgz - istanbul-reports-2.2.6.tgz - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system. <p>Publish Date: 2019-11-14 <p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p> <p>Release Date: 2019-11-14</p> <p>Fix Resolution: handlebars - 3.0.8,4.5.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file freebitco in mobile package json path to vulnerable library freebitco in mobile node modules handlebars package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz core tgz reporters tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the package s lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
0
78,075
15,569,914,337
IssuesEvent
2021-03-17 01:17:13
benchmarkdebricked/Sylius
https://api.github.com/repos/benchmarkdebricked/Sylius
opened
CVE-2020-28500 (Medium) detected in lodash-4.17.11.tgz
security vulnerability
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /Sylius/package.json</p> <p>Path to vulnerable library: Sylius/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - babel-core-6.26.3.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2) <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7">https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash - 4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28500 (Medium) detected in lodash-4.17.11.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /Sylius/package.json</p> <p>Path to vulnerable library: Sylius/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - babel-core-6.26.3.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2) <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7">https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash - 4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file sylius package json path to vulnerable library sylius node modules lodash package json dependency hierarchy babel core tgz root library x lodash tgz vulnerable library vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions steps to reproduce provided by reporter liyuan chen var lo require lodash function build blank n var ret for var i i n i ret return ret var s build blank var date now lo trim s var time date now console log time time var date now lo tonumber s var time date now console log time time var date now lo trimend s var time date now console log time time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
0
12,299
14,856,208,861
IssuesEvent
2021-01-18 13:51:17
panther-labs/panther
https://api.github.com/repos/panther-labs/panther
closed
Support AWS WAF logs
p1 story team:data processing
### Description Support AWS Web ACL logs: https://docs.aws.amazon.com/waf/latest/developerguide/logging.html ### Acceptance Criteria - Users can select AWS Web ACL logs when onboarding a new source from S3 - Users can select AWS Web ACL logs when creating a new rule
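As background for the log-type work, a minimal sketch of consuming one Web ACL log record; AWS WAF delivers newline-delimited JSON, but treat the exact field names here as illustrative rather than a verified schema.

```python
import json

def parse_waf_record(line: str) -> dict:
    # Each line in a delivered WAF log object is one JSON event.
    event = json.loads(line)
    request = event.get("httpRequest", {})
    return {
        "timestamp": event.get("timestamp"),   # epoch millis
        "action": event.get("action"),         # e.g. ALLOW / BLOCK
        "client_ip": request.get("clientIp"),
        "uri": request.get("uri"),
    }
```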
1.0
Support AWS WAF logs - ### Description Support AWS Web ACL logs: https://docs.aws.amazon.com/waf/latest/developerguide/logging.html ### Acceptance Criteria - Users can select AWS Web ACL logs when onboarding a new source from S3 - Users can select AWS Web ACL logs when creating a new rule
process
support aws waf logs description support aws web acl logs acceptance criteria users can select aws web acl logs when onboarding a new source from users can select aws web acl logs when creating a new rule
1
805,783
29,667,726,916
IssuesEvent
2023-06-11 02:06:52
certbot/certbot
https://api.github.com/repos/certbot/certbot
closed
[feature request] Root CA inclusion
feature request area: cert management priority: unplanned needs-update
Often, certain software requires the concatenation of the root CA and the intermediate together to be treated and trusted as a single CA certificate (e.g. openLDAP, for instance). I'd like to suggest that certbot have the ability to download the root CA signing certificate and place that in `/etc/letsencrypt/live/<domain>/` (technically, the archive) (or a place higher in the hierarchy) as well. Optionally, but highly recommended, provide a "trusted-chain.pem" that consists of a concatenated file of the root CA and intermediate *only*. Otherwise the user is forced to concatenate themselves on every renewal.
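In the meantime the concatenation can live in a deploy hook; a minimal sketch, assuming certbot's documented `RENEWED_LINEAGE` environment variable and a root certificate already fetched to a local path (that path is an assumption here).

```python
import os

def write_trusted_chain(root_ca_path: str = "/etc/ssl/isrg-root.pem") -> None:
    # RENEWED_LINEAGE points at /etc/letsencrypt/live/<domain> during a deploy hook.
    lineage = os.environ["RENEWED_LINEAGE"]
    with open(os.path.join(lineage, "chain.pem")) as f:
        intermediate = f.read()
    with open(root_ca_path) as f:  # hypothetical pre-downloaded root CA
        root = f.read()
    # Intermediate + root concatenated once per renewal, as openLDAP-style consumers expect.
    with open(os.path.join(lineage, "trusted-chain.pem"), "w") as f:
        f.write(intermediate + root)
```

Wired up as `--deploy-hook`, this removes the manual concatenation step on every renewal.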
1.0
[feature request] Root CA inclusion - Often, certain software requires the concatenation of the root CA and the intermediate together to be treated and trusted as a single CA certificate (e.g. openLDAP, for instance). I'd like to suggest that certbot have the ability to download the root CA signing certificate and place that in `/etc/letsencrypt/live/<domain>/` (technically, the archive) (or a place higher in the hierarchy) as well. Optionally, but highly recommended, provide a "trusted-chain.pem" that consists of a concatenated file of the root CA and intermediate *only*. Otherwise the user is forced to concatenate themselves on every renewal.
non_process
root ca inclusion often certain software requires the concatenation of the root ca and the intermediate together to be treated and trusted as a single ca certificate e g openldap for instance i d like to suggest that certbot have the ability to download the root ca signing certificate and place that in etc letsencrypt live technically the archive or a place higher in the hierarchy as well optionally but highly recommended provide a trusted chain pem that consists of a concatenated file of the root ca and intermediate only otherwise the user is forced to concatenate themselves on every renewal
0
11,289
14,098,908,040
IssuesEvent
2020-11-06 00:01:20
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Custom Column after aggregation creates wrong query and fails
.Reproduced Priority:P1 Querying/Nested Queries Querying/Notebook Querying/Processor Type:Bug
**Describe the bug** When adding a Custom Column with fields from aggregated data, then the query generated is mixing the original columns and the aggregated columns, which fails query with column not found. **To Reproduce** Steps to reproduce the behavior: 1. Custom question > Sample Dataset > Orders 2. Summarize "Sum of Subtotal" and "Sum of Total" by "CreatedAt:Year" 3. Custom Column `[Sum of Subtotal] + [Sum of Total]` as "MegaTotal" ![image](https://user-images.githubusercontent.com/1447303/85406731-815d0180-b562-11ea-8ba4-1d28dfbfa1ea.png) 4. Fails with error: ``` Column "source.SUBTOTAL" not found; SQL statement: CREATE FORCE VIEW PUBLIC._8 AS SELECT ("source"."sum" + "source"."sum_2") AS "MegaTotal", "source".SUBTOTAL AS SUBTOTAL, "source".TOTAL AS TOTAL, "source".CREATED_AT AS CREATED_AT, "source"."sum" AS "sum", "source"."sum_2" AS "sum_2", "source".CREATED_AT AS CREATED_AT_2 FROM ( SELECT PARSEDATETIME(YEAR(PUBLIC.ORDERS.CREATED_AT), 'yyyy') AS CREATED_AT, SUM(PUBLIC.ORDERS.SUBTOTAL) AS "sum", SUM(PUBLIC.ORDERS.TOTAL) AS "sum_2" FROM PUBLIC.ORDERS GROUP BY PARSEDATETIME(YEAR(PUBLIC.ORDERS.CREATED_AT), 'yyyy') ORDER BY 1 ) "source" [42122-197] ``` 5. "View the SQL": ```SQL SELECT "source"."CREATED_AT" AS "CREATED_AT", "source"."sum" AS "sum", "source"."sum_2" AS "sum_2", "source"."MegaTotal" AS "MegaTotal" FROM (SELECT ( "source"."sum" + "source"."sum_2" ) AS "MegaTotal", "source"."SUBTOTAL" AS "SUBTOTAL", "source"."TOTAL" AS "TOTAL", "source"."CREATED_AT" AS "CREATED_AT", "source"."sum" AS "sum", "source"."sum_2" AS "sum_2", "source"."CREATED_AT" AS "CREATED_AT_2" FROM (SELECT parsedatetime(year("PUBLIC"."ORDERS"."CREATED_AT"), 'yyyy') AS "CREATED_AT", sum("PUBLIC"."ORDERS"."SUBTOTAL") AS "sum", sum("PUBLIC"."ORDERS"."TOTAL") AS "sum_2" FROM "PUBLIC"."ORDERS" GROUP BY parsedatetime(year("PUBLIC"."ORDERS"."CREATED_AT"), 'yyyy') ORDER BY parsedatetime(year("PUBLIC"."ORDERS"."CREATED_AT"), 'yyyy') ASC) "source") "source" LIMIT 1048576 ``` Alternative way to reproduce similar problem with unneeded extra columns that causes the query to fail: 1. Simple question > Sample Dataset > Orders 2. Summarize by "Distinct values of User ID" and "Distinct values of Product ID", and save question as "Q1" 3. Custom question > Saved Questions > Q1 4. Add Custom Column `[Distinct values of Product ID] / [Distinct values of User ID]` as "test" 5. Visualize, the query fails with `Column "source.USER_ID" not found; ...` **Information about your Metabase Installation:** Metabase 0.35.4 and `master` on various backends and datasources - used to work in 0.35.3 and previous. **Additional context** I have a feeling that this might be related to #12507 https://discourse.metabase.com/t/error-on-use-summarized-column-on-custom-column/10623 :arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
1.0
Custom Column after aggregation creates wrong query and fails - **Describe the bug** When adding a Custom Column with fields from aggregated data, then the query generated is mixing the original columns and the aggregated columns, which fails query with column not found. **To Reproduce** Steps to reproduce the behavior: 1. Custom question > Sample Dataset > Orders 2. Summarize "Sum of Subtotal" and "Sum of Total" by "CreatedAt:Year" 3. Custom Column `[Sum of Subtotal] + [Sum of Total]` as "MegaTotal" ![image](https://user-images.githubusercontent.com/1447303/85406731-815d0180-b562-11ea-8ba4-1d28dfbfa1ea.png) 4. Fails with error: ``` Column "source.SUBTOTAL" not found; SQL statement: CREATE FORCE VIEW PUBLIC._8 AS SELECT ("source"."sum" + "source"."sum_2") AS "MegaTotal", "source".SUBTOTAL AS SUBTOTAL, "source".TOTAL AS TOTAL, "source".CREATED_AT AS CREATED_AT, "source"."sum" AS "sum", "source"."sum_2" AS "sum_2", "source".CREATED_AT AS CREATED_AT_2 FROM ( SELECT PARSEDATETIME(YEAR(PUBLIC.ORDERS.CREATED_AT), 'yyyy') AS CREATED_AT, SUM(PUBLIC.ORDERS.SUBTOTAL) AS "sum", SUM(PUBLIC.ORDERS.TOTAL) AS "sum_2" FROM PUBLIC.ORDERS GROUP BY PARSEDATETIME(YEAR(PUBLIC.ORDERS.CREATED_AT), 'yyyy') ORDER BY 1 ) "source" [42122-197] ``` 5. "View the SQL": ```SQL SELECT "source"."CREATED_AT" AS "CREATED_AT", "source"."sum" AS "sum", "source"."sum_2" AS "sum_2", "source"."MegaTotal" AS "MegaTotal" FROM (SELECT ( "source"."sum" + "source"."sum_2" ) AS "MegaTotal", "source"."SUBTOTAL" AS "SUBTOTAL", "source"."TOTAL" AS "TOTAL", "source"."CREATED_AT" AS "CREATED_AT", "source"."sum" AS "sum", "source"."sum_2" AS "sum_2", "source"."CREATED_AT" AS "CREATED_AT_2" FROM (SELECT parsedatetime(year("PUBLIC"."ORDERS"."CREATED_AT"), 'yyyy') AS "CREATED_AT", sum("PUBLIC"."ORDERS"."SUBTOTAL") AS "sum", sum("PUBLIC"."ORDERS"."TOTAL") AS "sum_2" FROM "PUBLIC"."ORDERS" GROUP BY parsedatetime(year("PUBLIC"."ORDERS"."CREATED_AT"), 'yyyy') ORDER BY parsedatetime(year("PUBLIC"."ORDERS"."CREATED_AT"), 'yyyy') ASC) "source") "source" LIMIT 1048576 ``` Alternative way to reproduce similar problem with unneeded extra columns that causes the query to fail: 1. Simple question > Sample Dataset > Orders 2. Summarize by "Distinct values of User ID" and "Distinct values of Product ID", and save question as "Q1" 3. Custom question > Saved Questions > Q1 4. Add Custom Column `[Distinct values of Product ID] / [Distinct values of User ID]` as "test" 5. Visualize, the query fails with `Column "source.USER_ID" not found; ...` **Information about your Metabase Installation:** Metabase 0.35.4 and `master` on various backends and datasources - used to work in 0.35.3 and previous. **Additional context** I have a feeling that this might be related to #12507 https://discourse.metabase.com/t/error-on-use-summarized-column-on-custom-column/10623 :arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
process
custom column after aggregation creates wrong query and fails describe the bug when adding a custom column with fields from aggregated data then the query generated is mixing the original columns and the aggregated columns which fails query with column not found to reproduce steps to reproduce the behavior custom question sample dataset orders summarize sum of subtotal and sum of total by createdat year custom column as megatotal fails with error column source subtotal not found sql statement create force view public as select source sum source sum as megatotal source subtotal as subtotal source total as total source created at as created at source sum as sum source sum as sum source created at as created at from select parsedatetime year public orders created at yyyy as created at sum public orders subtotal as sum sum public orders total as sum from public orders group by parsedatetime year public orders created at yyyy order by source view the sql sql select source created at as created at source sum as sum source sum as sum source megatotal as megatotal from select source sum source sum as megatotal source subtotal as subtotal source total as total source created at as created at source sum as sum source sum as sum source created at as created at from select parsedatetime year public orders created at yyyy as created at sum public orders subtotal as sum sum public orders total as sum from public orders group by parsedatetime year public orders created at yyyy order by parsedatetime year public orders created at yyyy asc source source limit alternative way to reproduce similar problem with unneeded extra columns that causes the query to fail simple question sample dataset orders summarize by distinct values of user id and distinct values of product id and save question as custom question saved questions add custom column as test visualize the query fails with column source user id not found information about your metabase installation metabase and master on various backends and datasources used to work in and previous additional context i have a feeling that this might be related to arrow down please click the reaction instead of leaving a or update comment
1
19,161
3,423,372,397
IssuesEvent
2015-12-09 06:01:19
joyent/sdc-adminui
https://api.github.com/repos/joyent/sdc-adminui
closed
Given a network, provision IPs on it with specific MAC addresses
enhancement need design review
Given a network, provision IPs on it with specific MAC addresses. For instance on ovh (dedicated host), you can buy ip subnets; however, you need to pre-generate them in their admin menu. It would be helpful if I was able to prefill the entire subnet with mac addresses rather than use these commands manually: sdc sdc-napi /nics -X POST -d '{ "vlan_id": 0, "nic_tag": "external", "ip" :"xxx.xxx.xxx.xxx", "primary": true, "owner_uuid": "UUID", "belongs_to_uuid": "UUID", "network_uuid": "UUID", "belongs_to_type": "zone", "mac":"00:00:00:00:00:00"}' sdc sdc-vmapi /vms/UUID?action=add_nics -X POST -d '{"macs":"00:00:00:00:00:00"}'
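A sketch of scripting that prefill around the commands above; the payload mirrors the sdc-napi call shown, while the MAC numbering scheme is purely illustrative.

```python
import ipaddress
import json
import subprocess

def prefill_subnet(cidr: str, owner_uuid: str, zone_uuid: str, network_uuid: str) -> None:
    for i, ip in enumerate(ipaddress.ip_network(cidr).hosts()):
        # Locally administered MAC range; the numbering is an illustrative choice.
        mac = "02:00:00:00:{:02x}:{:02x}".format(i // 256, i % 256)
        payload = {
            "vlan_id": 0, "nic_tag": "external", "ip": str(ip),
            "owner_uuid": owner_uuid, "belongs_to_uuid": zone_uuid,
            "network_uuid": network_uuid, "belongs_to_type": "zone", "mac": mac,
        }
        subprocess.run(
            ["sdc", "sdc-napi", "/nics", "-X", "POST", "-d", json.dumps(payload)],
            check=True,
        )
```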
1.0
Given a network, provision IPs on it with specific MAC addresses - Given a network, provision IPs on it with specific MAC addresses. For instance on ovh (dedicated host), you can buy ip subnets; however, you need to pre-generate them in their admin menu. It would be helpful if I was able to prefill the entire subnet with mac addresses rather than use these commands manually: sdc sdc-napi /nics -X POST -d '{ "vlan_id": 0, "nic_tag": "external", "ip" :"xxx.xxx.xxx.xxx", "primary": true, "owner_uuid": "UUID", "belongs_to_uuid": "UUID", "network_uuid": "UUID", "belongs_to_type": "zone", "mac":"00:00:00:00:00:00"}' sdc sdc-vmapi /vms/UUID?action=add_nics -X POST -d '{"macs":"00:00:00:00:00:00"}'
non_process
given a network provision ips on it with specific mac addresses given a network provision ips on it with specific mac addresses for instance on ovh dedicated host you can buy ip subnets however you need to pre generate them in their admin menu it would be helpful if i was able to prefill the entire subnet with mac addresses rather than use these commands manually sdc sdc napi nics x post d vlan id nic tag external ip xxx xxx xxx xxx primary true owner uuid uuid belongs to uuid uuid network uuid uuid belongs to type zone mac sdc sdc vmapi vms uuid action add nics x post d macs
0
14,833
18,169,637,342
IssuesEvent
2021-09-27 18:21:26
googleapis/python-firestore
https://api.github.com/repos/googleapis/python-firestore
closed
Add support to run emulator for system tests.
api: firestore type: process
Currently the system tests do not run with the emulator. ``` ERROR grpc._plugin_wrapping:_plugin_wrapping.py:82 AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x11169b310>" raised exception! Traceback (most recent call last): File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/grpc/_plugin_wrapping.py", line 78, in __call__ context, _AuthMetadataPluginCallback(callback_state, callback)) File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 86, in __call__ callback(self._get_authorization_headers(context), None) File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 73, in _get_authorization_headers self._request, context.method_name, context.service_url, headers File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/credentials.py", line 134, in before_request self.apply(headers) File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/credentials.py", line 112, in apply if self.quota_project_id: File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/credentials.py", line 83, in quota_project_id return self._quota_project_id AttributeError: 'EmulatorCreds' object has no attribute '_quota_project_id' ```
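The traceback points at an emulator credentials object that predates the `quota_project_id` attribute google-auth now reads; a minimal sketch of a stub that satisfies that code path, shown for illustration rather than as the library's actual class.

```python
from google.auth import credentials

class EmulatorCreds(credentials.Credentials):
    """Anonymous credentials for emulator use; never refreshed."""

    def __init__(self):
        # The base initializer sets _quota_project_id (among other fields),
        # which is what the AttributeError above says is missing.
        super().__init__()
        self.token = b"emulator"
        self.expiry = None

    @property
    def valid(self):
        return True

    def refresh(self, request):
        raise RuntimeError("The emulator never requires a token refresh.")
```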
1.0
Add support to run emulator for system tests. - Currently the system tests do not run with the emulator. ``` ERROR grpc._plugin_wrapping:_plugin_wrapping.py:82 AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x11169b310>" raised exception! Traceback (most recent call last): File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/grpc/_plugin_wrapping.py", line 78, in __call__ context, _AuthMetadataPluginCallback(callback_state, callback)) File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 86, in __call__ callback(self._get_authorization_headers(context), None) File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 73, in _get_authorization_headers self._request, context.method_name, context.service_url, headers File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/credentials.py", line 134, in before_request self.apply(headers) File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/credentials.py", line 112, in apply if self.quota_project_id: File "/Users/crwilcox/workspace/python-firestore/.nox/system-3-7/lib/python3.7/site-packages/google/auth/credentials.py", line 83, in quota_project_id return self._quota_project_id AttributeError: 'EmulatorCreds' object has no attribute '_quota_project_id' ```
process
add support to run emulator for system tests currently the system tests do not run with the emulator error grpc plugin wrapping plugin wrapping py authmetadataplugincallback raised exception traceback most recent call last file users crwilcox workspace python firestore nox system lib site packages grpc plugin wrapping py line in call context authmetadataplugincallback callback state callback file users crwilcox workspace python firestore nox system lib site packages google auth transport grpc py line in call callback self get authorization headers context none file users crwilcox workspace python firestore nox system lib site packages google auth transport grpc py line in get authorization headers self request context method name context service url headers file users crwilcox workspace python firestore nox system lib site packages google auth credentials py line in before request self apply headers file users crwilcox workspace python firestore nox system lib site packages google auth credentials py line in apply if self quota project id file users crwilcox workspace python firestore nox system lib site packages google auth credentials py line in quota project id return self quota project id attributeerror emulatorcreds object has no attribute quota project id
1
3,694
6,719,259,466
IssuesEvent
2017-10-15 22:06:35
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Race condition in shutdown hook (process.on('exit'))
process question
On Node.js versions 6 and 8 (I assume 7 acts the same way) I have a process.on('exit') hook like so: ```js process.once('exit', function (code: number) { fs.appendFileSync(file,'data'); process.exit(code); }); ``` I have a handful of calls to fs.appendFileSync in this shutdown hook. 4/5 times, there is no delay, nothing...1/5 of the time, there is a huge delay, of about 2-3 seconds. So there is some form of race condition going on. I don't think it's my code. If I comment out all the fs.appendFileSync calls (there are about 5), then the race condition doesn't seem to occur, as frequently. Does this sound fishy? idk
1.0
Race condition in shutdown hook (process.on('exit')) - On Node.js versions 6 and 8 (I assume 7 acts the same way) I have a process.on('exit') hook like so: ```js process.once('exit', function (code: number) { fs.appendFileSync(file,'data'); process.exit(code); }); ``` I have a handful of calls to fs.appendFileSync in this shutdown hook. 4/5 times, there is no delay, nothing...1/5 of the time, there is a huge delay, of about 2-3 seconds. So there is some form of race condition going on. I don't think it's my code. If I comment out all the fs.appendFileSync calls (there are about 5), then the race condition doesn't seem to occur, as frequently. Does this sound fishy? idk
process
race condition in shutdown hook process on exit on node js versions and i assume acts the same way i have a process on exit hook like so js process once exit function code number fs appendfilesync file data process exit code i have a handful of calls to fs appendfilesync in this shutdown hook times there is no delay nothing of the time there is a huge delay of about seconds so there is some form of race condition going on i don t think it s my code if i comment out all the fs appendfilesync calls there are about then the race condition doesn t seem to occur as frequently does this sound fishy idk
1
3,507
6,559,857,382
IssuesEvent
2017-09-07 06:52:27
inasafe/inasafe-realtime
https://api.github.com/repos/inasafe/inasafe-realtime
closed
Realtime flood translation in Bahasa Indonesia
feature request flood in progress realtime processor web page
Problem We have a good real time monitoring [http://realtime.inasafe.org/], but there are some problems when viewing it in Bahasa Indonesia mode. When a new user opens the realtime page and only knows Bahasa, they will always want to translate it from English to Bahasa Indonesia. But in the current release, the realtime page is not yet translated into Bahasa. From the menu down to the flood data query, most of it is not translated yet. See original ticket at https://github.com/inasafe/inasafe/issues/3291 for further discussion.
1.0
Realtime flood translation in Bahasa Indonesia - Problem We have a good real time monitoring [http://realtime.inasafe.org/], but there are some problems when viewing it in Bahasa Indonesia mode. When a new user opens the realtime page and only knows Bahasa, they will always want to translate it from English to Bahasa Indonesia. But in the current release, the realtime page is not yet translated into Bahasa. From the menu down to the flood data query, most of it is not translated yet. See original ticket at https://github.com/inasafe/inasafe/issues/3291 for further discussion.
process
realtime flood translation in bahasa indonesia problem we have a good real time monitoring but there are some problems when viewing it in bahasa indonesia mode when a new user opens the realtime page and only knows bahasa they will always want to translate it from english to bahasa indonesia but in the current release the realtime page is not yet translated into bahasa from the menu down to the flood data query most of it is not translated yet see original ticket at for further discussion
1
15,953
20,172,183,171
IssuesEvent
2022-02-10 11:24:31
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
A register definition "eplp" is missing in V850 instructions.
Type: Bug Feature: Processor/v850
Describe the bug: I found a bug in V850 instructions. Ghidra cannot disassemble code "fe57 fe83" as follows. ``` 0000c10c fe ?? FEh 0000c10d 57 ?? 57h W 0000c10e fe83 sst.b r16, 0x7e[ ep] ``` It should be a MACU instruction, as follows. ``` 0000c10c fe57 fe83 macu ep, r10, r16r17, eplp ``` To Reproduce: Disassemble a V850 binary file including code "fe57 fe83". Ghidra Version: 10.1.1 Additional context: I changed "-" next to "r28r29" to "eplp" in "Variables.sinc" and "Register.sinc" under "V850/data/languages/Helpers". - Variables.sinc ``` attach variables [ R0004x2 R1115x2 R1620x2 R2731x2 ] [ r0r1 _ r2sp _ r4r5 _ r6r7 _ r8r9 _ r10r11 _ r12r13 _ r14r15 _ r16r17 _ r18r19 _ r20r21 _ r22r23 _ r24r25 _ r26r27 _ r28r29 _ eplp _ ]; ``` - Register.sinc ``` define register offset=0x0 size=0x8 [ r0r1 r2sp r4r5 r6r7 r8r9 r10r11 r12r13 r14r15 r16r17 r18r19 r20r21 r22r23 r24r25 r26r27 r28r29 eplp ]; ```
1.0
A register definition "eplp" is missing in V850 instructions. - Describe the bug: I found a bug in V850 instructions. Ghidra can not disassemble code "fe57 fe83" as the follows. ``` 0000c10c fe ?? FEh 0000c10d 57 ?? 57h W 0000c10e fe83 sst.b r16, 0x7e[ ep] ``` It should be MACU instruction as the follows. ``` 0000c10c fe57 fe83 macu ep, r10, r16r17, eplp ``` To Reproduce: Disassemble V850 binary file including code "fe57 fe83". Ghidra Version: 10.1.1 Additional context: I changed "-" next to "r28r29" to "eplp" in "Variables.sinc" and "Register.sinc" under "V850/data/languages/Helpers". - Variables.sinc ``` 14 attach variables [ R0004x2 R1115x2 R1620x2 R2731x2 ] 15 [ 16 r0r1 _ r2sp _ r4r5 _ r6r7 _ r8r9 _ 17 r10r11 _ r12r13 _ r14r15 _ r16r17 _ r18r19 _ 18 r20r21 _ r22r23 _ r24r25 _ r26r27 _ r28r29 _ 19 eplp _ 20 ]; ``` - Register.sinc ``` 28 define register offset=0x0 size=0x8 29 [ 30 r0r1 r2sp r4r5 r6r7 r8r9 31 r10r11 r12r13 r14r15 r16r17 r18r19 32 r20r21 r22r23 r24r25 r26r27 r28r29 33 eplp 34 ]; ```
process
a register definition eplp is missing in instructions describe the bug i found a bug in instructions ghidra cannot disassemble code as follows fe feh w sst b it should be a macu instruction as follows macu ep eplp to reproduce disassemble a binary file including code ghidra version additional context i changed next to to eplp in variables sinc and register sinc under data languages helpers variables sinc attach variables eplp register sinc define register offset size eplp
1
4,373
7,260,515,955
IssuesEvent
2018-02-18 10:54:39
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[FEATURE][processing] Snap geometries to layer algorithm
Automatic new feature Processing
Original commit: https://github.com/qgis/QGIS/commit/c3a978b9da33f69c6c5440e1166c78c89946dde4 by nyalldawson Port the Geometry Snapper plugin across to the analysis lib, and expose to python bindings. Add a new algorithm which performs the snapping to layers.
1.0
[FEATURE][processing] Snap geometries to layer algorithm - Original commit: https://github.com/qgis/QGIS/commit/c3a978b9da33f69c6c5440e1166c78c89946dde4 by nyalldawson Port the Geometry Snapper plugin across to the analysis lib, and expose to python bindings. Add a new algorithm which performs the snapping to layers.
process
snap geometries to layer algorithm original commit by nyalldawson port the geometry snapper plugin across to the analysis lib and expose to python bindings add a new algorithm which performs the snapping to layers
1
18,580
25,872,654,784
IssuesEvent
2022-12-14 04:40:33
omgwtfwow/segment-for-wp-by-in8-io
https://api.github.com/repos/omgwtfwow/segment-for-wp-by-in8-io
closed
PHP error notices for php 8.0 and above
PHP 8.0 compatibility
The plugin was sending us a lot of php error notices after updating to php 8.0 on every single page, regarding array to string conversion. After inspection, it would appear that the errors come from `schedule_event` function in `class-segment-for-wp-by-in8-io-track-server.php` : `if (mb_strlen(implode($args)) < 8000) {` Since $args contains arrays, this causes the php error notice about array to string conversion. For now, we have temporarily fixed the issue by writing a two dimensional implode script, however we believe it would be interesting to integrate something like a recursive array implode function. While this was not a huge issue, it did clog up our error.log file.
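The recursive implode the report suggests is a small amount of code; sketched here in Python to show the shape of the fix (the plugin itself is PHP, so this is an illustration, not a drop-in patch).

```python
def implode_recursive(sep: str, value) -> str:
    # Flatten nested dicts/lists into one string so that length checks
    # like mb_strlen(implode($args)) never hit an array-to-string notice.
    if isinstance(value, dict):
        value = list(value.values())
    if isinstance(value, (list, tuple)):
        return sep.join(implode_recursive(sep, v) for v in value)
    return str(value)

# e.g. implode_recursive("", {"a": [1, 2], "b": "x"}) == "12x"
```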
True
PHP error notices for php 8.0 and above - The plugin was sending us a lot of php error notices after updating to php 8.0 on every single page, regarding array to string conversion. After inspection, it would appear that the errors come from `schedule_event` function in `class-segment-for-wp-by-in8-io-track-server.php` : `if (mb_strlen(implode($args)) < 8000) {` Since $args contains arrays, this causes the php error notice about array to string conversion. For now, we have temporarily fixed the issue by writing a two dimensional implode script, however we believe it would be interesting to integrate something like a recursive array implode function. While this was not a huge issue, it did clog up our error.log file.
non_process
php error notices for php and above the plugin was sending us a lot of php error notices after updating to php on every single page regarding array to string conversion after inspection it would appear that the errors come from schedule event function in class segment for wp by io track server php if mb strlen implode args since args contains arrays this causes the php error notice about array to string conversion for now we have temporarily fixed the issue by writing a two dimensional implode script however we believe it would be interesting to integrate something like a recursive array implode function while this was not a huge issue it did clog up our error log file
0
13,100
15,496,179,908
IssuesEvent
2021-03-11 02:12:27
fluent/fluent-bit
https://api.github.com/repos/fluent/fluent-bit
closed
Upgrading to 1.6.8 from 1.6.7 we see intermittent shutdown and EBADF (Bad file descriptor)
Stale troubleshooting waiting-for-user work-in-process
## Bug Report **Describe the bug** After upgrading to 1.6.8 from 1.6.7 we see intermittent shutdown of pods and healthcheck failures **To Reproduce** Example log message if applicable (strace) ``` [pid 20023] <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8 [pid 20023] madvise(0x7f55b67ff000, 8335360, MADV_DONTNEED) = 0 [pid 20023] exit(0) = ? [pid 20023] +++ exited with 0 +++ [pid 20022] <... futex resumed> ) = 0 [pid 20022] close(9) = 0 [pid 20022] close(10) = 0 [pid 20022] close(11) = 0 [pid 20022] close(18) = 0 [pid 20022] close(3) = 0 [pid 20022] close(4) = 0 [pid 20022] close(18) = -1 EBADF (Bad file descriptor) [pid 20022] close(19) = 0 [pid 20022] close(6) = 0 [pid 20022] close(7) = 0 [pid 20022] close(169) = 0 [pid 20022] close(170) = 0 [pid 20022] close(171) = 0 [pid 20022] epoll_ctl(8, EPOLL_CTL_DEL, 167, NULL) = 0 [pid 20022] close(167) = 0 [pid 20022] close(184) = 0 [pid 20022] close(8) = 0 [pid 20022] madvise(0x7f55b8232000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b82b6000, 131072, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594b56000, 2101248, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55893d9000, 5251072, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55886d4000, 4988928, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55880c4000, 5668864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6511000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944f8000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b650d000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b648d000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b650f000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55945eb000, 331776, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5ae2000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5911000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58ec000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a1f000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5947000, 16384, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5ab3000, 86016, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58f4000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6495000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5c1d000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a14000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58b2000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594d8e000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5930000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594e94000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594250000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594aa1000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55942a2000, 122880, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594dd7000, 73728, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55947f7000, 73728, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5aca000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5971000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58e3000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594e83000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594768000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b05000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55943c2000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594a90000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594e2a000, 98304, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594ac0000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a8c000, 110592, MADV_DONTNEED) = 0 [pid 20022] 
madvise(0x7f55943cf000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5982000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5908000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594b35000, 73728, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594b25000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594689000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559466a000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594a40000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594582000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59cb000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a66000, 106496, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a85000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944bd000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59bd000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5927000, 28672, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559464c000, 98304, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559419b000, 200704, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5940000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5920000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b597a000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b596f000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5900000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58db000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5afd000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a02000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6487000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6481000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b0b000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5942000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5c0c000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59e0000, 106496, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944e0000, 94208, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559491f000, 147456, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594865000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559473a000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944af000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5954000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559452e000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55949e0000, 98304, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594408000, 540672, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a25000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944d4000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594558000, 151552, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a3d000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59d0000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944c5000, 53248, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5ad3000, 57344, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559453c000, 94208, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594507000, 90112, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b595d000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a49000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b598e000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559458b000, 212992, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b649f000, 446464, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55945cb000, 81920, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b63e6000, 12288, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b63ea000, 593920, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b1d000, 
593920, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58a9000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b585a000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b597e000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b586d000, 12288, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b587d000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5aee000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59a5000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5915000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b7047000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b587a000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5876000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b7052000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5871000, 12288, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b702e000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594497000, 86016, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b585d000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6516000, 16384, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b582d000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b61c0000, 753664, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b627e000, 1470464, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b11000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5834000, 143360, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5894000, 69632, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58b9000, 110592, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5882000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b7001000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6521000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5baf000, 335872, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b700d000, 122880, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b55ff000, 2281472, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b705f000, 1708032, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5c25000, 5873664, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b652f000, 2949120, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b75ff000, 8335360, MADV_DONTNEED) = 0 [pid 20022] exit(0) = ? [pid 20022] +++ exited with 0 +++ <... futex resumed> ) = 0 epoll_ctl(5, EPOLL_CTL_DEL, 6, NULL) = -1 EBADF (Bad file descriptor) close(5) = 0 exit_group(0) = ? +++ exited with 0 +++ cbeverlin@sjc04p1kubhv36:~$ sudo strace -f -p 19978 strace: attach: ptrace(PTRACE_SEIZE, 19978): No such process ``` **Expected behavior** Runs just fine, healthchecks pass **Your Environment** <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: 1.6.8 * Configuration: ``` customParsers: | [PARSER] Name apache Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? 
(?<message>.*)$ [PARSER] Name nginx Format regex Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] # http://rubular.com/r/tjUt3Awgg4 Name cri Format regex Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$ Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L%z [PARSER] Name syslog Format regex Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$ Time_Key time Time_Format %b %d %H:%M:%S filters: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Merge_Log On Keep_Log Off Buffer_Size 0 tls.debug 3 [FILTER] Name modify Match * Set LoggingAgent fluent-bit Set KubeClusterName ${CLUSTER_NAME} Set KubeClusterAbbreviation ${CLUSTER_NAME} Set KubeClusterRegion ${CLUSTER_REGION} Set KubeHostname ${HOST_NAME} Set KubeHostIP ${HOST_IP} [FILTER] Name nest Match * Operation nest Wildcard * Nest_under event [FILTER] Name modify Match * Set index ${SPLUNK_INDEX} inputs: | [INPUT] Name tail Path /var/log/containers/*.log Parser docker Tag kube.* Refresh_Interval 5 Mem_Buf_Limit 2G Read_from_Head on DB /var/log/flb_kube.db DB.Sync Off DB.locking true Skip_Long_Lines On Docker_Mode On [INPUT] Name systemd Tag host.* Path /var/log/journal DB /var/log/flb_systemd.db Strip_Underscores true Systemd_Filter _SYSTEMD_UNIT=kubelet.service Systemd_Filter _SYSTEMD_UNIT=docker.service Systemd_Filter _SYSTEMD_UNIT=containerd.service outputs: | [OUTPUT] Name splunk Match * Host ${SPLUNK_HOST} Splunk_Token ${SPLUNK_TOKEN} Port 443 TLS On TLS.Verify Off Splunk_Send_Raw On tls.debug 3 service: | [SERVICE] HTTP_Server On Config_Watch Off HTTP_Listen 0.0.0.0 HTTP_PORT 2020 Flush 1 Daemon Off Log_Level info Parsers_File parsers.conf Parsers_File custom_parsers.conf ``` * Environment name and version (e.g. Kubernetes? What version?): 1.12, 1.15.11, 1.16.11 and others * Server type and version: Ubuntu server
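The config above enables Fluent Bit's monitoring server (HTTP_Server On, HTTP_PORT 2020), so the failing healthcheck can be probed directly. A minimal sketch, assuming the documented /api/v1/uptime endpoint of that monitoring interface; the host, port, and timeout values are illustrative, not taken from the report:

```python
# Hedged liveness probe against Fluent Bit's built-in monitoring server.
# /api/v1/uptime is part of the documented monitoring API; a non-200 reply
# or a refused connection matches the intermittent healthcheck failures
# described above.
import json
import urllib.request

def fluent_bit_alive(host="127.0.0.1", port=2020, timeout=2.0):
    url = f"http://{host}:{port}/api/v1/uptime"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and "uptime_sec" in json.load(resp)
    except OSError:
        return False  # connection refused/reset while the pod restarts
```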
1.0
Upgrading to 1.6.8 from 1.6.7 we see intermittent shutdown and EBADF (Bad file descriptor) - ## Bug Report **Describe the bug** After upgrading to 1.6.8 from 1.6.7 we see intermittent shutdown of pods and healthcheck failures **To Reproduce** Example log message if applicable (strace) ``` [pid 20023] <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8 [pid 20023] madvise(0x7f55b67ff000, 8335360, MADV_DONTNEED) = 0 [pid 20023] exit(0) = ? [pid 20023] +++ exited with 0 +++ [pid 20022] <... futex resumed> ) = 0 [pid 20022] close(9) = 0 [pid 20022] close(10) = 0 [pid 20022] close(11) = 0 [pid 20022] close(18) = 0 [pid 20022] close(3) = 0 [pid 20022] close(4) = 0 [pid 20022] close(18) = -1 EBADF (Bad file descriptor) [pid 20022] close(19) = 0 [pid 20022] close(6) = 0 [pid 20022] close(7) = 0 [pid 20022] close(169) = 0 [pid 20022] close(170) = 0 [pid 20022] close(171) = 0 [pid 20022] epoll_ctl(8, EPOLL_CTL_DEL, 167, NULL) = 0 [pid 20022] close(167) = 0 [pid 20022] close(184) = 0 [pid 20022] close(8) = 0 [pid 20022] madvise(0x7f55b8232000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b82b6000, 131072, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594b56000, 2101248, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55893d9000, 5251072, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55886d4000, 4988928, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55880c4000, 5668864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6511000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944f8000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b650d000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b648d000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b650f000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55945eb000, 331776, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5ae2000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5911000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58ec000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a1f000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5947000, 16384, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5ab3000, 86016, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58f4000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6495000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5c1d000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a14000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58b2000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594d8e000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5930000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594e94000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594250000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594aa1000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55942a2000, 122880, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594dd7000, 73728, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55947f7000, 73728, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5aca000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5971000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58e3000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594e83000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594768000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b05000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55943c2000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594a90000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594e2a000, 98304, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594ac0000, 24576, MADV_DONTNEED) = 0 [pid 
20022] madvise(0x7f55b5a8c000, 110592, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55943cf000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5982000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5908000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594b35000, 73728, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594b25000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594689000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559466a000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594a40000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594582000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59cb000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a66000, 106496, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a85000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944bd000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59bd000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5927000, 28672, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559464c000, 98304, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559419b000, 200704, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5940000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5920000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b597a000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b596f000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5900000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58db000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5afd000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a02000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6487000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6481000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b0b000, 4096, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5942000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5c0c000, 20480, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59e0000, 106496, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944e0000, 94208, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559491f000, 147456, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594865000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559473a000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944af000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5954000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559452e000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55949e0000, 98304, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594408000, 540672, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a25000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944d4000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594558000, 151552, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a3d000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59d0000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55944c5000, 53248, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5ad3000, 57344, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559453c000, 94208, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594507000, 90112, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b595d000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5a49000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b598e000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f559458b000, 212992, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b649f000, 446464, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55945cb000, 81920, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b63e6000, 12288, MADV_DONTNEED) = 0 [pid 20022] 
madvise(0x7f55b63ea000, 593920, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b1d000, 593920, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58a9000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b585a000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b597e000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b586d000, 12288, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b587d000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5aee000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b59a5000, 49152, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5915000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b7047000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b587a000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5876000, 8192, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b7052000, 32768, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5871000, 12288, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b702e000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f5594497000, 86016, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b585d000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6516000, 16384, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b582d000, 24576, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b61c0000, 753664, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b627e000, 1470464, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5b11000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5834000, 143360, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5894000, 69632, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b58b9000, 110592, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5882000, 61440, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b7001000, 36864, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b6521000, 40960, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5baf000, 335872, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b700d000, 122880, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b55ff000, 2281472, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b705f000, 1708032, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b5c25000, 5873664, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b652f000, 2949120, MADV_DONTNEED) = 0 [pid 20022] madvise(0x7f55b75ff000, 8335360, MADV_DONTNEED) = 0 [pid 20022] exit(0) = ? [pid 20022] +++ exited with 0 +++ <... futex resumed> ) = 0 epoll_ctl(5, EPOLL_CTL_DEL, 6, NULL) = -1 EBADF (Bad file descriptor) close(5) = 0 exit_group(0) = ? +++ exited with 0 +++ cbeverlin@sjc04p1kubhv36:~$ sudo strace -f -p 19978 strace: attach: ptrace(PTRACE_SEIZE, 19978): No such process ``` **Expected behavior** Runs just fine, healthchecks pass **Your Environment** <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: 1.6.8 * Configuration: ``` customParsers: | [PARSER] Name apache Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? 
(?<message>.*)$ [PARSER] Name nginx Format regex Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] # http://rubular.com/r/tjUt3Awgg4 Name cri Format regex Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$ Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L%z [PARSER] Name syslog Format regex Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$ Time_Key time Time_Format %b %d %H:%M:%S filters: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Merge_Log On Keep_Log Off Buffer_Size 0 tls.debug 3 [FILTER] Name modify Match * Set LoggingAgent fluent-bit Set KubeClusterName ${CLUSTER_NAME} Set KubeClusterAbbreviation ${CLUSTER_NAME} Set KubeClusterRegion ${CLUSTER_REGION} Set KubeHostname ${HOST_NAME} Set KubeHostIP ${HOST_IP} [FILTER] Name nest Match * Operation nest Wildcard * Nest_under event [FILTER] Name modify Match * Set index ${SPLUNK_INDEX} inputs: | [INPUT] Name tail Path /var/log/containers/*.log Parser docker Tag kube.* Refresh_Interval 5 Mem_Buf_Limit 2G Read_from_Head on DB /var/log/flb_kube.db DB.Sync Off DB.locking true Skip_Long_Lines On Docker_Mode On [INPUT] Name systemd Tag host.* Path /var/log/journal DB /var/log/flb_systemd.db Strip_Underscores true Systemd_Filter _SYSTEMD_UNIT=kubelet.service Systemd_Filter _SYSTEMD_UNIT=docker.service Systemd_Filter _SYSTEMD_UNIT=containerd.service outputs: | [OUTPUT] Name splunk Match * Host ${SPLUNK_HOST} Splunk_Token ${SPLUNK_TOKEN} Port 443 TLS On TLS.Verify Off Splunk_Send_Raw On tls.debug 3 service: | [SERVICE] HTTP_Server On Config_Watch Off HTTP_Listen 0.0.0.0 HTTP_PORT 2020 Flush 1 Daemon Off Log_Level info Parsers_File parsers.conf Parsers_File custom_parsers.conf ``` * Environment name and version (e.g. Kubernetes? What version?): 1.12, 1.15.11, 1.16.11 and others * Server type and version: Ubuntu server
process
upgrading to from we see intermittent shutdown and ebadf bad file descriptor bug report describe the bug after upgrading to from we see intermittent shutdown of pods and healthcheck failures to reproduce example log message if applicable strace madvise madv dontneed exit exited with close close close close close close close ebadf bad file descriptor close close close close close close epoll ctl epoll ctl del null close close close madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed madvise madv dontneed exit exited with epoll ctl epoll ctl del null ebadf bad file descriptor close exit group exited with cbeverlin sudo strace f p strace attach ptrace ptrace seize no such 
process expected behavior runs just fine healthchecks pass your environment version used configuration customparsers name apache format regex regex s s time key time time format d b y h m s z name format regex regex s s time key time time format d b y h m s z name apache error format regex regex name nginx format regex regex s s time key time time format d b y h m s z name json format json time key time time format d b y h m s z name docker format json time key time time format y m dt h m s l time keep on name cri format regex regex stdout stderr time key time time format y m dt h m s l z name syslog format regex regex time key time time format b d h m s filters name kubernetes match kube kube url merge log on keep log off buffer size tls debug name modify match set loggingagent fluent bit set kubeclustername cluster name set kubeclusterabbreviation cluster name set kubeclusterregion cluster region set kubehostname host name set kubehostip host ip name nest match operation nest wildcard nest under event name modify match set index splunk index inputs name tail path var log containers log parser docker tag kube refresh interval mem buf limit read from head on db var log flb kube db db sync off db locking true skip long lines on docker mode on name systemd tag host path var log journal db var log flb systemd db strip underscores true systemd filter systemd unit kubelet service systemd filter systemd unit docker service systemd filter systemd unit containerd service outputs name splunk match host splunk host splunk token splunk token port tls on tls verify off splunk send raw on tls debug service http server on config watch off http listen http port flush daemon off log level info parsers file parsers conf parsers file custom parsers conf environment name and version e g kubernetes what version and others server type and version ubuntu server
1
339,474
24,620,705,550
IssuesEvent
2022-10-15 22:13:37
stonejfg/MISO-PRUEBAS-AUTOMATIZADAS-GHOST
https://api.github.com/repos/stonejfg/MISO-PRUEBAS-AUTOMATIZADAS-GHOST
closed
Run an exploratory test to identify INCI_001
documentation
Name: Run an exploratory test on the AddMember section in GHOST Description: Exploratory tests must be performed on the AddMember module of the GHOST application in order to identify possible inconsistencies. The inconsistency found will have INCI_001 as its identifier and must be documented in the Incident Report Template.
1.0
Run an exploratory test to identify INCI_001 - Name: Run an exploratory test on the AddMember section in GHOST Description: Exploratory tests must be performed on the AddMember module of the GHOST application in order to identify possible inconsistencies. The inconsistency found will have INCI_001 as its identifier and must be documented in the Incident Report Template.
non_process
run an exploratory test to identify inci name run an exploratory test on the addmember section in ghost description exploratory tests must be performed on the addmember module of the ghost application in order to identify possible inconsistencies the inconsistency found will have inci as its identifier and must be documented in the incident report template
0
2,235
4,972,195,404
IssuesEvent
2016-12-05 20:51:54
poldracklab/fmriprep
https://api.github.com/repos/poldracklab/fmriprep
closed
Argument list too long
beast compatibility bug
When running fslmerge with 721 volumes: ``` File "/root/src/nipype/nipype/pipeline/plugins/base.py", line 249, in run result=result)) File "/root/src/nipype/nipype/pipeline/plugins/base.py", line 294, in _clean_queue raise RuntimeError("".join(result['traceback'])) RuntimeError: Traceback (most recent call last): File "/root/src/nipype/nipype/pipeline/plugins/multiproc.py", line 52, in run_node result['result'] = node.run(updatehash=updatehash) File "/root/src/nipype/nipype/pipeline/engine/nodes.py", line 367, in run self._run_interface() File "/root/src/nipype/nipype/pipeline/engine/nodes.py", line 477, in _run_interface self._result = self._run_command(execute) File "/root/src/nipype/nipype/pipeline/engine/nodes.py", line 607, in _run_command result = self._interface.run() File "/root/src/nipype/nipype/interfaces/base.py", line 1085, in run runtime = self._run_wrapper(runtime) File "/root/src/nipype/nipype/interfaces/base.py", line 1728, in _run_wrapper runtime = self._run_interface(runtime) File "/root/src/nipype/nipype/interfaces/base.py", line 1759, in _run_interface redirect_x=self._redirect_x) File "/root/src/nipype/nipype/interfaces/base.py", line 1461, in run_command env=runtime.environ) File "/usr/local/miniconda/lib/python3.5/subprocess.py", line 947, in __init__ restore_signals, start_new_session) File "/usr/local/miniconda/lib/python3.5/subprocess.py", line 1551, in _execute_child raise child_exception_type(errno_num, err_msg) OSError: [Errno 7] Argument list too long ``` We should use nibabel or nilearn to do the merges.
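The closing suggestion — merge in-process with nibabel or nilearn instead of shelling out to fslmerge — sidesteps the kernel's argv/environment size limit (Errno 7, E2BIG, is exactly "Argument list too long"), because no 721-path command line is ever built. A minimal sketch with nibabel; the wrapper function is an assumption for illustration, not fmriprep's actual fix:

```python
# In-process merge of many 3D NIfTI volumes along a new 4th axis,
# equivalent in spirit to `fslmerge -t`; no subprocess command line is
# built, so E2BIG cannot occur.
import nibabel as nib
from nibabel.funcs import concat_images

def merge_volumes(in_files, out_file):
    merged = concat_images([nib.load(f) for f in in_files],
                           check_affines=True)  # refuse mismatched grids
    merged.to_filename(out_file)
    return out_file
```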
True
Argument list too long - When running fslmerge with 721 volumes: ``` File "/root/src/nipype/nipype/pipeline/plugins/base.py", line 249, in run result=result)) File "/root/src/nipype/nipype/pipeline/plugins/base.py", line 294, in _clean_queue raise RuntimeError("".join(result['traceback'])) RuntimeError: Traceback (most recent call last): File "/root/src/nipype/nipype/pipeline/plugins/multiproc.py", line 52, in run_node result['result'] = node.run(updatehash=updatehash) File "/root/src/nipype/nipype/pipeline/engine/nodes.py", line 367, in run self._run_interface() File "/root/src/nipype/nipype/pipeline/engine/nodes.py", line 477, in _run_interface self._result = self._run_command(execute) File "/root/src/nipype/nipype/pipeline/engine/nodes.py", line 607, in _run_command result = self._interface.run() File "/root/src/nipype/nipype/interfaces/base.py", line 1085, in run runtime = self._run_wrapper(runtime) File "/root/src/nipype/nipype/interfaces/base.py", line 1728, in _run_wrapper runtime = self._run_interface(runtime) File "/root/src/nipype/nipype/interfaces/base.py", line 1759, in _run_interface redirect_x=self._redirect_x) File "/root/src/nipype/nipype/interfaces/base.py", line 1461, in run_command env=runtime.environ) File "/usr/local/miniconda/lib/python3.5/subprocess.py", line 947, in __init__ restore_signals, start_new_session) File "/usr/local/miniconda/lib/python3.5/subprocess.py", line 1551, in _execute_child raise child_exception_type(errno_num, err_msg) OSError: [Errno 7] Argument list too long ``` We should use nibabel or nilearn to do the merges.
non_process
argument list too long when running fslmerge with volumes file root src nipype nipype pipeline plugins base py line in run result result file root src nipype nipype pipeline plugins base py line in clean queue raise runtimeerror join result runtimeerror traceback most recent call last file root src nipype nipype pipeline plugins multiproc py line in run node result node run updatehash updatehash file root src nipype nipype pipeline engine nodes py line in run self run interface file root src nipype nipype pipeline engine nodes py line in run interface self result self run command execute file root src nipype nipype pipeline engine nodes py line in run command result self interface run file root src nipype nipype interfaces base py line in run runtime self run wrapper runtime file root src nipype nipype interfaces base py line in run wrapper runtime self run interface runtime file root src nipype nipype interfaces base py line in run interface redirect x self redirect x file root src nipype nipype interfaces base py line in run command env runtime environ file usr local miniconda lib subprocess py line in init restore signals start new session file usr local miniconda lib subprocess py line in execute child raise child exception type errno num err msg oserror argument list too long we should use nibabel or nilearn to do the merges
0
713,655
24,534,633,886
IssuesEvent
2022-10-11 19:32:34
DSpace/dspace-angular
https://api.github.com/repos/DSpace/dspace-angular
closed
dc.type facet not visible on MyDSpace page
bug component: Discovery high priority component: MyDSpace Estimate TBD
**dc.type facet missing on MyDSpace page** I noticed that the dc.type facet is not among the sidebar facets of the My DSpace page [1], even though it is included in discovery.xml [2],[3] and there are actually items with a dc.type field in the workspace / workflow. ![grafik](https://user-images.githubusercontent.com/67266996/135588332-d4e3f0c3-1d04-4c29-b6ff-f46f4df84a95.png) Screenshot of workspace view At some point I got a glimpse of the facet (very shortly) and then it vanished. [1] https://demo7.dspace.org/mydspace?configuration=workspace [2] Workspace view https://github.com/DSpace/DSpace/blob/main/dspace/config/spring/api/discovery.xml#L661 [3] Workflow view https://github.com/DSpace/DSpace/blob/main/dspace/config/spring/api/discovery.xml#L734 **To Reproduce** 1. Log in as DSpace Submitter. 2. Go to MyDSpace Page 3. Create a submission with a dc.type field (e.g. article) and save it. 4. Check whether you can see the dc.type facet https://demo7.dspace.org/mydspace?configuration=workspace 5. Create a submission with a dc.type field and submit it. 6. Check whether you can see the dc.type facet https://demo7.dspace.org/mydspace?configuration=workflow **Expected behavior** The dc.type facet should be visible.
1.0
dc.type facet not visible on MyDSpace page - **dc.type facet missing on MyDSpace page** I noticed that the dc.type facet is not among the sidebar facets of the My DSpace page [1], even though it is included in discovery.xml [2],[3] and there are actually items with a dc.type field in the workspace / workflow. ![grafik](https://user-images.githubusercontent.com/67266996/135588332-d4e3f0c3-1d04-4c29-b6ff-f46f4df84a95.png) Screenshot of workspace view At some point I got a glimpse of the facet (very shortly) and then it vanished. [1] https://demo7.dspace.org/mydspace?configuration=workspace [2] Workspace view https://github.com/DSpace/DSpace/blob/main/dspace/config/spring/api/discovery.xml#L661 [3] Workflow view https://github.com/DSpace/DSpace/blob/main/dspace/config/spring/api/discovery.xml#L734 **To Reproduce** 1. Log in as DSpace Submitter. 2. Go to MyDSpace Page 3. Create a submission with a dc.type field (e.g. article) and save it. 4. Check whether you can see the dc.type facet https://demo7.dspace.org/mydspace?configuration=workspace 5. Create a submission with a dc.type field and submit it. 6. Check whether you can see the dc.type facet https://demo7.dspace.org/mydspace?configuration=workflow **Expected behavior** The dc.type facet should be visible.
non_process
dc type facet not visible on mydspace page dc type facet missing on mydspace page i noticed that the dc type facet is not among the sidebar facets of the my dspace page even though it is included in discovery xml and there are actually items with a dc type field in the workspace workflow screenshot of workspace view at some point i got a glimpse of the facet very shortly and then it vanished workspace view workflow view to reproduce log in as dspace submitter go to mydspace page create a submission with a dc type field e g article and save it check whether you can see the dc type facet create a submission with a dc type field and submit it check whether you can see the dc type facet expected behavior the dc type facet should be visible
0
68,611
8,310,376,902
IssuesEvent
2018-09-24 10:30:02
decidim/decidim
https://api.github.com/repos/decidim/decidim
opened
Usability tests improvements
status: Needs-definition status: design required status: needs feedback
After reviewing the results from usability tests, here are some things I would face. They are selected based on my perception of the size of the problem they imply, impact, and ease of improvement. Please comment and complement the list, so we can create a final list. - Add a filter in search results to filter based on a process status: or just Active-All, or the full 4 states we have in /processes (active, past, next, all) - Show more clearly the phases / actions that can be done now (related to System help / Process intro #4136) - Improve comments actions: 1) substitute hand icons for the up-down arrows that are used when showing this supports, and 2) add tooltips on the actions to explain what this action mean - Add a full "view all" button below the first 4 items shown in a space page (meetings, proposals...) so users understand that there are more (now the view all link may not be seen) - Search engine: Organize information in blocks/tabs per item type - Improve the distinction between official and citizen proposals/content - "What affects me" - some kind of call to action to search on the system about processes, meetings, proposals, etc. that affects someone in particular, based on location, interests... (definition pending, if we think this is worth the effort, we can develop) - Support for calendar export - just add ics to be able to add a meeting info into an external calendar. cc @decidim/product
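Of the items above, the calendar-export one is mechanical enough to pin down: "just add ics" means serving each meeting as an iCalendar event. A hedged sketch in Python with the icalendar package (Decidim itself is Ruby on Rails, so the function and field names here are illustrative only, not Decidim's schema):

```python
# Illustrative .ics generation for a single meeting.
from datetime import datetime, timezone
from icalendar import Calendar, Event

def meeting_to_ics(title, starts_at, ends_at, location):
    cal = Calendar()
    cal.add("prodid", "-//example//meetings//EN")  # placeholder product id
    cal.add("version", "2.0")
    ev = Event()
    ev.add("summary", title)
    ev.add("dtstart", starts_at)
    ev.add("dtend", ends_at)
    ev.add("location", location)
    cal.add_component(ev)
    return cal.to_ical()  # bytes, served as text/calendar

ics = meeting_to_ics("Budget assembly",
                     datetime(2018, 10, 1, 18, 0, tzinfo=timezone.utc),
                     datetime(2018, 10, 1, 20, 0, tzinfo=timezone.utc),
                     "Town hall")
```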
1.0
Usability tests improvements - After reviewing the results from usability tests, here are some things I would face. They are selected based on my perception of the size of the problem they imply, impact, and ease of improvement. Please comment and complement the list, so we can create a final list. - Add a filter in search results to filter based on a process status: or just Active-All, or the full 4 states we have in /processes (active, past, next, all) - Show more clearly the phases / actions that can be done now (related to System help / Process intro #4136) - Improve comments actions: 1) substitute hand icons for the up-down arrows that are used when showing this supports, and 2) add tooltips on the actions to explain what this action mean - Add a full "view all" button below the first 4 items shown in a space page (meetings, proposals...) so users understand that there are more (now the view all link may not be seen) - Search engine: Organize information in blocks/tabs per item type - Improve the distinction between official and citizen proposals/content - "What affects me" - some kind of call to action to search on the system about processes, meetings, proposals, etc. that affects someone in particular, based on location, interests... (definition pending, if we think this is worth the effort, we can develop) - Support for calendar export - just add ics to be able to add a meeting info into an external calendar. cc @decidim/product
non_process
usability tests improvements after reviewing the results from usability tests here are some things i would face they are selected based on my perception of the size of the problem they imply impact and ease of improvement please comment and complement the list so we can create a final list add a filter in search results to filter based on a process status or just active all or the full states we have in processes active past next all show more clearly the phases actions that can be done now related to system help process intro improve comments actions substitute hand icons for the up down arrows that are used when showing this supports and add tooltips on the actions to explain what this action mean add a full view all button below the first items shown in a space page meetings proposals so users understand that there are more now the view all link may not be seen search engine organize information in blocks tabs per item type improve the distinction between official and citizen proposals content what affects me some kind of call to action to search on the system about processes meetings proposals etc that affects someone in particular based on location interests definition pending if we think this is worth the effort we can develop support for calendar export just add ics to be able to add a meeting info into an external calendar cc decidim product
0
278,197
8,637,994,551
IssuesEvent
2018-11-23 13:17:47
geosolutions-it/MapStore2
https://api.github.com/repos/geosolutions-it/MapStore2
closed
Widgets: Advanced chart options
Priority: Medium Widgets backlog enhancement
### Description Some things in the charts should be configurable. - [x] hide/ display background grid (for bar/line charts) - [x] Optional Y axis labels - [x] allow customization of y axis name. By default is something like `Count(attribute_name)`. A collapsible panel with advanced options for charts should be present to allow these config. All these suggestions come from feedback from C040
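To make the three requested options concrete, here is what they map to in plotly axis terms (MapStore's widgets are React components, so this Python/plotly sketch only illustrates the requested semantics and is not MapStore code):

```python
# The three toggles from the list above, expressed as y-axis options.
import plotly.graph_objects as go

fig = go.Figure(go.Bar(x=["A", "B", "C"], y=[3, 5, 2]))
fig.update_yaxes(
    showgrid=False,          # hide/display background grid
    showticklabels=True,     # optional Y axis labels
    title_text="Incidents",  # custom name instead of Count(attribute_name)
)
fig.update_xaxes(showgrid=False)  # bar/line charts share the grid toggle
```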
1.0
Widgets: Advanced chart options - ### Description Some things in the charts should be configurable. - [x] hide/ display background grid (for bar/line charts) - [x] Optional Y axis labels - [x] allow customization of y axis name. By default is something like `Count(attribute_name)`. A collapsible panel with advanced options for charts should be present to allow these config. All these suggestions come from feedback from C040
non_process
widgets advanced chart options description some things in the charts should be configurable hide display background grid for bar line charts optional y axis labels allow customization of y axis name by default is something like count attribute name a collapsible panel with advanced options for charts should be present to allow these config all these suggestions come from feedback from
0
19,342
25,477,202,482
IssuesEvent
2022-11-25 15:37:11
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [GCI] Add admins > Pop up > UI issues
Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
UI issues Pop up while adding non-organization admins in Participant manager 1. 'cancel' button text should be changed to 'Cancel' 2. 'confirm' button text should be changed to 'Confirm' 3. Remove cross mark and Upper line present above the text 4. Margins should be present before and after the text 5. Buttons should be left-aligned on the popup (to maintain consistency across the application) 6. 'Confirm' button should come first and after that 'Cancel' button should be displayed (to maintain consistency across the application) [Note: Refer 'Edit enrollment target' popup ![image](https://user-images.githubusercontent.com/71445210/146887497-5b92280c-43ec-480b-a137-d1cd24688fb0.png)
3.0
[PM] [GCI] Add admins > Pop up > UI issues - UI issues Pop up while adding non-organization admins in Participant manager 1. 'cancel' button text should be changed to 'Cancel' 2. 'confirm' button text should be changed to 'Confirm' 3. Remove cross mark and Upper line present above the text 4. Margins should be present before and after the text 5. Buttons should be left-aligned on the popup (to maintain consistency across the application) 6. 'Confirm' button should come first and after that 'Cancel' button should be displayed (to maintain consistency across the application) [Note: Refer 'Edit enrollment target' popup ![image](https://user-images.githubusercontent.com/71445210/146887497-5b92280c-43ec-480b-a137-d1cd24688fb0.png)
process
add admins pop up ui issues ui issues pop up while adding non organization admins in participant manager cancel button text should be changed to cancel confirm button text should be changed to confirm remove cross mark and upper line present above the text margins should be present before and after the text buttons should be left aligned on the popup to maintain consistency across the application confirm button should come first and after that cancel button should be displayed to maintain consistency across the application note refer edit enrollment target popup
1
158
2,582,323,555
IssuesEvent
2015-02-15 03:27:27
GsDevKit/flow
https://api.github.com/repos/GsDevKit/flow
closed
Get flow tests passing with new gem server code
in process
tests have been failing randomly ... likely to be gem server integration issues rather than bugs in the flow code
1.0
Get flow tests passing with new gem server code - tests have been failing randomly ... likely to be gem server integration issues rather than bugs in the flow code
process
get flow tests passing with new gem server code tests have been failing randomly likely to be gem server integration issues rather than bugs in the flow code
1
7,809
10,962,958,941
IssuesEvent
2019-11-27 18:28:39
shirou/gopsutil
https://api.github.com/repos/shirou/gopsutil
closed
bug: data race seen in v2.19.9
os:darwin package:process
**Describe the bug** Got a data race in our agent using the package. **To Reproduce** ```go p, err := process.NewProcess(int32(myPid)) if err != nil { return 0, 0, 0.0, err } mem, err := p.MemoryInfo() if err != nil { return 0, 0, err } ``` **Expected behavior** No data race should be seen **Environment (please complete the following information):** ```console ProductName: Mac OS X ProductVersion: 10.14.6 BuildVersion: 18G95 ``` ```console Darwin Sibis-MacBook-Pro.local 18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64 ``` **Additional context** Data race stack ```console ================== WARNING: DATA RACE Read at 0x00c007157050 by goroutine 192: go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CreateTimeWithContext() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:254 +0x4a go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CreateTime() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:250 +0x65 Previous write at 0x00c007157050 by goroutine 180: go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CreateTimeWithContext() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:258 +0xe8 go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CPUPercentWithContext() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:250 +0x7a go.aporeto.io/enforcerd/internal/lifecycle.getCPUandMemory() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:294 +0xc1 go.aporeto.io/enforcerd/internal/lifecycle.collectStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:267 +0xc7 go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).sendEnforcerStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:197 +0x6a go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).runHeartbeat() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:124 +0x20c Goroutine 192 (running) created at: go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.NewProcess() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:167 +0x117 go.aporeto.io/enforcerd/internal/lifecycle.collectStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:262 +0x84 go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).sendEnforcerStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:197 +0x6a go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).runHeartbeat() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:124 +0x20c Goroutine 180 (running) created at: go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).Run() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:93 +0x2d2 go.aporeto.io/enforcerd/internal/entrypoint/run.run() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/entrypoint/run/run.go:596 +0x39cf go.aporeto.io/enforcerd/internal/entrypoint/run.Enforcer() 
/tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/entrypoint/run/enforcer.go:135 +0x158 go.aporeto.io/enforcerd/internal/entrypoint.Enforcer() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/entrypoint/cmd.go:110 +0xab8 go.aporeto.io/enforcerd/internal/command.glob..func2() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/command/cmd.go:369 +0xaa3 go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra.(*Command).execute() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra/command.go:826 +0x527 go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra.(*Command).ExecuteC() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra/command.go:914 +0x41b go.aporeto.io/enforcerd/internal/command.Execute() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra/command.go:864 +0x78 go.aporeto.io/enforcerd.main() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/main.go:11 +0x47 go.aporeto.io/enforcerd.TestRunMain() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/main_test.go:18 +0xd3 testing.tRunner() /usr/local/go/src/testing/testing.go:865 +0x163 ```
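The traceback shows the classic unsynchronized lazy-cache shape: CreateTimeWithContext reads the cached create time on one goroutine (process.go:254) while another goroutine writes it (process.go:258). A minimal sketch of that pattern and its lock-based repair — in Python rather than Go, with illustrative names, not gopsutil's code:

```python
# Unsynchronized "check, fill, return" on a shared field is a data race;
# serializing the whole sequence with a lock removes it.
import threading
import time

class Proc:
    def __init__(self):
        self._lock = threading.Lock()
        self._create_time = None  # stand-in for gopsutil's cached createTime

    def create_time(self):
        with self._lock:  # one thread at a time through check-and-fill
            if self._create_time is None:
                self._create_time = time.time()  # stand-in for the OS lookup
            return self._create_time
```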
1.0
bug: data race seen in v2.19.9 - **Describe the bug** Got a data race in our agent using the package. **To Reproduce** ```go p, err := process.NewProcess(int32(myPid)) if err != nil { return 0, 0, 0.0, err } mem, err := p.MemoryInfo() if err != nil { return 0, 0, err } ``` **Expected behavior** No data race should be seen **Environment (please complete the following information):** ```console ProductName: Mac OS X ProductVersion: 10.14.6 BuildVersion: 18G95 ``` ```console Darwin Sibis-MacBook-Pro.local 18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64 ``` **Additional context** Data race stack ```console ================== WARNING: DATA RACE Read at 0x00c007157050 by goroutine 192: go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CreateTimeWithContext() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:254 +0x4a go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CreateTime() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:250 +0x65 Previous write at 0x00c007157050 by goroutine 180: go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CreateTimeWithContext() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:258 +0xe8 go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.(*Process).CPUPercentWithContext() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:250 +0x7a go.aporeto.io/enforcerd/internal/lifecycle.getCPUandMemory() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:294 +0xc1 go.aporeto.io/enforcerd/internal/lifecycle.collectStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:267 +0xc7 go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).sendEnforcerStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:197 +0x6a go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).runHeartbeat() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:124 +0x20c Goroutine 192 (running) created at: go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process.NewProcess() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/shirou/gopsutil/process/process.go:167 +0x117 go.aporeto.io/enforcerd/internal/lifecycle.collectStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:262 +0x84 go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).sendEnforcerStats() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:197 +0x6a go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).runHeartbeat() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:124 +0x20c Goroutine 180 (running) created at: go.aporeto.io/enforcerd/internal/lifecycle.(*EnforcerLifecycleManager).Run() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/lifecycle/lifecycle.go:93 +0x2d2 go.aporeto.io/enforcerd/internal/entrypoint/run.run() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/entrypoint/run/run.go:596 +0x39cf go.aporeto.io/enforcerd/internal/entrypoint/run.Enforcer() 
/tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/entrypoint/run/enforcer.go:135 +0x158 go.aporeto.io/enforcerd/internal/entrypoint.Enforcer() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/entrypoint/cmd.go:110 +0xab8 go.aporeto.io/enforcerd/internal/command.glob..func2() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/internal/command/cmd.go:369 +0xaa3 go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra.(*Command).execute() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra/command.go:826 +0x527 go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra.(*Command).ExecuteC() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra/command.go:914 +0x41b go.aporeto.io/enforcerd/internal/command.Execute() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/vendor/github.com/spf13/cobra/command.go:864 +0x78 go.aporeto.io/enforcerd.main() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/main.go:11 +0x47 go.aporeto.io/enforcerd.TestRunMain() /tmp/build/22a7bc40/go/src/go.aporeto.io/enforcerd/main_test.go:18 +0xd3 testing.tRunner() /usr/local/go/src/testing/testing.go:865 +0x163 ```
process
bug data race seen in describe the bug got a data race in our agent using the package to reproduce go p err process newprocess mypid if err nil return err mem err p memoryinfo if err nil return err expected behavior no data race should be seen environment please complete the following information console productname mac os x productversion buildversion console darwin sibis macbook pro local darwin kernel version tue aug pdt root xnu release additional context data race stack console warning data race read at by goroutine go aporeto io enforcerd vendor github com shirou gopsutil process process createtimewithcontext tmp build go src go aporeto io enforcerd vendor github com shirou gopsutil process process go go aporeto io enforcerd vendor github com shirou gopsutil process process createtime tmp build go src go aporeto io enforcerd vendor github com shirou gopsutil process process go previous write at by goroutine go aporeto io enforcerd vendor github com shirou gopsutil process process createtimewithcontext tmp build go src go aporeto io enforcerd vendor github com shirou gopsutil process process go go aporeto io enforcerd vendor github com shirou gopsutil process process cpupercentwithcontext tmp build go src go aporeto io enforcerd vendor github com shirou gopsutil process process go go aporeto io enforcerd internal lifecycle getcpuandmemory tmp build go src go aporeto io enforcerd vendor github com shirou gopsutil process process go go aporeto io enforcerd internal lifecycle collectstats tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go go aporeto io enforcerd internal lifecycle enforcerlifecyclemanager sendenforcerstats tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go go aporeto io enforcerd internal lifecycle enforcerlifecyclemanager runheartbeat tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go goroutine running created at go aporeto io enforcerd vendor github com shirou gopsutil process newprocess tmp build go src go aporeto io enforcerd vendor github com shirou gopsutil process process go go aporeto io enforcerd internal lifecycle collectstats tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go go aporeto io enforcerd internal lifecycle enforcerlifecyclemanager sendenforcerstats tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go go aporeto io enforcerd internal lifecycle enforcerlifecyclemanager runheartbeat tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go goroutine running created at go aporeto io enforcerd internal lifecycle enforcerlifecyclemanager run tmp build go src go aporeto io enforcerd internal lifecycle lifecycle go go aporeto io enforcerd internal entrypoint run run tmp build go src go aporeto io enforcerd internal entrypoint run run go go aporeto io enforcerd internal entrypoint run enforcer tmp build go src go aporeto io enforcerd internal entrypoint run enforcer go go aporeto io enforcerd internal entrypoint enforcer tmp build go src go aporeto io enforcerd internal entrypoint cmd go go aporeto io enforcerd internal command glob tmp build go src go aporeto io enforcerd internal command cmd go go aporeto io enforcerd vendor github com cobra command execute tmp build go src go aporeto io enforcerd vendor github com cobra command go go aporeto io enforcerd vendor github com cobra command executec tmp build go src go aporeto io enforcerd vendor github com cobra command go go aporeto io enforcerd internal command execute tmp build go src 
go aporeto io enforcerd vendor github com cobra command go go aporeto io enforcerd main tmp build go src go aporeto io enforcerd main go go aporeto io enforcerd testrunmain tmp build go src go aporeto io enforcerd main test go testing trunner usr local go src testing testing go
1
15,643
19,846,030,570
IssuesEvent
2022-01-21 06:27:34
ooi-data/CE07SHSP-SP001-05-NUTNRJ000-recovered_cspp-nutnr_j_cspp_instrument_recovered
https://api.github.com/repos/ooi-data/CE07SHSP-SP001-05-NUTNRJ000-recovered_cspp-nutnr_j_cspp_instrument_recovered
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T06:27:33.782653. ## Details Flow name: `CE07SHSP-SP001-05-NUTNRJ000-recovered_cspp-nutnr_j_cspp_instrument_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data, dtype=dtype) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T06:27:33.782653. ## Details Flow name: `CE07SHSP-SP001-05-NUTNRJ000-recovered_cspp-nutnr_j_cspp_instrument_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = 
np.asarray(data, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered cspp nutnr j cspp instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out 
out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
1
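The final frame in the traceback above, `zip(*indexer)`, raises exactly this `ValueError` when the indexer yields no chunk projections: `zip()` over nothing produces zero tuples, so unpacking into three names fails. A minimal sketch reproducing the failure, with a hypothetical guard (not zarr's actual fix); the empty `indexer` here stands in for a zero-length selection being appended:

```python
# zip(*iterable) over an empty iterable yields nothing, so unpacking
# into three names raises the ValueError seen in the traceback above.
indexer = []  # hypothetical: an indexer producing no chunk projections

try:
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 0)

# Hypothetical guard: fall back to empty selections instead of raising.
parts = list(zip(*indexer))
lchunk_coords, lchunk_selection, lout_selection = parts or ((), (), ())
```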
14,872
18,281,568,848
IssuesEvent
2021-10-05 04:34:18
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
closed
Parameter named 'Object' incorrectly triggers ParameterNotUsedInspection
bug parse-tree-processing has-workaround edge-case resolver
Version 2.5.0.5244 OS: Microsoft Windows NT 10.0.18363.0, x64 Host Product: Microsoft Office x86 Host Version: 16.0.12410.20000 Host Executable: EXCEL.EXE **Description** A function/property parameter named 'Object' triggers ParameterNotUsedInspection, even though the parameter is used. Changing the name from 'Object' to anything else, e.g. 'ObjectX', does not trigger the inspection. **To Reproduce** ' this (incorrectly) triggers ParameterNotUsedInspection: Public Function Foo(Object As Object) As Object Set Foo = Object End Function ' this (correctly) does not trigger ParameterNotUsedInspection: Public Function Foo(Objectx As Object) As Object Set Foo = Objectx End Function +++ Note that a local variable called 'Object' will also incorrectly trigger a VariableNotUsedInspection
1.0
Parameter named 'Object' incorrectly triggers ParameterNotUsedInspection - Version 2.5.0.5244 OS: Microsoft Windows NT 10.0.18363.0, x64 Host Product: Microsoft Office x86 Host Version: 16.0.12410.20000 Host Executable: EXCEL.EXE **Description** A function/property parameter named 'Object' triggers ParameterNotUsedInspection, even though the parameter is used. Changing the name from 'Object' to anything else, e.g. 'ObjectX', does not trigger the inspection **To Reproduce** ' this (incorrectly) triggers ParameterNotUsedInspection: Public Function Foo(Object As Object) As Object Set Foo = Object End Function ' this (correctly) does not trigger ParameterNotUsedInspection: Public Function Foo(Objectx As Object) As Object Set Foo = Objectx End Function +++ Note that a local variable called 'Object' will also incorrectly trigger a VariableNotUsedInspection
process
parameter named object incorrectly triggers parameternotusedinspection version os microsoft windows nt host product microsoft office host version host executable excel exe description a function property parameter named object triggers parameternotusedinspection even though the parameter is used changing the name from object to anything else e g objectx does not trigger the inspection to reproduce this incorrectly triggers parameternotusedinspection public function foo object as object as object set foo object end function this correctly does not trigger parameternotusedinspection public function foo objectx as object as object set foo objectx end function note that a local variable called object will also incorrectly trigger a variablenotusedinspection
1
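The bug above boils down to a name/type collision: the parameter `Object` shadows the type `Object`, and the inspection's resolver apparently fails to attribute uses of the shadowing name to the parameter. A rough analogy sketched in Python rather than VBA (purely illustrative, not Rubberduck's resolver code): a parameter may legally shadow the builtin type `object`, and an analyzer must bind uses of the name to the parameter, not the type.

```python
# Analogy for the VBA report: the parameter name shadows a type name.
# Inside the function, 'object' must resolve to the parameter, so the
# parameter is clearly used -- an analyzer that bound the name to the
# type instead would wrongly report the parameter as unused.
def foo(object: object) -> object:
    return object

print(foo(42))  # 42
```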
7,554
10,677,417,669
IssuesEvent
2019-10-21 15:23:47
googleapis/nodejs-pubsub
https://api.github.com/repos/googleapis/nodejs-pubsub
closed
begin testing @grpc/grpc-js@0.6.9 as alternative to grpc
type: process
@murgatroid99 believes he's continued to address some bugs in the `@grpc/grpc-js` libraries, based on logs we've been provided while debugging #770. We're holding off on making this a recommendation for all users, as we don't want to continue to create disruptions, and the `grpc` transport is working well. However, if a few folks would like to pilot the latest changes to `@grpc/grpc-js`, we would greatly appreciate the help. The latest version is `0.6.9`; after installing, you should run `npm ls` to make sure other dependencies have not pinned you on an older `@grpc/grpc-js`. CC: @MichaelMarkieta, @Redgwell, @RaptDept, @xoraingroup
1.0
begin testing @grpc/grpc-js@0.6.9 as alternative to grpc - @murgatroid99 believes he's continued to address some bugs in the `@grpc/grpc-js` libraries, based on logs we've been provided while debugging #770. We're holding off on making this a recommendation for all users, as we don't want to continue to create disruptions, and the `grpc` transport is working well. However, if a few folks would like to pilot the latest changes to `@grpc/grpc-js`, we would greatly appreciate the help. The latest version is `0.6.9`; after installing, you should run `npm ls` to make sure other dependencies have not pinned you on an older `@grpc/grpc-js`. CC: @MichaelMarkieta, @Redgwell, @RaptDept, @xoraingroup
process
begin testing grpc grpc js as alternative to grpc believes he s continued to address some bugs in the grpc grpc js libraries based on logs we ve been provided while debugging we re holding off on making this a recommendation for all users as we don t want to continue to create disruptions and the grpc transport is working well however if a few folks would like to pilot the latest changes to grpc grpc js we would greatly appreciate the help the latest version is after installing you should run npm ls to make sure other dependencies have not pinned you on an older grpc grpc js cc michaelmarkieta redgwell raptdept xoraingroup
1
133,424
18,884,472,970
IssuesEvent
2021-11-15 05:26:33
projectcontour/contour
https://api.github.com/repos/projectcontour/contour
closed
The "includes" function in HTTPProxy causes bad life-cycle management
blocked/needs-design area/deployment
## The "includes" function in HTTPProxy causes bad life-cycle management The "includes" can be used to delegate route definitions to sub-objects; ``` apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: kahttp-default spec: virtualhost: fqdn: kahttp.com tls: secretName: contour-secret includes: - name: kahttp-default - name: kahttp-admin ``` In a larger site where route definitions are updates regularly this can be used to allow applications to provide their own route definition and add them to the virtual host when loaded and remove them when un-loaded. **The problem is that the "top" object containing the vhost and the "includes" array must be updated for these operations.** This update operation of the top object requires coordination between otherwise independent applications. To update the top object manually is not feasible in a larger sites. ### Use Case As an operator I want to be able to deploy a applications that adds a route definitions to a vhost with a simple install, e.g. with "helm". When an application is removed it's route definition shall be removed automatically. As an application owner I want to use "secure backend" to protect my traffic. ### Comparison with the Ingress object K8s allows "Ingress" objects to specify the same vhost in multiple instances; ``` apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kahttp-default spec: tls: - hosts: - kahttp.com secretName: contour-secret rules: - host: kahttp.com http: paths: - path: / backend: serviceName: kahttp-ipv4 servicePort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kahttp-admin spec: tls: - hosts: - kahttp.com secretName: contour-secret rules: - host: kahttp.com http: paths: - path: /admin backend: serviceName: kahttp-admin servicePort: 80 ``` Contour handles this nicely. An "admin" application can be deployed (and removed) independently and the route to "/admin" is updated automatically. The vhost can set with a value/parameter in a helm install. But Contour supports backend encryption only for `HTTPProxy`. ### Proposal Allow to specify the relation in the sub-objects, example; ``` apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: kahttp-admin spec: virtualhost: from: - name: kahttp-default namespace: default routes: - conditions: - prefix: /admin services: - name: kahttp-admin port: 443 weight: 1000 validation: caSecret: kahttp-admin-ca subjectName: kahttp-admin.com ``` The "from" field is an array to be compliant with the current implementation where multiple "top" objects can include the same sub-object. There are of course misconfigurations that must be checked but I leave them for the moment because I can't think of anything unsolvable. ### The Canary aspect A spin-off I find really cool is the elegance of which canary testing can be made with this addition. To test a new `kahttp-admin` simply install a canary with something like; ``` apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: kahttp-admin-canary spec: virtualhost: from: - name: kahttp-default namespace: default routes: - conditions: - prefix: /admin services: - name: kahttp-admin-canary port: 443 weight: 100 validation: caSecret: kahttp-admin-ca subjectName: kahttp-admin.com ``` The canary will grab ~10% of the traffic to "/admin". After a test period the canary can simply be removed and traffic goes back to normal. If the canary is ok the normal backend application can be updated.
1.0
The "includes" function in HTTPProxy causes bad life-cycle management - ## The "includes" function in HTTPProxy causes bad life-cycle management The "includes" can be used to delegate route definitions to sub-objects; ``` apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: kahttp-default spec: virtualhost: fqdn: kahttp.com tls: secretName: contour-secret includes: - name: kahttp-default - name: kahttp-admin ``` In a larger site where route definitions are updates regularly this can be used to allow applications to provide their own route definition and add them to the virtual host when loaded and remove them when un-loaded. **The problem is that the "top" object containing the vhost and the "includes" array must be updated for these operations.** This update operation of the top object requires coordination between otherwise independent applications. To update the top object manually is not feasible in a larger sites. ### Use Case As an operator I want to be able to deploy a applications that adds a route definitions to a vhost with a simple install, e.g. with "helm". When an application is removed it's route definition shall be removed automatically. As an application owner I want to use "secure backend" to protect my traffic. ### Comparison with the Ingress object K8s allows "Ingress" objects to specify the same vhost in multiple instances; ``` apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kahttp-default spec: tls: - hosts: - kahttp.com secretName: contour-secret rules: - host: kahttp.com http: paths: - path: / backend: serviceName: kahttp-ipv4 servicePort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kahttp-admin spec: tls: - hosts: - kahttp.com secretName: contour-secret rules: - host: kahttp.com http: paths: - path: /admin backend: serviceName: kahttp-admin servicePort: 80 ``` Contour handles this nicely. An "admin" application can be deployed (and removed) independently and the route to "/admin" is updated automatically. The vhost can set with a value/parameter in a helm install. But Contour supports backend encryption only for `HTTPProxy`. ### Proposal Allow to specify the relation in the sub-objects, example; ``` apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: kahttp-admin spec: virtualhost: from: - name: kahttp-default namespace: default routes: - conditions: - prefix: /admin services: - name: kahttp-admin port: 443 weight: 1000 validation: caSecret: kahttp-admin-ca subjectName: kahttp-admin.com ``` The "from" field is an array to be compliant with the current implementation where multiple "top" objects can include the same sub-object. There are of course misconfigurations that must be checked but I leave them for the moment because I can't think of anything unsolvable. ### The Canary aspect A spin-off I find really cool is the elegance of which canary testing can be made with this addition. To test a new `kahttp-admin` simply install a canary with something like; ``` apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: kahttp-admin-canary spec: virtualhost: from: - name: kahttp-default namespace: default routes: - conditions: - prefix: /admin services: - name: kahttp-admin-canary port: 443 weight: 100 validation: caSecret: kahttp-admin-ca subjectName: kahttp-admin.com ``` The canary will grab ~10% of the traffic to "/admin". After a test period the canary can simply be removed and traffic goes back to normal. If the canary is ok the normal backend application can be updated.
non_process
the includes function in httpproxy causes bad life cycle management the includes function in httpproxy causes bad life cycle management the includes field can be used to delegate route definitions to sub objects apiversion projectcontour io kind httpproxy metadata name kahttp default spec virtualhost fqdn kahttp com tls secretname contour secret includes name kahttp default name kahttp admin in a larger site where route definitions are updated regularly this can be used to allow applications to provide their own route definitions and add them to the virtual host when loaded and remove them when unloaded the problem is that the top object containing the vhost and the includes array must be updated for these operations this update operation of the top object requires coordination between otherwise independent applications updating the top object manually is not feasible in larger sites use case as an operator i want to be able to deploy applications that add their route definitions to a vhost with a simple install e g with helm when an application is removed its route definition shall be removed automatically as an application owner i want to use secure backend to protect my traffic comparison with the ingress object allows ingress objects to specify the same vhost in multiple instances apiversion extensions kind ingress metadata name kahttp default spec tls hosts kahttp com secretname contour secret rules host kahttp com http paths path backend servicename kahttp serviceport apiversion extensions kind ingress metadata name kahttp admin spec tls hosts kahttp com secretname contour secret rules host kahttp com http paths path admin backend servicename kahttp admin serviceport contour handles this nicely an admin application can be deployed and removed independently and the route to admin is updated automatically the vhost can be set with a value parameter in a helm install but contour supports backend encryption only for httpproxy proposal allow specifying the relation in the sub objects for example apiversion projectcontour io kind httpproxy metadata name kahttp admin spec virtualhost from name kahttp default namespace default routes conditions prefix admin services name kahttp admin port weight validation casecret kahttp admin ca subjectname kahttp admin com the from field is an array to be compliant with the current implementation where multiple top objects can include the same sub object there are of course misconfigurations that must be checked but i leave them for the moment because i can t think of anything unsolvable the canary aspect a spin off i find really cool is the elegance with which canary testing can be done with this addition to test a new kahttp admin simply install a canary with something like apiversion projectcontour io kind httpproxy metadata name kahttp admin canary spec virtualhost from name kahttp default namespace default routes conditions prefix admin services name kahttp admin canary port weight validation casecret kahttp admin ca subjectname kahttp admin com the canary will grab of the traffic to admin after a test period the canary can simply be removed and traffic goes back to normal if the canary is ok the normal backend application can be updated
0
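A quick check of the "~10%" claim above, assuming the two route weights combine linearly (the usual weighted-route behaviour), with the numbers taken from the YAML:

```python
# Hedged arithmetic for the canary split described above, assuming
# traffic is divided in proportion to the route weights.
main_weight = 1000    # kahttp-admin
canary_weight = 100   # kahttp-admin-canary

canary_share = canary_weight / (main_weight + canary_weight)
print(f"{canary_share:.1%}")  # 9.1% -- roughly the ~10% quoted above
```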
77,663
10,013,653,012
IssuesEvent
2019-07-15 15:39:05
heptio/velero
https://api.github.com/repos/heptio/velero
closed
Restic Backup - HowTo
Documentation Question Restic
So far the documentation regarding Restic is clear. For reference: https://velero.io/docs/v1.0.0/restic/ But I miss some features/information. Let's start with what I have done already: I've added the annotation for the pods in the deployment: ``` spec: template: metadata: annotations: backup.velero.io/backup-volumes: volume-name-to-backup ``` I triggered a backup: `velero backup create berndklaus-at --include-namespaces berndklaus-at` When performing `velero restic repo get` I can see a repo was added; there was no repo before performing the velero backup command. With `velero backup get` I can see a backup, state: completed. OK, that's fine. Well, now I want to take a new backup of the above things, so I fired: `velero backup create berndklaus-at --include-namespaces berndklaus-at` That does not work, because the backup still exists. OK, I deleted the backup "berndklaus-at", but when looking into my spaces, I can still see the restic part is NOT removed. So I thought there should be a command like "velero restic backup delete", but I did not find a way to remove a restic backup. So a few questions came up: What does the restic repo represent? How can I manage restic backups (delete/recreate/...)? How can I trigger a restic backup only? Does a schedule also start a restic backup? Please excuse typos and/or bad English. Thanks for clarification, BR Bernd
1.0
Restic Backup - HowTo - So far the documentation regarding Restic is clear. For reference: https://velero.io/docs/v1.0.0/restic/ But I miss some features/information. Let's start with what I have done already: I've added the annotation for the pods in the deployment: ``` spec: template: metadata: annotations: backup.velero.io/backup-volumes: volume-name-to-backup ``` I triggered a backup: `velero backup create berndklaus-at --include-namespaces berndklaus-at` When performing `velero restic repo get` I can see a repo was added; there was no repo before performing the velero backup command. With `velero backup get` I can see a backup, state: completed. OK, that's fine. Well, now I want to take a new backup of the above things, so I fired: `velero backup create berndklaus-at --include-namespaces berndklaus-at` That does not work, because the backup still exists. OK, I deleted the backup "berndklaus-at", but when looking into my spaces, I can still see the restic part is NOT removed. So I thought there should be a command like "velero restic backup delete", but I did not find a way to remove a restic backup. So a few questions came up: What does the restic repo represent? How can I manage restic backups (delete/recreate/...)? How can I trigger a restic backup only? Does a schedule also start a restic backup? Please excuse typos and/or bad English. Thanks for clarification, BR Bernd
non_process
restic backup howto so far the documentation regarding restic is clear for reference but i miss some features information let s start with what i have done already i ve added the annotation for the pods in the deployment spec template metadata annotations backup velero io backup volumes volume name to backup i triggered a backup velero backup create berndklaus at include namespaces berndklaus at when performing velero restic repo get i can see a repo was added there was no repo before performing the velero backup command with velero backup get i can see a backup state completed ok that s fine well now i want to take a new backup of the above things so i fired velero backup create berndklaus at include namespaces berndklaus at that does not work because the backup still exists ok i deleted the backup berndklaus at but when looking into my spaces i can still see the restic part is not removed so i thought there should be a command like velero restic backup delete but i did not find a way to remove a restic backup so a few questions came up what does the restic repo represent how can i manage restic backups delete recreate how can i trigger a restic backup only does a schedule also start a restic backup please excuse typos and or bad english thanks for clarification br bernd
0
48,849
13,398,656,442
IssuesEvent
2020-09-03 13:28:00
mixellent/demo-app
https://api.github.com/repos/mixellent/demo-app
opened
CVE-2020-9488 (Low) detected in log4j-core-2.6.1.jar
security vulnerability
## CVE-2020-9488 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.6.1.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Path to dependency file: /tmp/ws-scm/demo-app/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/org/apache/logging/log4j/log4j-core/2.6.1/log4j-core-2.6.1.jar</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.6.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/mixellent/demo-app/commit/e224ec4297971467bc3e91a705b785158ac67da0">e224ec4297971467bc3e91a705b785158ac67da0</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. <p>Publish Date: 2020-04-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488>CVE-2020-9488</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://issues.apache.org/jira/browse/LOG4J2-2819">https://issues.apache.org/jira/browse/LOG4J2-2819</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.13.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.6.1","isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.13.2"}],"vulnerabilityIdentifier":"CVE-2020-9488","vulnerabilityDetails":"Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488","cvss3Severity":"low","cvss3Score":"3.7","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-9488 (Low) detected in log4j-core-2.6.1.jar - ## CVE-2020-9488 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.6.1.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Path to dependency file: /tmp/ws-scm/demo-app/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/org/apache/logging/log4j/log4j-core/2.6.1/log4j-core-2.6.1.jar</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.6.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/mixellent/demo-app/commit/e224ec4297971467bc3e91a705b785158ac67da0">e224ec4297971467bc3e91a705b785158ac67da0</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. <p>Publish Date: 2020-04-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488>CVE-2020-9488</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://issues.apache.org/jira/browse/LOG4J2-2819">https://issues.apache.org/jira/browse/LOG4J2-2819</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.13.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.6.1","isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.13.2"}],"vulnerabilityIdentifier":"CVE-2020-9488","vulnerabilityDetails":"Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488","cvss3Severity":"low","cvss3Score":"3.7","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve low detected in core jar cve low severity vulnerability vulnerable library core jar the apache implementation path to dependency file tmp ws scm demo app pom xml path to vulnerable library canner repository org apache logging core core jar dependency hierarchy x core jar vulnerable library found in head commit a href vulnerability details improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender vulnerabilityurl
0
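The 3.7 base score above follows from the listed metrics via the CVSS v3.0 formula. A minimal sketch of that arithmetic, with coefficient values taken from the CVSS v3.0 specification (Network 0.85, High attack complexity 0.44, None privileges/interaction 0.85, Low confidentiality 0.22):

```python
import math

# CVSS v3.0 base score for AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:N/A:N,
# matching the 3.7 reported above.
C, I, A = 0.22, 0.0, 0.0                  # Confidentiality Low; Integrity/Availability None
AV, AC, PR, UI = 0.85, 0.44, 0.85, 0.85   # Network / High / None / None

isc_base = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * isc_base                  # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place'."""
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 3.7
```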
22,086
30,608,781,658
IssuesEvent
2023-07-23 10:53:31
python/cpython
https://api.github.com/repos/python/cpython
closed
Segmentation fault in test_concurrent_futures and test_compileall when using multiprocessing.Value attribute in multiprocessing.Queue class.
tests type-crash topic-multiprocessing
# Crash report I'm working on a PR (https://github.com/python/cpython/pull/102499) and have a lot of 'Fatal Python error : Segmentation fault' in **Ubuntu** tests. These errors are only on the `multiprocessing.Queue` class and are mainly on the `test_compileall` and `test_concurrent_futures` unit tests. Here are two examples of them: + ./python -m unittest test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest.test_submit_after_interpreter_shutdown, on https://github.com/python/cpython/actions/runs/4461678241/jobs/7835693302?pr=102499#step:17:4833 + ./python -m unittest test.test_compileall.CommandLineTestsWithSourceEpoch.test_workers, on (https://github.com/python/cpython/actions/runs/4461678241/jobs/7835693302?pr=102499#step:17:466) # Error messages Partial result is: ```zsh ../.. stdout: --- runtime-error --- stderr: --- Fatal Python error: Segmentation fault Current thread 0x00007f7db45f34c0 (most recent call first): File "<string>", line 3 in getvalue File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/queues.py", line 113 in get File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/concurrent/futures/process.py", line 246 in _process_worker File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/process.py", line 108 in run File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/process.py", line 314 in _bootstrap File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/spawn.py", line 133 in _main File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/forkserver.py", line 313 in _serve_one File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/forkserver.py", line 274 in main File "<string>", line 1 in <module> ``` From the detail of these references, I'm assuming the problem is in the `get` method of `multiprocessing.Queue`, called from the `ProcessPoolExecutor` function. There is indeed a queue derived from `multiprocessing.Queue` in this function. And in the `get` method, there is a test on an attribute created as a `multiprocessing.Value`. All unit tests dedicated to this class succeed. To implement the change linked to the PR, I have used a `multiprocessing.Value` to share state. Maybe I used it badly? All tests succeed with **OSX** and **Win**. Working on OSX, I have no idea how to help and go further. # Your environment - CPython versions tested on: Python 3.12.0a6+ - Operating system and architecture: Ubuntu
1.0
Segmentation fault in test_concurrent_futures and test_compileall when using multiprocessing.Value attribute in multiprocessing.Queue class. - # Crash report I'm working on a PR (https://github.com/python/cpython/pull/102499) and have a lot of 'Fatal Python error : Segmentation fault' in **Ubuntu** tests. These errors are only on the `multiprocessing.Queue` class and are mainly on the `test_compileall` and `test_concurrent_futures` unit tests. Here are two examples of them: + ./python -m unittest test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest.test_submit_after_interpreter_shutdown, on https://github.com/python/cpython/actions/runs/4461678241/jobs/7835693302?pr=102499#step:17:4833 + ./python -m unittest test.test_compileall.CommandLineTestsWithSourceEpoch.test_workers, on (https://github.com/python/cpython/actions/runs/4461678241/jobs/7835693302?pr=102499#step:17:466) # Error messages Partial result is: ```zsh ../.. stdout: --- runtime-error --- stderr: --- Fatal Python error: Segmentation fault Current thread 0x00007f7db45f34c0 (most recent call first): File "<string>", line 3 in getvalue File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/queues.py", line 113 in get File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/concurrent/futures/process.py", line 246 in _process_worker File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/process.py", line 108 in run File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/process.py", line 314 in _bootstrap File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/spawn.py", line 133 in _main File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/forkserver.py", line 313 in _serve_one File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/multiprocessing/forkserver.py", line 274 in main File "<string>", line 1 in <module> ``` From the detail of these references, I'm assuming the problem is in the `get` method of `multiprocessing.Queue`, called from the `ProcessPoolExecutor` function. There is indeed a queue derived from `multiprocessing.Queue` in this function. And in the `get` method, there is a test on an attribute created as a `multiprocessing.Value`. All unit tests dedicated to this class succeed. To implement the change linked to the PR, I have used a `multiprocessing.Value` to share state. Maybe I used it badly? All tests succeed with **OSX** and **Win**. Working on OSX, I have no idea how to help and go further. # Your environment - CPython versions tested on: Python 3.12.0a6+ - Operating system and architecture: Ubuntu
process
segmentation fault in test concurrent futures and test compileall when using multiprocessing value attribute in multiprocessing queue class crash report i m working on a pr and have a lot of fatal python error segmentation fault in ubuntu tests these errors are only on the multiprocessing queue class and are mainly on the test compileall and test concurrent futures unit tests here are two examples of them python m unittest test test concurrent futures processpoolforkserverprocesspoolshutdowntest test submit after interpreter shutdown on python m unittest test test compileall commandlinetestswithsourceepoch test workers on error messages partial result is zsh stdout runtime error stderr fatal python error segmentation fault current thread most recent call first file line in getvalue file home runner work cpython cpython ro srcdir lib multiprocessing queues py line in get file home runner work cpython cpython ro srcdir lib concurrent futures process py line in process worker file home runner work cpython cpython ro srcdir lib multiprocessing process py line in run file home runner work cpython cpython ro srcdir lib multiprocessing process py line in bootstrap file home runner work cpython cpython ro srcdir lib multiprocessing spawn py line in main file home runner work cpython cpython ro srcdir lib multiprocessing forkserver py line in serve one file home runner work cpython cpython ro srcdir lib multiprocessing forkserver py line in main file line in from the detail of these references i m assuming the problem is in the get method of multiprocessing queue called from the processpoolexecutor function there is indeed a queue derived from multiprocessing queue in this function and in the get method there is a test on an attribute created as a multiprocessing value all unit tests dedicated to this class succeed to implement the change linked to the pr i have used a multiprocessing value to share state maybe i used it badly all tests succeed with osx and win working on osx i have no idea how to help and go further your environment cpython versions tested on python operating system and architecture ubuntu
1
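For reference, a minimal sketch of the usual safe pattern the report is reaching for: create the `multiprocessing.Value` before starting the worker (so fork, spawn, and forkserver can all hand it over), and guard every access with its built-in lock. Names here are hypothetical, not the PR's actual code:

```python
import multiprocessing as mp

def worker(flag) -> None:
    # Read the shared state under its lock; Value carries one by default.
    with flag.get_lock():
        print("flag seen by child:", flag.value)

if __name__ == "__main__":
    # Create the shared Value *before* starting the process, so it can be
    # inherited (fork) or pickled across (spawn/forkserver).
    flag = mp.Value("i", 0)
    with flag.get_lock():
        flag.value = 1

    p = mp.Process(target=worker, args=(flag,))
    p.start()
    p.join()
```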
12,670
15,038,008,405
IssuesEvent
2021-02-02 16:58:04
metabase/metabase
https://api.github.com/repos/metabase/metabase
opened
Sandboxing on tables with remapped FK (Display Values) causes query to fail
.Regression Administration/Data Sandboxes Querying/Processor Type:Bug
**Describe the bug** When using column sandboxing on a table with remapped FK (Display Values), then the sandboxed query fails on 1.38.0-rc4. This works in 1.37.8. **To Reproduce** 1. Admin > Data Model > Sample Dataset > Reviews > Product ID :gear: > Display Values = Use foreign key: `Products.Title` ![image](https://user-images.githubusercontent.com/1447303/106633505-320e9300-657f-11eb-90e1-df96faafdb2e.png) 2. Admin > People > create user "U1" and set attributes `user_id`=`1` 3. Admin > Permissions, revoke all collection and data permissions, and grant full access to Products, and sandbox to Reviews: ![image](https://user-images.githubusercontent.com/1447303/106633303-fbd11380-657e-11eb-8c5f-2c8c65c83858.png) 4. Login as user "U1" and go to Reviews table, which fails with `Value does not match schema: {:query {:fields (named (not ("distinct" a-clojure.lang.PersistentVector)) "Distinct, non-empty sequence of Field clauses")}}` ![image](https://user-images.githubusercontent.com/1447303/106633920-8fa2df80-657f-11eb-8de7-f9102f26bc1d.png) <details><summary>Full stacktrace</summary> ``` 2021-02-02 15:59:48,158 ERROR middleware.catch-exceptions :: Error processing query: null {:database_id 1, :started_at #t "2021-02-02T15:59:47.466413+01:00[Europe/Copenhagen]", :error_type :invalid-query, :json_query {:database 1, :query {:source-table 4}, :type "query", :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}}, :native nil, :status :failed, :class clojure.lang.ExceptionInfo, :stacktrace ["--> util.schema$schema_core_validator$fn__17782.invoke(schema.clj:29)" "query_processor.middleware.wrap_value_literals$wrap_value_literals_STAR_.invokeStatic(wrap_value_literals.clj:137)" "query_processor.middleware.wrap_value_literals$wrap_value_literals_STAR_.invoke(wrap_value_literals.clj:133)" "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__40651.invoke(wrap_value_literals.clj:147)" "query_processor.middleware.annotate$add_column_info$fn__40536.invoke(annotate.clj:578)" "query_processor.middleware.permissions$check_query_permissions$fn__46234.invoke(permissions.clj:69)" "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__47767.invoke(pre_alias_aggregations.clj:40)" "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__46432.invoke(cumulative_aggregations.clj:60)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49135.invoke(row_level_restrictions.clj:327)" "query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__48004.invoke(resolve_joined_fields.clj:35)" "query_processor.middleware.resolve_joins$resolve_joins$fn__48323.invoke(resolve_joins.clj:184)" "query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__44755.invoke(add_implicit_joins.clj:249)" "query_processor.middleware.large_int_id$convert_id_to_string$fn__47044.invoke(large_int_id.clj:44)" "query_processor.middleware.format_rows$format_rows$fn__47024.invoke(format_rows.clj:74)" "query_processor.middleware.desugar$desugar$fn__46498.invoke(desugar.clj:21)" "query_processor.middleware.binning$update_binning_strategy$fn__45516.invoke(binning.clj:225)" "query_processor.middleware.resolve_fields$resolve_fields$fn__46034.invoke(resolve_fields.clj:24)" "query_processor.middleware.add_dimension_projections$add_remapping$fn__44303.invoke(add_dimension_projections.clj:315)" 
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__44506.invoke(add_implicit_clauses.clj:138)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49135.invoke(row_level_restrictions.clj:327)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__44904.invoke(add_source_metadata.clj:103)" "metabase_enterprise.sandbox.query_processor.middleware.column_level_perms_check$maybe_apply_column_level_perms_check$fn__48737.invoke(column_level_perms_check.clj:25)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__47964.invoke(reconcile_breakout_and_order_by_bucketing.clj:97)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45104.invoke(auto_bucket_datetimes.clj:139)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46081.invoke(resolve_source_table.clj:45)" "query_processor.middleware.parameters$substitute_parameters$fn__47749.invoke(parameters.clj:111)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46133.invoke(resolve_referenced.clj:79)" "query_processor.middleware.expand_macros$expand_macros$fn__46754.invoke(expand_macros.clj:155)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__44913.invoke(add_timezone_info.clj:15)" "query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__48685.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975$fn__47979.invoke(resolve_database_and_driver.clj:31)" "driver$do_with_driver.invokeStatic(driver.clj:60)" "driver$do_with_driver.invoke(driver.clj:56)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975.invoke(resolve_database_and_driver.clj:25)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__46972.invoke(fetch_source_query.clj:264)" "query_processor.middleware.store$initialize_store$fn__48694$fn__48695.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:42)" "query_processor.store$do_with_store.invoke(store.clj:38)" "query_processor.middleware.store$initialize_store$fn__48694.invoke(store.clj:10)" "query_processor.middleware.validate$validate_query$fn__48703.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__47096.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__44773.invoke(add_rows_truncated.clj:35)" "metabase_enterprise.audit.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__31273.invoke(handle_audit_queries.clj:162)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__48670.invoke(results_metadata.clj:146)" "query_processor.reducible$async_qp$qp_STAR___33081$thunk__33082.invoke(reducible.clj:103)" "query_processor.reducible$async_qp$qp_STAR___33081.invoke(reducible.clj:109)" "query_processor.reducible$sync_qp$qp_STAR___33090$fn__33093.invoke(reducible.clj:135)" "query_processor.reducible$sync_qp$qp_STAR___33090.invoke(reducible.clj:134)" "query_processor$preprocess_query.invokeStatic(query_processor.clj:162)" "query_processor$preprocess_query.invoke(query_processor.clj:154)" "query_processor$query__GT_preprocessed.invokeStatic(query_processor.clj:168)" 
"query_processor$query__GT_preprocessed.invoke(query_processor.clj:164)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__48947$preprocess_source_query__48952$fn__48953$fn__48954.invoke(row_level_restrictions.clj:135)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__48947$preprocess_source_query__48952$fn__48953.invoke(row_level_restrictions.clj:134)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__48947$preprocess_source_query__48952.invoke(row_level_restrictions.clj:129)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__49051$gtap__GT_source__49056$fn__49060.invoke(row_level_restrictions.clj:207)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__49051$gtap__GT_source__49056.invoke(row_level_restrictions.clj:195)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtap.invokeStatic(row_level_restrictions.clj:258)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtap.invoke(row_level_restrictions.clj:244)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps$replace_49108__49109.invoke(row_level_restrictions.clj:268)" "mbql.util.match$replace_in_collection$iter__19339__19343$fn__19344.invoke(match.clj:139)" "mbql.util.match$replace_in_collection.invokeStatic(match.clj:138)" "mbql.util.match$replace_in_collection.invoke(match.clj:133)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps$replace_49108__49109.invoke(row_level_restrictions.clj:268)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps.invokeStatic(row_level_restrictions.clj:268)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps.invoke(row_level_restrictions.clj:263)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$gtapped_query.invokeStatic(row_level_restrictions.clj:310)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$gtapped_query.invoke(row_level_restrictions.clj:307)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49135.invoke(row_level_restrictions.clj:321)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__44904.invoke(add_source_metadata.clj:103)" "metabase_enterprise.sandbox.query_processor.middleware.column_level_perms_check$maybe_apply_column_level_perms_check$fn__48737.invoke(column_level_perms_check.clj:25)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__47964.invoke(reconcile_breakout_and_order_by_bucketing.clj:97)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45104.invoke(auto_bucket_datetimes.clj:139)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46081.invoke(resolve_source_table.clj:45)" "query_processor.middleware.parameters$substitute_parameters$fn__47749.invoke(parameters.clj:111)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46133.invoke(resolve_referenced.clj:79)" "query_processor.middleware.expand_macros$expand_macros$fn__46754.invoke(expand_macros.clj:155)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__44913.invoke(add_timezone_info.clj:15)" 
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__48685.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975$fn__47979.invoke(resolve_database_and_driver.clj:31)" "driver$do_with_driver.invokeStatic(driver.clj:60)" "driver$do_with_driver.invoke(driver.clj:56)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975.invoke(resolve_database_and_driver.clj:25)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__46972.invoke(fetch_source_query.clj:264)" "query_processor.middleware.store$initialize_store$fn__48694$fn__48695.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:44)" "query_processor.store$do_with_store.invoke(store.clj:38)" "query_processor.middleware.store$initialize_store$fn__48694.invoke(store.clj:10)" "query_processor.middleware.validate$validate_query$fn__48703.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__47096.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__44773.invoke(add_rows_truncated.clj:35)" "metabase_enterprise.audit.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__31273.invoke(handle_audit_queries.clj:162)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__48670.invoke(results_metadata.clj:146)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__46375.invoke(constraints.clj:42)" "query_processor.middleware.process_userland_query$process_userland_query$fn__47838.invoke(process_userland_query.clj:135)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__46318.invoke(catch_exceptions.clj:173)" "query_processor.reducible$async_qp$qp_STAR___33081$thunk__33082.invoke(reducible.clj:103)" "query_processor.reducible$async_qp$qp_STAR___33081.invoke(reducible.clj:109)" "query_processor.reducible$sync_qp$qp_STAR___33090$fn__33093.invoke(reducible.clj:135)" "query_processor.reducible$sync_qp$qp_STAR___33090.invoke(reducible.clj:134)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:235)" "query_processor$process_userland_query.doInvoke(query_processor.clj:231)" "query_processor$fn__49181$process_query_and_save_execution_BANG___49190$fn__49193.invoke(query_processor.clj:247)" "query_processor$fn__49181$process_query_and_save_execution_BANG___49190.invoke(query_processor.clj:239)" "query_processor$fn__49225$process_query_and_save_with_max_results_constraints_BANG___49234$fn__49237.invoke(query_processor.clj:259)" "query_processor$fn__49225$process_query_and_save_with_max_results_constraints_BANG___49234.invoke(query_processor.clj:252)" "api.dataset$fn__63122$fn__63125.invoke(dataset.clj:55)" "query_processor.streaming$streaming_response_STAR_$fn__63103$fn__63104.invoke(streaming.clj:72)" "query_processor.streaming$streaming_response_STAR_$fn__63103.invoke(streaming.clj:71)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)" "async.streaming_response$do_f_async$fn__17489.invoke(streaming_response.clj:84)"], :context :ad-hoc, :error "Value does not match schema: {:query {:fields (named (not (\"distinct\" a-clojure.lang.PersistentVector)) \"Distinct, non-empty sequence of Field clauses\")}}", :row_count 0, :running_time 0, :preprocessed nil, :ex-data {:type 
:schema.core/error, :value {:database 1, :type :query, :query {:source-metadata [{:name "ID", :id 36, :table_id 4, :display_name "ID", :base_type :type/BigInteger, :special_type :type/PK, :fingerprint nil, :settings nil} {:name "PRODUCT_ID", :id 33, :table_id 4, :display_name "Product ID", :base_type :type/Integer, :special_type :type/FK, :fingerprint {:global {:distinct-count 176, :nil% 0.0}}, :settings nil} {:name "REVIEWER", :id 35, :table_id 4, :display_name "Reviewer", :base_type :type/Text, :special_type nil, :fingerprint {:global {:distinct-count 1076, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.001798561151079137, :average-length 9.972122302158274}}}, :settings nil} {:name "RATING", :id 31, :table_id 4, :display_name "Rating", :base_type :type/Integer, :special_type :type/Score, :fingerprint {:global {:distinct-count 5, :nil% 0.0}, :type {:type/Number {:min 1.0, :q1 3.54744353181696, :q3 4.764807071650455, :max 5.0, :sd 1.0443899855660577, :avg 3.987410071942446}}}, :settings nil} {:name "BODY", :id 32, :table_id 4, :display_name "Body", :base_type :type/Text, :special_type :type/Description, :fingerprint {:global {:distinct-count 1112, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 177.41996402877697}}}, :settings nil} {:table_id 4, :special_type :type/CreationTimestamp, :unit :default, :name "CREATED_AT", :settings nil, :id 34, :display_name "Created At", :fingerprint {:global {:distinct-count 1112, :nil% 0.0}, :type {:type/DateTime {:earliest "2016-06-03T00:37:05.818Z", :latest "2020-04-19T14:15:25.677Z"}}}, :base_type :type/DateTime} {:table_id 1, :special_type :type/Title, :name "TITLE", :settings nil, :id 5, :display_name "Product → Title", :fingerprint {:global {:distinct-count 199, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 21.495}}}, :base_type :type/Text, :source_alias "PRODUCTS__via__PRODUCT_ID"}], :fields [[:field-id 36] [:field-id 33] [:field-id 35] [:field-id 31] [:field-id 32] [:field-id 34] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]]], :joins [{:strategy :left-join, :source-table 1, :alias "PRODUCTS__via__PRODUCT_ID", :fk-field-id 33, :condition [:= [:field-id 33] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 8]]]}], :source-query {:source-table 4, :filter [:= [:field-id 33] [:value 1 {:base_type :type/Integer, :special_type :type/FK, :database_type "INTEGER", :name "PRODUCT_ID"}]], :fields [[:field-id 36] [:field-id 33] [:field-id 35] [:field-id 31] [:field-id 32] [:datetime-field [:field-id 34] :default] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]]], :joins [{:strategy :left-join, :source-table 1, :alias "PRODUCTS__via__PRODUCT_ID", :fk-field-id 33, :condition [:= [:field-id 33] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 8]]]}]}}}, :error {:query {:fields (named (not ("distinct" a-clojure.lang.PersistentVector)) "Distinct, non-empty sequence of Field clauses")}}}, :data {:rows [], :cols []}} ``` </details> **Information about your Metabase Installation:** Hash `6312927` ~1.38.0-rc4 **Additional context** Related to https://github.com/metabase/metabase-enterprise/issues/405
1.0
Sandboxing on tables with remapped FK (Display Values) causes query to fail - **Describe the bug** When using column sandboxing on a table with remapped FK (Display Values), then the sandboxed query fails on 1.38.0-rc4. This works in 1.37.8. **To Reproduce** 1. Admin > Data Model > Sample Dataset > Reviews > Product ID :gear: > Display Values = Use foreign key: `Products.Title` ![image](https://user-images.githubusercontent.com/1447303/106633505-320e9300-657f-11eb-90e1-df96faafdb2e.png) 2. Admin > People > create user "U1" and set attributes `user_id`=`1` 3. Admin > Permissions, revoke all collection and data permissions, and grant full access to Products, and sandbox to Reviews: ![image](https://user-images.githubusercontent.com/1447303/106633303-fbd11380-657e-11eb-8c5f-2c8c65c83858.png) 4. Login as user "U1" and go to Reviews table, which fails with `Value does not match schema: {:query {:fields (named (not ("distinct" a-clojure.lang.PersistentVector)) "Distinct, non-empty sequence of Field clauses")}}` ![image](https://user-images.githubusercontent.com/1447303/106633920-8fa2df80-657f-11eb-8de7-f9102f26bc1d.png) <details><summary>Full stacktrace</summary> ``` 2021-02-02 15:59:48,158 ERROR middleware.catch-exceptions :: Error processing query: null {:database_id 1, :started_at #t "2021-02-02T15:59:47.466413+01:00[Europe/Copenhagen]", :error_type :invalid-query, :json_query {:database 1, :query {:source-table 4}, :type "query", :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}}, :native nil, :status :failed, :class clojure.lang.ExceptionInfo, :stacktrace ["--> util.schema$schema_core_validator$fn__17782.invoke(schema.clj:29)" "query_processor.middleware.wrap_value_literals$wrap_value_literals_STAR_.invokeStatic(wrap_value_literals.clj:137)" "query_processor.middleware.wrap_value_literals$wrap_value_literals_STAR_.invoke(wrap_value_literals.clj:133)" "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__40651.invoke(wrap_value_literals.clj:147)" "query_processor.middleware.annotate$add_column_info$fn__40536.invoke(annotate.clj:578)" "query_processor.middleware.permissions$check_query_permissions$fn__46234.invoke(permissions.clj:69)" "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__47767.invoke(pre_alias_aggregations.clj:40)" "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__46432.invoke(cumulative_aggregations.clj:60)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49135.invoke(row_level_restrictions.clj:327)" "query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__48004.invoke(resolve_joined_fields.clj:35)" "query_processor.middleware.resolve_joins$resolve_joins$fn__48323.invoke(resolve_joins.clj:184)" "query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__44755.invoke(add_implicit_joins.clj:249)" "query_processor.middleware.large_int_id$convert_id_to_string$fn__47044.invoke(large_int_id.clj:44)" "query_processor.middleware.format_rows$format_rows$fn__47024.invoke(format_rows.clj:74)" "query_processor.middleware.desugar$desugar$fn__46498.invoke(desugar.clj:21)" "query_processor.middleware.binning$update_binning_strategy$fn__45516.invoke(binning.clj:225)" "query_processor.middleware.resolve_fields$resolve_fields$fn__46034.invoke(resolve_fields.clj:24)" 
"query_processor.middleware.add_dimension_projections$add_remapping$fn__44303.invoke(add_dimension_projections.clj:315)" "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__44506.invoke(add_implicit_clauses.clj:138)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49135.invoke(row_level_restrictions.clj:327)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__44904.invoke(add_source_metadata.clj:103)" "metabase_enterprise.sandbox.query_processor.middleware.column_level_perms_check$maybe_apply_column_level_perms_check$fn__48737.invoke(column_level_perms_check.clj:25)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__47964.invoke(reconcile_breakout_and_order_by_bucketing.clj:97)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45104.invoke(auto_bucket_datetimes.clj:139)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46081.invoke(resolve_source_table.clj:45)" "query_processor.middleware.parameters$substitute_parameters$fn__47749.invoke(parameters.clj:111)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46133.invoke(resolve_referenced.clj:79)" "query_processor.middleware.expand_macros$expand_macros$fn__46754.invoke(expand_macros.clj:155)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__44913.invoke(add_timezone_info.clj:15)" "query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__48685.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975$fn__47979.invoke(resolve_database_and_driver.clj:31)" "driver$do_with_driver.invokeStatic(driver.clj:60)" "driver$do_with_driver.invoke(driver.clj:56)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975.invoke(resolve_database_and_driver.clj:25)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__46972.invoke(fetch_source_query.clj:264)" "query_processor.middleware.store$initialize_store$fn__48694$fn__48695.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:42)" "query_processor.store$do_with_store.invoke(store.clj:38)" "query_processor.middleware.store$initialize_store$fn__48694.invoke(store.clj:10)" "query_processor.middleware.validate$validate_query$fn__48703.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__47096.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__44773.invoke(add_rows_truncated.clj:35)" "metabase_enterprise.audit.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__31273.invoke(handle_audit_queries.clj:162)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__48670.invoke(results_metadata.clj:146)" "query_processor.reducible$async_qp$qp_STAR___33081$thunk__33082.invoke(reducible.clj:103)" "query_processor.reducible$async_qp$qp_STAR___33081.invoke(reducible.clj:109)" "query_processor.reducible$sync_qp$qp_STAR___33090$fn__33093.invoke(reducible.clj:135)" "query_processor.reducible$sync_qp$qp_STAR___33090.invoke(reducible.clj:134)" "query_processor$preprocess_query.invokeStatic(query_processor.clj:162)" "query_processor$preprocess_query.invoke(query_processor.clj:154)" 
"query_processor$query__GT_preprocessed.invokeStatic(query_processor.clj:168)" "query_processor$query__GT_preprocessed.invoke(query_processor.clj:164)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__48947$preprocess_source_query__48952$fn__48953$fn__48954.invoke(row_level_restrictions.clj:135)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__48947$preprocess_source_query__48952$fn__48953.invoke(row_level_restrictions.clj:134)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__48947$preprocess_source_query__48952.invoke(row_level_restrictions.clj:129)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__49051$gtap__GT_source__49056$fn__49060.invoke(row_level_restrictions.clj:207)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$fn__49051$gtap__GT_source__49056.invoke(row_level_restrictions.clj:195)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtap.invokeStatic(row_level_restrictions.clj:258)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtap.invoke(row_level_restrictions.clj:244)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps$replace_49108__49109.invoke(row_level_restrictions.clj:268)" "mbql.util.match$replace_in_collection$iter__19339__19343$fn__19344.invoke(match.clj:139)" "mbql.util.match$replace_in_collection.invokeStatic(match.clj:138)" "mbql.util.match$replace_in_collection.invoke(match.clj:133)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps$replace_49108__49109.invoke(row_level_restrictions.clj:268)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps.invokeStatic(row_level_restrictions.clj:268)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_gtaps.invoke(row_level_restrictions.clj:263)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$gtapped_query.invokeStatic(row_level_restrictions.clj:310)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$gtapped_query.invoke(row_level_restrictions.clj:307)" "metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49135.invoke(row_level_restrictions.clj:321)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__44904.invoke(add_source_metadata.clj:103)" "metabase_enterprise.sandbox.query_processor.middleware.column_level_perms_check$maybe_apply_column_level_perms_check$fn__48737.invoke(column_level_perms_check.clj:25)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__47964.invoke(reconcile_breakout_and_order_by_bucketing.clj:97)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45104.invoke(auto_bucket_datetimes.clj:139)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46081.invoke(resolve_source_table.clj:45)" "query_processor.middleware.parameters$substitute_parameters$fn__47749.invoke(parameters.clj:111)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46133.invoke(resolve_referenced.clj:79)" "query_processor.middleware.expand_macros$expand_macros$fn__46754.invoke(expand_macros.clj:155)" 
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__44913.invoke(add_timezone_info.clj:15)" "query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__48685.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975$fn__47979.invoke(resolve_database_and_driver.clj:31)" "driver$do_with_driver.invokeStatic(driver.clj:60)" "driver$do_with_driver.invoke(driver.clj:56)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__47975.invoke(resolve_database_and_driver.clj:25)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__46972.invoke(fetch_source_query.clj:264)" "query_processor.middleware.store$initialize_store$fn__48694$fn__48695.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:44)" "query_processor.store$do_with_store.invoke(store.clj:38)" "query_processor.middleware.store$initialize_store$fn__48694.invoke(store.clj:10)" "query_processor.middleware.validate$validate_query$fn__48703.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__47096.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__44773.invoke(add_rows_truncated.clj:35)" "metabase_enterprise.audit.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__31273.invoke(handle_audit_queries.clj:162)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__48670.invoke(results_metadata.clj:146)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__46375.invoke(constraints.clj:42)" "query_processor.middleware.process_userland_query$process_userland_query$fn__47838.invoke(process_userland_query.clj:135)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__46318.invoke(catch_exceptions.clj:173)" "query_processor.reducible$async_qp$qp_STAR___33081$thunk__33082.invoke(reducible.clj:103)" "query_processor.reducible$async_qp$qp_STAR___33081.invoke(reducible.clj:109)" "query_processor.reducible$sync_qp$qp_STAR___33090$fn__33093.invoke(reducible.clj:135)" "query_processor.reducible$sync_qp$qp_STAR___33090.invoke(reducible.clj:134)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:235)" "query_processor$process_userland_query.doInvoke(query_processor.clj:231)" "query_processor$fn__49181$process_query_and_save_execution_BANG___49190$fn__49193.invoke(query_processor.clj:247)" "query_processor$fn__49181$process_query_and_save_execution_BANG___49190.invoke(query_processor.clj:239)" "query_processor$fn__49225$process_query_and_save_with_max_results_constraints_BANG___49234$fn__49237.invoke(query_processor.clj:259)" "query_processor$fn__49225$process_query_and_save_with_max_results_constraints_BANG___49234.invoke(query_processor.clj:252)" "api.dataset$fn__63122$fn__63125.invoke(dataset.clj:55)" "query_processor.streaming$streaming_response_STAR_$fn__63103$fn__63104.invoke(streaming.clj:72)" "query_processor.streaming$streaming_response_STAR_$fn__63103.invoke(streaming.clj:71)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)" "async.streaming_response$do_f_async$fn__17489.invoke(streaming_response.clj:84)"], :context :ad-hoc, :error "Value does not match schema: {:query {:fields (named (not (\"distinct\" a-clojure.lang.PersistentVector)) \"Distinct, non-empty 
sequence of Field clauses\")}}", :row_count 0, :running_time 0, :preprocessed nil, :ex-data {:type :schema.core/error, :value {:database 1, :type :query, :query {:source-metadata [{:name "ID", :id 36, :table_id 4, :display_name "ID", :base_type :type/BigInteger, :special_type :type/PK, :fingerprint nil, :settings nil} {:name "PRODUCT_ID", :id 33, :table_id 4, :display_name "Product ID", :base_type :type/Integer, :special_type :type/FK, :fingerprint {:global {:distinct-count 176, :nil% 0.0}}, :settings nil} {:name "REVIEWER", :id 35, :table_id 4, :display_name "Reviewer", :base_type :type/Text, :special_type nil, :fingerprint {:global {:distinct-count 1076, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.001798561151079137, :average-length 9.972122302158274}}}, :settings nil} {:name "RATING", :id 31, :table_id 4, :display_name "Rating", :base_type :type/Integer, :special_type :type/Score, :fingerprint {:global {:distinct-count 5, :nil% 0.0}, :type {:type/Number {:min 1.0, :q1 3.54744353181696, :q3 4.764807071650455, :max 5.0, :sd 1.0443899855660577, :avg 3.987410071942446}}}, :settings nil} {:name "BODY", :id 32, :table_id 4, :display_name "Body", :base_type :type/Text, :special_type :type/Description, :fingerprint {:global {:distinct-count 1112, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 177.41996402877697}}}, :settings nil} {:table_id 4, :special_type :type/CreationTimestamp, :unit :default, :name "CREATED_AT", :settings nil, :id 34, :display_name "Created At", :fingerprint {:global {:distinct-count 1112, :nil% 0.0}, :type {:type/DateTime {:earliest "2016-06-03T00:37:05.818Z", :latest "2020-04-19T14:15:25.677Z"}}}, :base_type :type/DateTime} {:table_id 1, :special_type :type/Title, :name "TITLE", :settings nil, :id 5, :display_name "Product → Title", :fingerprint {:global {:distinct-count 199, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 21.495}}}, :base_type :type/Text, :source_alias "PRODUCTS__via__PRODUCT_ID"}], :fields [[:field-id 36] [:field-id 33] [:field-id 35] [:field-id 31] [:field-id 32] [:field-id 34] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]]], :joins [{:strategy :left-join, :source-table 1, :alias "PRODUCTS__via__PRODUCT_ID", :fk-field-id 33, :condition [:= [:field-id 33] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 8]]]}], :source-query {:source-table 4, :filter [:= [:field-id 33] [:value 1 {:base_type :type/Integer, :special_type :type/FK, :database_type "INTEGER", :name "PRODUCT_ID"}]], :fields [[:field-id 36] [:field-id 33] [:field-id 35] [:field-id 31] [:field-id 32] [:datetime-field [:field-id 34] :default] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]]], :joins [{:strategy :left-join, :source-table 1, :alias "PRODUCTS__via__PRODUCT_ID", :fk-field-id 33, :condition [:= [:field-id 33] [:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 8]]]}]}}}, :error {:query {:fields (named (not ("distinct" a-clojure.lang.PersistentVector)) "Distinct, non-empty sequence of Field clauses")}}}, :data {:rows [], :cols []}} ``` </details> **Information about your Metabase Installation:** Hash `6312927` ~1.38.0-rc4 **Additional context** Related to https://github.com/metabase/metabase-enterprise/issues/405
process
sandboxing on tables with remapped fk display values causes query to fail describe the bug when using column sandboxing on a table with remapped fk display values then the sandboxed query fails on this works in to reproduce admin data model sample dataset reviews product id gear display values use foreign key products title admin people create user and set attributes user id admin permissions revoke all collection and data permissions and grant full access to products and sandbox to reviews login as user and go to reviews table which fails with value does not match schema query fields named not distinct a clojure lang persistentvector distinct non empty sequence of field clauses full stacktrace error middleware catch exceptions error processing query null database id started at t error type invalid query json query database query source table type query parameters middleware js int to string true add default userland constraints true native nil status failed class clojure lang exceptioninfo stacktrace util schema schema core validator fn invoke schema clj query processor middleware wrap value literals wrap value literals star invokestatic wrap value literals clj query processor middleware wrap value literals wrap value literals star invoke wrap value literals clj query processor middleware wrap value literals wrap value literals fn invoke wrap value literals clj query processor middleware annotate add column info fn invoke annotate clj query processor middleware permissions check query permissions fn invoke permissions clj query processor middleware pre alias aggregations pre alias aggregations fn invoke pre alias aggregations clj query processor middleware cumulative aggregations handle cumulative aggregations fn invoke cumulative aggregations clj metabase enterprise sandbox query processor middleware row level restrictions apply row level permissions fn invoke row level restrictions clj query processor middleware resolve joined fields resolve joined fields fn invoke resolve joined fields clj query processor middleware resolve joins resolve joins fn invoke resolve joins clj query processor middleware add implicit joins add implicit joins fn invoke add implicit joins clj query processor middleware large int id convert id to string fn invoke large int id clj query processor middleware format rows format rows fn invoke format rows clj query processor middleware desugar desugar fn invoke desugar clj query processor middleware binning update binning strategy fn invoke binning clj query processor middleware resolve fields resolve fields fn invoke resolve fields clj query processor middleware add dimension projections add remapping fn invoke add dimension projections clj query processor middleware add implicit clauses add implicit clauses fn invoke add implicit clauses clj metabase enterprise sandbox query processor middleware row level restrictions apply row level permissions fn invoke row level restrictions clj query processor middleware add source metadata add source metadata for source queries fn invoke add source metadata clj metabase enterprise sandbox query processor middleware column level perms check maybe apply column level perms check fn invoke column level perms check clj query processor middleware reconcile breakout and order by bucketing reconcile breakout and order by bucketing fn invoke reconcile breakout and order by bucketing clj query processor middleware auto bucket datetimes auto bucket datetimes fn invoke auto bucket datetimes clj query processor middleware resolve source 
table resolve source tables fn invoke resolve source table clj query processor middleware parameters substitute parameters fn invoke parameters clj query processor middleware resolve referenced resolve referenced card resources fn invoke resolve referenced clj query processor middleware expand macros expand macros fn invoke expand macros clj query processor middleware add timezone info add timezone info fn invoke add timezone info clj query processor middleware splice params in response splice params in response fn invoke splice params in response clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware validate validate query fn invoke validate clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware add rows truncated add rows truncated fn invoke add rows truncated clj metabase enterprise audit query processor middleware handle audit queries handle internal queries fn invoke handle audit queries clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star fn invoke reducible clj query processor reducible sync qp qp star invoke reducible clj query processor preprocess query invokestatic query processor clj query processor preprocess query invoke query processor clj query processor query gt preprocessed invokestatic query processor clj query processor query gt preprocessed invoke query processor clj metabase enterprise sandbox query processor middleware row level restrictions fn preprocess source query fn fn invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions fn preprocess source query fn invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions fn preprocess source query invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions fn gtap gt source fn invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions fn gtap gt source invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions apply gtap invokestatic row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions apply gtap invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions apply gtaps replace invoke row level restrictions clj mbql util match replace in collection iter fn invoke match clj mbql util match replace in collection invokestatic match clj mbql util match replace in collection invoke match clj metabase enterprise 
sandbox query processor middleware row level restrictions apply gtaps replace invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions apply gtaps invokestatic row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions apply gtaps invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions gtapped query invokestatic row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions gtapped query invoke row level restrictions clj metabase enterprise sandbox query processor middleware row level restrictions apply row level permissions fn invoke row level restrictions clj query processor middleware add source metadata add source metadata for source queries fn invoke add source metadata clj metabase enterprise sandbox query processor middleware column level perms check maybe apply column level perms check fn invoke column level perms check clj query processor middleware reconcile breakout and order by bucketing reconcile breakout and order by bucketing fn invoke reconcile breakout and order by bucketing clj query processor middleware auto bucket datetimes auto bucket datetimes fn invoke auto bucket datetimes clj query processor middleware resolve source table resolve source tables fn invoke resolve source table clj query processor middleware parameters substitute parameters fn invoke parameters clj query processor middleware resolve referenced resolve referenced card resources fn invoke resolve referenced clj query processor middleware expand macros expand macros fn invoke expand macros clj query processor middleware add timezone info add timezone info fn invoke add timezone info clj query processor middleware splice params in response splice params in response fn invoke splice params in response clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware validate validate query fn invoke validate clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware add rows truncated add rows truncated fn invoke add rows truncated clj metabase enterprise audit query processor middleware handle audit queries handle internal queries fn invoke handle audit queries clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query 
processor reducible sync qp qp star fn invoke reducible clj query processor reducible sync qp qp star invoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset fn fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async fn invoke streaming response clj context ad hoc error value does not match schema query fields named not distinct a clojure lang persistentvector distinct non empty sequence of field clauses row count running time preprocessed nil ex data type schema core error value database type query query source metadata name id id table id display name id base type type biginteger special type type pk fingerprint nil settings nil name product id id table id display name product id base type type integer special type type fk fingerprint global distinct count nil settings nil name reviewer id table id display name reviewer base type type text special type nil fingerprint global distinct count nil type type text percent json percent url percent email percent state average length settings nil name rating id table id display name rating base type type integer special type type score fingerprint global distinct count nil type type number min max sd avg settings nil name body id table id display name body base type type text special type type description fingerprint global distinct count nil type type text percent json percent url percent email percent state average length settings nil table id special type type creationtimestamp unit default name created at settings nil id display name created at fingerprint global distinct count nil type type datetime earliest latest base type type datetime table id special type type title name title settings nil id display name product → title fingerprint global distinct count nil type type text percent json percent url percent email percent state average length base type type text source alias products via product id fields joins source query source table filter fields default joins error query fields named not distinct a clojure lang persistentvector distinct non empty sequence of field clauses data rows cols information about your metabase installation hash additional context related to
1
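Editor's note on the record above: the schema failure it reports comes from the duplicated `[:joined-field "PRODUCTS__via__PRODUCT_ID" [:field-id 5]]` clause visible in the `:fields` vector of the ex-data. A minimal Python sketch of that shape and of an order-preserving dedup that would satisfy the `distinct` constraint (the clause list is copied from the record; the fix shown is illustrative, not Metabase's actual patch):

```
# :fields from the ex-data above; the last clause repeats, which is what
# the "Distinct, non-empty sequence of Field clauses" schema rejects.
fields = [
    ["field-id", 36], ["field-id", 33], ["field-id", 35], ["field-id", 31],
    ["field-id", 32], ["field-id", 34],
    ["joined-field", "PRODUCTS__via__PRODUCT_ID", ["field-id", 5]],
    ["joined-field", "PRODUCTS__via__PRODUCT_ID", ["field-id", 5]],  # duplicate
]

def dedupe(clauses):
    """Order-preserving dedup that would satisfy the 'distinct' constraint."""
    seen, out = set(), []
    for clause in clauses:
        key = repr(clause)  # clauses are nested lists, so hash their repr
        if key not in seen:
            seen.add(key)
            out.append(clause)
    return out

assert len(dedupe(fields)) == 7  # the duplicated joined-field collapses
```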
19,534
25,844,357,035
IssuesEvent
2022-12-13 04:36:15
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/servicegraphprocessor] Not included in the `v0.61.0` release image
question Stale priority:p2 processor/servicegraph
### What happened? ## Description It seems like the Service Graph Processor (https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/servicegraphprocessor) isn't included in the `contrib` distro. Is that by mistake? Was that intentional? I think it should be included as it's officially released in the contrib source code since `v0.60.0` (https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.60.0) With image version 0.61.0 of the contrib collector I am getting: ``` collector server run finished with error: failed to get config: cannot unmarshal the configuration: unknown processors type \"servicegraph\" for \"servicegraph\" (valid values: [cumulativetodelta resourcedetection resource memory_limiter attributes k8sattributes experimental_metricsgeneration probabilistic_sampler spanmetrics transform batch deltatorate filter groupbytrace metricstransform span groupbyattrs redaction routing tail_sampling])\n ``` ## Steps to Reproduce ## Expected Result The `servicegraph` processor is available and can be configured ## Actual Result ### Collector version v0.61.0 ### Environment information ## Environment Docker / Kubernetes ### OpenTelemetry Collector configuration _No response_ ### Log output _No response_ ### Additional context I also filed https://github.com/open-telemetry/opentelemetry-collector-releases/issues/217 as I wasn't sure where the issue should be addressed
1.0
[processor/servicegraphprocessor] Not included in the `v0.61.0` release image - ### What happened? ## Description It seems like the Service Graph Processor (https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/servicegraphprocessor) isn't included in the `contrib` distro. Is that by mistake? Was that intentional? I think it should be included as it's officially released in the contrib source code since `v0.60.0` (https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.60.0) With image version 0.61.0 of the contrib collector I am getting: ``` collector server run finished with error: failed to get config: cannot unmarshal the configuration: unknown processors type \"servicegraph\" for \"servicegraph\" (valid values: [cumulativetodelta resourcedetection resource memory_limiter attributes k8sattributes experimental_metricsgeneration probabilistic_sampler spanmetrics transform batch deltatorate filter groupbytrace metricstransform span groupbyattrs redaction routing tail_sampling])\n ``` ## Steps to Reproduce ## Expected Result The `servicegraph` processor is available and can be configured ## Actual Result ### Collector version v0.61.0 ### Environment information ## Environment Docker / Kubernetes ### OpenTelemetry Collector configuration _No response_ ### Log output _No response_ ### Additional context I also filed https://github.com/open-telemetry/opentelemetry-collector-releases/issues/217 as I wasn't sure where the issue should be addressed
process
not included in the release image what happened description it seems like the service graph processor isn t included in the contrib distro is that by mistake was that intentional i think it should be included as it s officially released in the contrib source code since with image version of the contrib collector i am getting collector server run finished with error failed to get config cannot unmarshal the configuration unknown processors type servicegraph for servicegraph valid values n steps to reproduce expected result the servicegraph processor is available and can be configured actual result collector version environment information environment docker kubernetes opentelemetry collector configuration no response log output no response additional context i also filed as i wasn t sure where the issue should be addressed
1
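The error in the record above lists exactly which processor types the v0.61.0 contrib image accepts. As a hedged illustration (the file name and helper function are hypothetical, not part of the collector), here is a small Python pre-flight check that loads a collector config with PyYAML and flags any processor key the distro would reject at startup:

```
import yaml  # third-party: PyYAML

# Valid values copied verbatim from the error message in the record above;
# "servicegraph" is absent from the v0.61.0 contrib image.
KNOWN_PROCESSORS = {
    "cumulativetodelta", "resourcedetection", "resource", "memory_limiter",
    "attributes", "k8sattributes", "experimental_metricsgeneration",
    "probabilistic_sampler", "spanmetrics", "transform", "batch",
    "deltatorate", "filter", "groupbytrace", "metricstransform", "span",
    "groupbyattrs", "redaction", "routing", "tail_sampling",
}

def unknown_processors(config_path):
    """Return processor keys that this distro would reject at startup."""
    with open(config_path) as f:
        config = yaml.safe_load(f)
    declared = (config.get("processors") or {}).keys()
    # Collector component keys may carry a name suffix, e.g. "batch/traces".
    return [k for k in declared if k.split("/")[0] not in KNOWN_PROCESSORS]

# A config declaring "servicegraph" would be reported here before deploy.
print(unknown_processors("collector-config.yaml"))  # hypothetical file name
```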
142,023
11,452,327,272
IssuesEvent
2020-02-06 13:29:44
appsody/appsody
https://api.github.com/repos/appsody/appsody
closed
Copy 'testdata' into the sandboxing test folder
enhancement testing
**Is your feature request related to a problem? Please describe.** Currently multiple tests will fail if we increase the thread count above 1 because although the sandboxing copies appsody.yaml, it doesn't create its own version of the testing stack. **Describe the solution you'd like** In order to have parallel tests and more robust synchronous tests, we should copy the `testdata` folder into the sandboxing directory during setup. A lot of the tests won't touch it, but to remain consistent it could probably be best suited to be carried out during the `TestSetupWithSandbox` function, into a sibling or child directory of the `ProjectDir` - e.g. `sandbox.TestData`.
1.0
Copy 'testdata' into the sandboxing test folder - **Is your feature request related to a problem? Please describe.** Currently multiple tests will fail if we increase the thread count above 1 because although the sandboxing copies appsody.yaml, it doesn't create its own version of the testing stack. **Describe the solution you'd like** In order to have parallel tests and more robust synchronous tests, we should copy the `testdata` folder into the sandboxing directory during setup. A lot of the tests won't touch it, but to remain consistent it could probably be best suited to be carried out during the `TestSetupWithSandbox` function, into a sibling or child directory of the `ProjectDir` - e.g. `sandbox.TestData`.
non_process
copy testdata into the sandboxing test folder is your feature request related to a problem please describe currently multiple tests will fail if we increase the thread count above because although the sandboxing copies appsody yaml it doesn t create its own version of the testing stack describe the solution you d like in order to have parallel tests and more robust synchronous tests we should copy the testdata folder into the sandboxing directory during setup a lot of the tests won t touch it but to remain consistent it could probably be best suited to be carried out during the testsetupwithsandbox function into a sibling or child directory of the projectdir e g sandbox testdata
0
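The setup step requested above is straightforward to sketch. Appsody's CLI tests are written in Go, so this Python version is only an illustration of the idea (all paths are hypothetical): copy `testdata` into each sandbox during setup so parallel tests never mutate a shared stack.

```
import shutil
import tempfile
from pathlib import Path

REPO_TESTDATA = Path("testdata")  # shared, read-only source tree (assumed)

def setup_sandbox():
    """Create an isolated sandbox holding its own copy of testdata.

    Each test gets a private copy, so raising the thread count above 1
    can no longer make tests trample a stack another test is reading.
    """
    sandbox = Path(tempfile.mkdtemp(prefix="sandbox-"))
    shutil.copytree(REPO_TESTDATA, sandbox / "TestData")
    return sandbox

box = setup_sandbox()
print(box / "TestData")  # e.g. /tmp/sandbox-abc123/TestData
```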
209,050
16,168,380,406
IssuesEvent
2021-05-02 00:20:52
Voronoff317/homepage
https://api.github.com/repos/Voronoff317/homepage
closed
Table of Contents shouldn't be on the same line
bug documentation
Adding two spaces after each content reference should solve the issue
1.0
Table of Contents shouldn't be on the same line - Adding two spaces after each content reference should solve the issue
non_process
table of contents shouldn t be on the same line adding two spaces after each content reference should solve the issue
0
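The fix described above relies on a Markdown rule: two trailing spaces force a hard line break, so consecutive table-of-contents entries stop flowing onto one line. A small, hedged Python post-processing sketch of that fix (the file name and the `- [` entry prefix are assumptions about the page, not sourced from the record):

```
from pathlib import Path

readme = Path("README.md")  # hypothetical: the file holding the table of contents

lines = readme.read_text(encoding="utf-8").splitlines()
fixed = []
for line in lines:
    # Two trailing spaces are Markdown's hard line break, so each
    # content reference renders on its own line.
    if line.startswith("- [") and not line.endswith("  "):
        line += "  "
    fixed.append(line)

readme.write_text("\n".join(fixed) + "\n", encoding="utf-8")
```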
8,264
4,211,882,531
IssuesEvent
2016-06-29 14:46:56
twisted-infra/braid
https://api.github.com/repos/twisted-infra/braid
opened
Allow other commands to be executed before trial when extracting the trial result
Buildbot
trial step fails if other commands are executed before the trial commands, see https://buildbot.twistedmatrix.com/builders/debian8-py2.7/builds/564 I think that the trial step should be more flexible in extracting the test results and allow other commands to be executed before trial
1.0
Allow other commands to be executed before trial when extracting the trial result - trial step fails if other commands are executed before the trial commands, see https://buildbot.twistedmatrix.com/builders/debian8-py2.7/builds/564 I think that the trial step should be more flexible in extracting the test results and allow other commands to be executed before trial
non_process
allow other commands to be executed before trial when extracting the trial result trial step fails if other commands are executed before the trial commands see i think that the trial step should be more flexible in extracting the test results and allow other commands to be executed before trial
0
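One way to get the flexibility the record asks for is to extract only trial's trailing summary line instead of treating all build output as trial output. A rough Python sketch, assuming trial's usual `PASSED (successes=...)` / `FAILED (failures=..., errors=...)` summary format (this is not Buildbot's actual step code):

```
import re

# Assumes trial ends its run with a summary line such as
# "PASSED (successes=9012)" or "FAILED (failures=2, errors=1)".
SUMMARY = re.compile(r"^(PASSED|FAILED)\s*\((?P<counts>[^)]*)\)\s*$", re.M)

def trial_result(output):
    """Extract the last trial summary, ignoring earlier commands' output."""
    matches = list(SUMMARY.finditer(output))
    if not matches:
        raise ValueError("no trial summary found in build output")
    last = matches[-1]  # the last summary wins, so pre-trial steps are ignored
    counts = dict(
        pair.strip().split("=") for pair in last.group("counts").split(",")
    )
    return {"status": last.group(1), **{k: int(v) for k, v in counts.items()}}

print(trial_result("apt-get done\nRan 9012 tests\nPASSED (successes=9012)\n"))
```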
4,280
7,190,599,822
IssuesEvent
2018-02-02 17:49:55
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Revisit the database.cpp issue with openIt = createLock(lockType != LOCK_WAIT);
libs-utillib status-inprocess type-enhancement
Does the locking mechanism actually work? If one runs watch.sh refresh 31 and then runs 'make testall' the 04_error code sometimes gets stuck in a loop that runs forever because the miniBlock database gets opened and then appended to while open, which extends the actual end past the recorded end, so it loops forever.
1.0
Revisit the database.cpp issue with openIt = createLock(lockType != LOCK_WAIT); - Does the locking mechanism actually work? If one runs watch.sh refresh 31 and then runs 'make testall' the 04_error code sometimes gets stuck in a loop that runs forever because the miniBlock database gets opened and then appended to while open, which extends the actual end past the recorded end, so it loops forever.
process
revisit the database cpp issue with openit createlock locktype lock wait does the locking mechanism actually work if one runs watch sh refresh and then runs make testall the error code sometimes gets stuck in a loop that runs forever because the miniblock database gets opened and then appended to while open which extends the actual end past the recorded end so it loops forever
1
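The forever-loop described above is the classic hazard of re-reading a file's end while another process appends to it. A small Python sketch of the safer pattern, snapshotting the end offset once before iterating (the file layout and record size are hypothetical):

```
import os

def read_records(path, record_size=64):
    """Read every record present when the call starts.

    Taking st_size once bounds the loop: a concurrent appender (like the
    watch.sh refresh in the report above) can no longer push the end away
    faster than the reader approaches it. Record size is hypothetical.
    """
    end = os.stat(path).st_size  # snapshot the end exactly once
    records = []
    with open(path, "rb") as f:
        while f.tell() + record_size <= end:
            records.append(f.read(record_size))
    return records
```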
113,233
24,383,284,971
IssuesEvent
2022-10-04 09:37:10
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
closed
vsce: fix open remote file from search results test
team/integrations vscode-extension
Fix the afterEach hook, which was failing in CI while the actual functionality kept working (tested locally). The test was disabled in https://github.com/sourcegraph/sourcegraph/pull/40470. /cc @muratsu @jjinnii @ryankscott
1.0
vsce: fix open remote file from search results test - Fix the afterEach hook, which was failing in CI while the actual functionality kept working (tested locally). The test was disabled in https://github.com/sourcegraph/sourcegraph/pull/40470. /cc @muratsu @jjinnii @ryankscott
non_process
vsce fix open remote file from search results test fix the aftereach hook which was failing in ci while the actual functionality kept working tested locally the test was disabled in cc muratsu jjinnii ryankscott
0
16,900
22,204,711,817
IssuesEvent
2022-06-07 14:02:35
usgpo/bill-status
https://api.github.com/repos/usgpo/bill-status
closed
BILLSTATUS being reprocessed for 108th-112th Congress
reprocessing files
FYI that we are reprocessing the BILLSTATUS files for the 108th through 112th Congress over the next several days. We will close this issue when complete.
1.0
BILLSTATUS being reprocessed for 108th-112th Congress - FYI that we are reprocessing the BILLSTATUS files for the 108th through 112th Congress over the next several days. We will close this issue when complete.
process
billstatus being reprocessed for congress fyi that we are reprocessing the billstatus files for the through congress over the next several days we will close this issue when complete
1
542,763
15,866,256,652
IssuesEvent
2021-04-08 15:33:23
gnosis/ido-ux
https://api.github.com/repos/gnosis/ido-ux
reopened
xdai gas price is not always set correctly
high priority
For the approval tx, I got a suggested gas price of 20, though on xdai a gas price of 1 is sufficient. It seems that 20 gwei is always suggested
1.0
xdai gas price is not always set correctly - For the approval tx, I got a suggested gas price of 20, though on xdai a gas price of 1 is sufficient. It seems that 20 gwei is always suggested
non_process
xdai gas price is not always set correctly for the approval tx i got a suggested gas price of though on xdai a gas price of is sufficient it seems that gwei is always suggested
0
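A hedged sketch of the per-chain override the report implies: clamp whatever price is suggested to a chain-specific ceiling. All names here are hypothetical; the only sourced facts are that 20 gwei was suggested and that 1 gwei suffices on xDai (chain id 100):

```
GWEI = 10 ** 9

# Hypothetical ceilings per chain; the sourced facts are only that 20 gwei
# was suggested and that 1 gwei suffices on xDai (chain id 100).
MAX_GAS_PRICE_WEI = {
    1: 20 * GWEI,    # Ethereum mainnet
    100: 1 * GWEI,   # xDai
}

def effective_gas_price(chain_id, suggested_wei):
    """Clamp the wallet-suggested price to the chain-specific ceiling."""
    ceiling = MAX_GAS_PRICE_WEI.get(chain_id)
    return min(suggested_wei, ceiling) if ceiling else suggested_wei

assert effective_gas_price(100, 20 * GWEI) == 1 * GWEI  # the reported case
```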
20,450
27,108,263,770
IssuesEvent
2023-02-15 13:41:07
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Cleanup flags and code paths for legacy runtime resolution
P3 type: process team-Rules-Python stale
Tracking bug for deleting `--incompatible_use_python_toolchains`, `--python_top`, `--python_path`, `--python2_path`, `--python3_path`, and the legacy code supporting these flags. `--python_path` is also tracked by #7901. We actually still depend on a variant of the legacy mechanism within Google, so this cleanup won't be completed anytime soon.
1.0
Cleanup flags and code paths for legacy runtime resolution - Tracking bug for deleting `--incompatible_use_python_toolchains`, `--python_top`, `--python_path`, `--python2_path`, `--python3_path`, and the legacy code supporting these flags. `--python_path` is also tracked by #7901. We actually still depend on a variant of the legacy mechanism within Google, so this cleanup won't be completed anytime soon.
process
cleanup flags and code paths for legacy runtime resolution tracking bug for deleting incompatible use python toolchains python top python path path path and the legacy code supporting these flags python path is also tracked by we actually still depend on a variant of the legacy mechanism within google so this cleanup won t be completed anytime soon
1
1,173
3,061,982,734
IssuesEvent
2015-08-16 04:10:14
jquery/esprima
https://api.github.com/repos/jquery/esprima
closed
Integrate codecov.io
infrastructure
With [Codecov.io](https://codecov.io/) integration, there can be two benefits: * Coverage information is prominent in every pull request * Code coverage dashboard is available for more detailed information Any possible coverage regression can be easily tracked.
1.0
Integrate codecov.io - With [Codecov.io](https://codecov.io/) integration, there can be two benefits: * Coverage information is prominent in every pull request * Code coverage dashboard is available for more detailed information Any possible coverage regression can be easily tracked.
non_process
integrate codecov io with integration there can be two benefits coverage information is prominent in every pull request code coverage dashboard is available for more detailed information any possible coverage regression can be easily tracked
0
18,534
24,553,251,223
IssuesEvent
2022-10-12 14:05:50
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Android] App is crashing when navigated to 'Dashboard' screen
Bug Blocker P0 Android Process: Fixed Process: Tested QA Process: Tested dev
**Steps:** 1. Sign in / Sign up 2. Enroll in the study 3. Navigate to 'Dashboard' and Verify **AR:** App is crashing when navigated to 'Dashboard' screen **ER:** App should not crash when navigated to 'Dashboard' screen https://user-images.githubusercontent.com/86007179/190660139-81733c7c-0c1e-405d-94a0-fc4853a7e7c2.mp4
3.0
[Android] App is crashing when navigated to 'Dashboard' screen - **Steps:** 1. Sign in / Sign up 2. Enroll in the study 3. Navigate to 'Dashboard' and Verify **AR:** App is crashing when navigated to 'Dashboard' screen **ER:** App should not crash when navigated to 'Dashboard' screen https://user-images.githubusercontent.com/86007179/190660139-81733c7c-0c1e-405d-94a0-fc4853a7e7c2.mp4
process
app is crashing when navigated to dashboard screen steps sign in sign up enroll in the study navigate to dashboard and verify ar app is crashing when navigated to dashboard screen er app should not crash when navigated to dashboard screen
1
17,701
23,579,531,454
IssuesEvent
2022-08-23 06:16:30
Battle-s/battle-school-backend
https://api.github.com/repos/Battle-s/battle-school-backend
closed
[FEAT] Match hosting and retrieval
feature :computer: processing :hourglass_flowing_sand:
## Description Match hosting and retrieval ## Checklist - [x] Create match entity and repo - [x] Match service - crud - [x] Season & sport mapping ## References ## Related discussion
1.0
[FEAT] Match hosting and retrieval - ## Description Match hosting and retrieval ## Checklist - [x] Create match entity and repo - [x] Match service - crud - [x] Season & sport mapping ## References ## Related discussion
process
match hosting and retrieval description match hosting and retrieval checklist create match entity and repo match service crud season sport mapping references related discussion
1
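The checklist in the record above (entity, repository, CRUD service, season & sport mapping) is easy to sketch. The project itself appears to be Java/Spring, so this in-memory Python version is purely illustrative and every name in it is hypothetical:

```
from dataclasses import dataclass
from itertools import count

@dataclass
class Match:                      # the "match entity"
    season: str
    sport: str
    id: int = None

class MatchRepository:            # the "repo"
    def __init__(self):
        self._rows = {}
        self._ids = count(1)

    def save(self, m):            # create / update
        m.id = m.id or next(self._ids)
        self._rows[m.id] = m
        return m

    def find(self, match_id):     # read
        return self._rows.get(match_id)

    def delete(self, match_id):   # delete
        self._rows.pop(match_id, None)

repo = MatchRepository()
m = repo.save(Match(season="2022", sport="soccer"))  # season & sport mapping
assert repo.find(m.id) is m
```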
374,371
11,088,607,037
IssuesEvent
2019-12-14 12:29:25
googleapis/google-auth-library-nodejs
https://api.github.com/repos/googleapis/google-auth-library-nodejs
closed
Synthesis failed for google-auth-library-nodejs
autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate google-auth-library-nodejs. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py. .eslintignore .eslintrc.yml .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .github/PULL_REQUEST_TEMPLATE.md .github/release-please.yml .jsdoc.js .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/lint.cfg .kokoro/continuous/node10/samples-test.cfg .kokoro/continuous/node10/system-test.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node12/common.cfg .kokoro/continuous/node12/test.cfg .kokoro/continuous/node8/common.cfg .kokoro/continuous/node8/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node10/docs.cfg .kokoro/presubmit/node10/lint.cfg .kokoro/presubmit/node10/samples-test.cfg .kokoro/presubmit/node10/system-test.cfg .kokoro/presubmit/node10/test.cfg .kokoro/presubmit/node12/common.cfg .kokoro/presubmit/node12/test.cfg .kokoro/presubmit/node8/common.cfg .kokoro/presubmit/node8/test.cfg .kokoro/presubmit/windows/common.cfg .kokoro/presubmit/windows/test.cfg .kokoro/publish.sh .kokoro/release/docs.cfg .kokoro/release/docs.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .nycrc .prettierignore .prettierrc CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE README.md codecov.yaml renovate.json samples/README.md Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 95, in <module> main() File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 85, in main synthtool.metadata.add_new_files(start_time) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/metadata.py", line 69, in add_new_files mtime = os.path.getmtime(filepath) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/genericpath.py", line 55, in getmtime return os.stat(filename).st_mtime FileNotFoundError: [Errno 2] No such file or directory: '/tmpfs/src/git/autosynth/working_repo/test/fixtures/badlink' Synthesis failed ``` Google internal developers can see the full log [here](https://sponge/6d5b8126-dc85-49e7-9e1b-f3950d259326).
1.0
Synthesis failed for google-auth-library-nodejs - Hello! Autosynth couldn't regenerate google-auth-library-nodejs. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py. .eslintignore .eslintrc.yml .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .github/PULL_REQUEST_TEMPLATE.md .github/release-please.yml .jsdoc.js .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/lint.cfg .kokoro/continuous/node10/samples-test.cfg .kokoro/continuous/node10/system-test.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node12/common.cfg .kokoro/continuous/node12/test.cfg .kokoro/continuous/node8/common.cfg .kokoro/continuous/node8/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node10/docs.cfg .kokoro/presubmit/node10/lint.cfg .kokoro/presubmit/node10/samples-test.cfg .kokoro/presubmit/node10/system-test.cfg .kokoro/presubmit/node10/test.cfg .kokoro/presubmit/node12/common.cfg .kokoro/presubmit/node12/test.cfg .kokoro/presubmit/node8/common.cfg .kokoro/presubmit/node8/test.cfg .kokoro/presubmit/windows/common.cfg .kokoro/presubmit/windows/test.cfg .kokoro/publish.sh .kokoro/release/docs.cfg .kokoro/release/docs.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .nycrc .prettierignore .prettierrc CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE README.md codecov.yaml renovate.json samples/README.md Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 95, in <module> main() File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 85, in main synthtool.metadata.add_new_files(start_time) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/metadata.py", line 69, in add_new_files mtime = os.path.getmtime(filepath) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/genericpath.py", line 55, in getmtime return os.stat(filename).st_mtime FileNotFoundError: [Errno 2] No such file or directory: '/tmpfs/src/git/autosynth/working_repo/test/fixtures/badlink' Synthesis failed ``` Google internal developers can see the full log [here](https://sponge/6d5b8126-dc85-49e7-9e1b-f3950d259326).
non_process
synthesis failed for google auth library nodejs hello autosynth couldn t regenerate google auth library nodejs broken heart here s the output from running synth py cloning into working repo switched to branch autosynth running synthtool synthtool executing tmpfs src git autosynth working repo synth py eslintignore eslintrc yml github issue template bug report md github issue template feature request md github issue template support request md github pull request template md github release please yml jsdoc js kokoro common cfg kokoro continuous common cfg kokoro continuous docs cfg kokoro continuous lint cfg kokoro continuous samples test cfg kokoro continuous system test cfg kokoro continuous test cfg kokoro continuous common cfg kokoro continuous test cfg kokoro continuous common cfg kokoro continuous test cfg kokoro docs sh kokoro lint sh kokoro presubmit common cfg kokoro presubmit docs cfg kokoro presubmit lint cfg kokoro presubmit samples test cfg kokoro presubmit system test cfg kokoro presubmit test cfg kokoro presubmit common cfg kokoro presubmit test cfg kokoro presubmit common cfg kokoro presubmit test cfg kokoro presubmit windows common cfg kokoro presubmit windows test cfg kokoro publish sh kokoro release docs cfg kokoro release docs sh kokoro release publish cfg kokoro samples test sh kokoro system test sh kokoro test bat kokoro test sh kokoro trampoline sh nycrc prettierignore prettierrc code of conduct md contributing md license readme md codecov yaml renovate json samples readme md traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main synthtool metadata add new files start time file tmpfs src git autosynth env lib site packages synthtool metadata py line in add new files mtime os path getmtime filepath file home kbuilder pyenv versions lib genericpath py line in getmtime return os stat filename st mtime filenotfounderror no such file or directory tmpfs src git autosynth working repo test fixtures badlink synthesis failed google internal developers can see the full log
0
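The traceback in the record above ends with `os.path.getmtime` raising `FileNotFoundError` on `test/fixtures/badlink`, which is characteristic of a dangling symlink: `stat()` follows the link, while `lstat()` inspects the link itself. A minimal sketch of that failure mode (POSIX only; the file names are illustrative and not taken from synthtool):

```python
import os

# Create a symlink whose target does not exist (a "dangling" link).
os.symlink("does-not-exist", "badlink")

try:
    # getmtime() stats the *target*, so a dangling link raises.
    os.path.getmtime("badlink")
except FileNotFoundError as exc:
    print("getmtime failed:", exc)

# lstat() stats the link itself, so it still succeeds.
print("link mtime:", os.lstat("badlink").st_mtime)

os.unlink("badlink")  # clean up
```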
23,406
4,016,294,962
IssuesEvent
2016-05-15 14:15:22
MajkiIT/polish-ads-filter
https://api.github.com/repos/MajkiIT/polish-ads-filter
closed
kekuko.com
reguły gotowe/testowanie reklama
more kekuko block whole domain http://pobierz.kekuko.com/ The service costs PLN 4.92 gross for each SMS received under the subscription.
1.0
kekuko.com - more kekuko block whole domain http://pobierz.kekuko.com/ The service costs PLN 4.92 gross for each SMS received under the subscription.
non_process
kekuko com more kekuko block whole domain the service costs pln gross for each sms received under the subscription
0
1,760
2,603,970,839
IssuesEvent
2015-02-24 19:00:13
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
Shenyang viral warts
auto-migrated Priority-Medium Type-Defect
``` Shenyang viral warts 〓 Shenyang Military Region Political Department Hospital STD clinic 〓 TEL:024-31023308. Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases. Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital whose long and glorious history parallels the founding of New China, with fine equipment, authoritative techniques and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research and rehabilitation. Among the nation's first public grade-A military hospitals and first designated units for standardized medical care, and a teaching hospital of well-known universities such as the Fourth Military Medical University and Southeast University. Rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department, and twice awarded collective second-class merit. ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:30
1.0
Shenyang viral warts - ``` Shenyang viral warts 〓 Shenyang Military Region Political Department Hospital STD clinic 〓 TEL:024-31023308. Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases. Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital whose long and glorious history parallels the founding of New China, with fine equipment, authoritative techniques and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research and rehabilitation. Among the nation's first public grade-A military hospitals and first designated units for standardized medical care, and a teaching hospital of well-known universities such as the Fourth Military Medical University and Southeast University. Rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department, and twice awarded collective second-class merit. ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:30
non_process
shenyang viral warts shenyang viral warts shenyang military region political department hospital std clinic tel founded in with years devoted to the research and treatment of sexually transmitted diseases located at no erwei road shenhe district shenyang a hospital whose long and glorious history parallels the founding of new china with fine equipment authoritative techniques and a gathering of experts a comprehensive hospital integrating prevention health care medical treatment scientific research and rehabilitation among the nation s first public grade a military hospitals and first designated units for standardized medical care and a teaching hospital of well known universities such as the fourth military medical university and southeast university rated an advanced unit for health work by the health department of the pla air force logistics department and twice awarded collective second class merit original issue reported on code google com by gmail com on jun at
0
29,817
4,537,478,976
IssuesEvent
2016-09-09 00:34:41
OwlTechnology/CommandPalette.js
https://api.github.com/repos/OwlTechnology/CommandPalette.js
closed
Build Occasionally Fails due to Phantom.js Bug
bug Unit Tests
The build occasionally fails due to a bug in the Phantom.js system when it tries to run the line ``` eval node install.js ``` --- Restarting the build usually fixes this error. Needs more research to determine cause and whether it can be mitigated.
1.0
Build Occasionally Fails due to Phantom.js Bug - The build occasionally fails due to a bug in the Phantom.js system when it tries to run the line ``` eval node install.js ``` --- Restarting the build usually fixes this error. Needs more research to determine cause and whether it can be mitigated.
non_process
build occasionally fails due to phantom js bug the build occasionally fails due to a bug in the phantom js system when it tries to run the line eval node install js restarting the build usually fixes this error needs more research to determine cause and whether it can be mitigated
0
15,676
19,847,602,184
IssuesEvent
2022-01-21 08:39:42
ooi-data/CE04OSPS-PC01B-4A-CTDPFA109-streamed-ctdpf_sbe43_sample
https://api.github.com/repos/ooi-data/CE04OSPS-PC01B-4A-CTDPFA109-streamed-ctdpf_sbe43_sample
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T08:39:41.992057. ## Details Flow name: `CE04OSPS-PC01B-4A-CTDPFA109-streamed-ctdpf_sbe43_sample` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data, dtype=dtype) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T08:39:41.992057. ## Details Flow name: `CE04OSPS-PC01B-4A-CTDPFA109-streamed-ctdpf_sbe43_sample` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data, dtype=dtype) 
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed ctdpf sample task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv 
conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
1
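The failure above surfaces out of `append_to_zarr`, deep in zarr's chunk indexer. The harvester's own helper is not shown in the record, but the operation it builds on is xarray's append-to-zarr path; here is a minimal sketch of that supported route, with purely illustrative store and variable names (requires `xarray` and `zarr` installed):

```python
import numpy as np
import xarray as xr

store = "example.zarr"  # illustrative local store path

# Write an initial dataset, then append along the record dimension.
base = xr.Dataset(
    {"ctd": ("time", np.arange(3.0))},
    coords={"time": np.arange(3)},
)
base.to_zarr(store, mode="w")

new = xr.Dataset(
    {"ctd": ("time", np.arange(2.0))},
    coords={"time": np.arange(3, 5)},
)
new.to_zarr(store, mode="a", append_dim="time")

print(xr.open_zarr(store).sizes)  # -> {'time': 5}
```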
7,790
10,932,956,595
IssuesEvent
2019-11-23 21:34:36
MetaMask/metamask-extension
https://api.github.com/repos/MetaMask/metamask-extension
closed
Set up Continuous Delivery System
L09-process P2-sooner T01-enhancement
This thread will be for exploring strategies for continuous deployment of MetaMask. Attributes I'd like a solution to have: - As secure as possible - Requires test passing - Requires code review (maybe multiple users, maybe weighted by their previous contribution value) - Maybe authenticated via the blockchain itself. (Boardroom? Backfeed?) Right now Chrome's most secure extension deployment strategy involves an authorized private key, which raises the question of where this private key is, that it could enforce all of our other rules? Maybe this would be a good job for [Microsoft Cryptlets](https://azure.microsoft.com/en-us/blog/bletchley-blockchain/)? I also have heard good things about [GitLab's Continuous Deployment features](https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/), with Parity adopting them for their multi-platform builds.
1.0
Set up Continuous Delivery System - This thread will be for exploring strategies for continuous deployment of MetaMask. Attributes I'd like a solution to have: - As secure as possible - Requires test passing - Requires code review (maybe multiple users, maybe weighted by their previous contribution value) - Maybe authenticated via the blockchain itself. (Boardroom? Backfeed?) Right now Chrome's most secure extension deployment strategy involves an authorized private key, which raises the question of where this private key is, that it could enforce all of our other rules? Maybe this would be a good job for [Microsoft Cryptlets](https://azure.microsoft.com/en-us/blog/bletchley-blockchain/)? I also have heard good things about [GitLab's Continuous Deployment features](https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/), with Parity adopting them for their multi-platform builds.
process
set up continuous delivery system this thread will be for exploring strategies for continuous deployment of metamask attributes i d like a solution to have as secure as possible requires test passing requires code review maybe multiple users maybe weighted by their previous contribution value maybe authenticated via the blockchain itself boardroom backfeed right now chrome s most secure extension deployment strategy involves an authorized private key which raises the question of where this private key is that it could enforce all of our other rules maybe this would be a good job for i also have heard good things about with parity adopting them for their multi platform builds
1
14,179
17,089,600,197
IssuesEvent
2021-07-08 15:43:28
Vaibhavpratapsingh22/website
https://api.github.com/repos/Vaibhavpratapsingh22/website
opened
One Page web design 🛠️
enhancement implementation under-process
### Building a basic one-page layout that must include: - Navbar - Header Image - 2 Sections - Footer
1.0
One Page web design 🛠️ - ### Building a basic one-page layout that must include: - Navbar - Header Image - 2 Sections - Footer
process
one page web design 🛠️ building a basic one page layout that must include navbar header image sections footer
1
14,209
17,106,922,681
IssuesEvent
2021-07-09 19:23:13
googleapis/python-bigquery
https://api.github.com/repos/googleapis/python-bigquery
closed
use test-utils prefixer in system tests
api: bigquery type: process
ERROR: type should be string, got "https://source.cloud.google.com/results/invocations/31b4c7f9-e7d2-4e7a-8bdf-2dc79be1a586/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fpresubmit%2Fprerelease-deps-3.8/log\r\n\r\n```\r\ntests/system/test_client.py ...........................................F [ 52%]\r\n....................... [ 79%]\r\ntests/system/test_magics.py . [ 80%]\r\ntests/system/test_pandas.py .............. [ 97%]\r\ntests/system/test_structs.py .. [100%]\r\n\r\n=================================== FAILURES ===================================\r\n___________ TestBigQuery.test_load_table_from_json_schema_autodetect ___________\r\n\r\nself = <tests.system.test_client.TestBigQuery testMethod=test_load_table_from_json_schema_autodetect>\r\n\r\n def setUp(self):\r\n Config.DATASET = _make_dataset_id(\"bq_system_tests\")\r\n> dataset = Config.CLIENT.create_dataset(Config.DATASET)\r\n\r\ntests/system/test_client.py:168:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ngoogle/cloud/bigquery/client.py:601: in create_dataset\r\n api_response = self._call_api(\r\ngoogle/cloud/bigquery/client.py:741: in _call_api\r\n return call()\r\n.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:285: in retry_wrapped_func\r\n return retry_target(\r\n.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:188: in retry_target\r\n return target()\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <google.cloud.bigquery._http.Connection object at 0x7f8a928e5af0>\r\nmethod = 'POST', path = '/projects/precise-truck-742/datasets'\r\nquery_params = None\r\ndata = '{\"datasetReference\": {\"projectId\": \"precise-truck-742\", \"datasetId\": \"python_bigquery_tests_system_bq_system_tests_1625850117931\"}, \"labels\": {}}'\r\ncontent_type = 'application/json', headers = None, api_base_url = None\r\napi_version = None, expect_json = True, _target_object = None, timeout = None\r\n\r\n def api_request(\r\n self,\r\n method,\r\n path,\r\n query_params=None,\r\n data=None,\r\n content_type=None,\r\n headers=None,\r\n api_base_url=None,\r\n api_version=None,\r\n expect_json=True,\r\n _target_object=None,\r\n timeout=_DEFAULT_TIMEOUT,\r\n ):\r\n \"\"\"Make a request over the HTTP transport to the API.\r\n\r\n You shouldn't need to use this method, but if you plan to\r\n interact with the API using these primitives, this is the\r\n correct one to use.\r\n\r\n :type method: str\r\n :param method: The HTTP method name (ie, ``GET``, ``POST``, etc).\r\n Required.\r\n\r\n :type path: str\r\n :param path: The path to the resource (ie, ``'/b/bucket-name'``).\r\n Required.\r\n\r\n :type query_params: dict or list\r\n :param query_params: A dictionary of keys and values (or list of\r\n key-value pairs) to insert into the query\r\n string of the URL.\r\n\r\n :type data: str\r\n :param data: The data to send as the body of the request. Default is\r\n the empty string.\r\n\r\n :type content_type: str\r\n :param content_type: The proper MIME type of the data provided. Default\r\n is None.\r\n\r\n :type headers: dict\r\n :param headers: extra HTTP headers to be sent with the request.\r\n\r\n :type api_base_url: str\r\n :param api_base_url: The base URL for the API endpoint.\r\n Typically you won't have to provide this.\r\n Default is the standard API base URL.\r\n\r\n :type api_version: str\r\n :param api_version: The version of the API to call. 
Typically\r\n you shouldn't provide this and instead use\r\n the default for the library. Default is the\r\n latest API version supported by\r\n google-cloud-python.\r\n\r\n :type expect_json: bool\r\n :param expect_json: If True, this method will try to parse the\r\n response as JSON and raise an exception if\r\n that cannot be done. Default is True.\r\n\r\n :type _target_object: :class:`object`\r\n :param _target_object:\r\n (Optional) Protected argument to be used by library callers. This\r\n can allow custom behavior, for example, to defer an HTTP request\r\n and complete initialization of the object at a later time.\r\n\r\n :type timeout: float or tuple\r\n :param timeout: (optional) The amount of time, in seconds, to wait\r\n for the server response.\r\n\r\n Can also be passed as a tuple (connect_timeout, read_timeout).\r\n See :meth:`requests.Session.request` documentation for details.\r\n\r\n :raises ~google.cloud.exceptions.GoogleCloudError: if the response code\r\n is not 200 OK.\r\n :raises ValueError: if the response content type is not JSON.\r\n :rtype: dict or str\r\n :returns: The API response payload, either as a raw string or\r\n a dictionary if the response is valid JSON.\r\n \"\"\"\r\n url = self.build_api_url(\r\n path=path,\r\n query_params=query_params,\r\n api_base_url=api_base_url,\r\n api_version=api_version,\r\n )\r\n\r\n # Making the executive decision that any dictionary\r\n # data will be sent properly as JSON.\r\n if data and isinstance(data, dict):\r\n data = json.dumps(data)\r\n content_type = \"application/json\"\r\n\r\n response = self._make_request(\r\n method=method,\r\n url=url,\r\n data=data,\r\n content_type=content_type,\r\n headers=headers,\r\n target_object=_target_object,\r\n timeout=timeout,\r\n )\r\n\r\n if not 200 <= response.status_code < 300:\r\n> raise exceptions.from_http_response(response)\r\nE google.api_core.exceptions.Conflict: 409 POST https://bigquery.googleapis.com/bigquery/v2/projects/precise-truck-742/datasets?prettyPrint=false: Already Exists: Dataset precise-truck-742:python_bigquery_tests_system_bq_system_tests_1625850117931\r\n```\r\n\r\nLooks like we could use a little more robust naming. The \"prefixer\" adds a random component that should avoid these name conflicts."
1.0
use test-utils prefixer in system tests - https://source.cloud.google.com/results/invocations/31b4c7f9-e7d2-4e7a-8bdf-2dc79be1a586/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fpresubmit%2Fprerelease-deps-3.8/log ``` tests/system/test_client.py ...........................................F [ 52%] ....................... [ 79%] tests/system/test_magics.py . [ 80%] tests/system/test_pandas.py .............. [ 97%] tests/system/test_structs.py .. [100%] =================================== FAILURES =================================== ___________ TestBigQuery.test_load_table_from_json_schema_autodetect ___________ self = <tests.system.test_client.TestBigQuery testMethod=test_load_table_from_json_schema_autodetect> def setUp(self): Config.DATASET = _make_dataset_id("bq_system_tests") > dataset = Config.CLIENT.create_dataset(Config.DATASET) tests/system/test_client.py:168: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/bigquery/client.py:601: in create_dataset api_response = self._call_api( google/cloud/bigquery/client.py:741: in _call_api return call() .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:285: in retry_wrapped_func return retry_target( .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:188: in retry_target return target() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.bigquery._http.Connection object at 0x7f8a928e5af0> method = 'POST', path = '/projects/precise-truck-742/datasets' query_params = None data = '{"datasetReference": {"projectId": "precise-truck-742", "datasetId": "python_bigquery_tests_system_bq_system_tests_1625850117931"}, "labels": {}}' content_type = 'application/json', headers = None, api_base_url = None api_version = None, expect_json = True, _target_object = None, timeout = None def api_request( self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None, timeout=_DEFAULT_TIMEOUT, ): """Make a request over the HTTP transport to the API. You shouldn't need to use this method, but if you plan to interact with the API using these primitives, this is the correct one to use. :type method: str :param method: The HTTP method name (ie, ``GET``, ``POST``, etc). Required. :type path: str :param path: The path to the resource (ie, ``'/b/bucket-name'``). Required. :type query_params: dict or list :param query_params: A dictionary of keys and values (or list of key-value pairs) to insert into the query string of the URL. :type data: str :param data: The data to send as the body of the request. Default is the empty string. :type content_type: str :param content_type: The proper MIME type of the data provided. Default is None. :type headers: dict :param headers: extra HTTP headers to be sent with the request. :type api_base_url: str :param api_base_url: The base URL for the API endpoint. Typically you won't have to provide this. Default is the standard API base URL. :type api_version: str :param api_version: The version of the API to call. Typically you shouldn't provide this and instead use the default for the library. Default is the latest API version supported by google-cloud-python. :type expect_json: bool :param expect_json: If True, this method will try to parse the response as JSON and raise an exception if that cannot be done. Default is True. 
:type _target_object: :class:`object` :param _target_object: (Optional) Protected argument to be used by library callers. This can allow custom behavior, for example, to defer an HTTP request and complete initialization of the object at a later time. :type timeout: float or tuple :param timeout: (optional) The amount of time, in seconds, to wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout). See :meth:`requests.Session.request` documentation for details. :raises ~google.cloud.exceptions.GoogleCloudError: if the response code is not 200 OK. :raises ValueError: if the response content type is not JSON. :rtype: dict or str :returns: The API response payload, either as a raw string or a dictionary if the response is valid JSON. """ url = self.build_api_url( path=path, query_params=query_params, api_base_url=api_base_url, api_version=api_version, ) # Making the executive decision that any dictionary # data will be sent properly as JSON. if data and isinstance(data, dict): data = json.dumps(data) content_type = "application/json" response = self._make_request( method=method, url=url, data=data, content_type=content_type, headers=headers, target_object=_target_object, timeout=timeout, ) if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E google.api_core.exceptions.Conflict: 409 POST https://bigquery.googleapis.com/bigquery/v2/projects/precise-truck-742/datasets?prettyPrint=false: Already Exists: Dataset precise-truck-742:python_bigquery_tests_system_bq_system_tests_1625850117931 ``` Looks like we could use a little more robust naming. The "prefixer" adds a random component that should avoid these name conflicts.
process
use test utils prefixer in system tests tests system test client py f tests system test magics py tests system test pandas py tests system test structs py failures testbigquery test load table from json schema autodetect self def setup self config dataset make dataset id bq system tests dataset config client create dataset config dataset tests system test client py google cloud bigquery client py in create dataset api response self call api google cloud bigquery client py in call api return call nox prerelease deps lib site packages google api core retry py in retry wrapped func return retry target nox prerelease deps lib site packages google api core retry py in retry target return target self method post path projects precise truck datasets query params none data datasetreference projectid precise truck datasetid python bigquery tests system bq system tests labels content type application json headers none api base url none api version none expect json true target object none timeout none def api request self method path query params none data none content type none headers none api base url none api version none expect json true target object none timeout default timeout make a request over the http transport to the api you shouldn t need to use this method but if you plan to interact with the api using these primitives this is the correct one to use type method str param method the http method name ie get post etc required type path str param path the path to the resource ie b bucket name required type query params dict or list param query params a dictionary of keys and values or list of key value pairs to insert into the query string of the url type data str param data the data to send as the body of the request default is the empty string type content type str param content type the proper mime type of the data provided default is none type headers dict param headers extra http headers to be sent with the request type api base url str param api base url the base url for the api endpoint typically you won t have to provide this default is the standard api base url type api version str param api version the version of the api to call typically you shouldn t provide this and instead use the default for the library default is the latest api version supported by google cloud python type expect json bool param expect json if true this method will try to parse the response as json and raise an exception if that cannot be done default is true type target object class object param target object optional protected argument to be used by library callers this can allow custom behavior for example to defer an http request and complete initialization of the object at a later time type timeout float or tuple param timeout optional the amount of time in seconds to wait for the server response can also be passed as a tuple connect timeout read timeout see meth requests session request documentation for details raises google cloud exceptions googleclouderror if the response code is not ok raises valueerror if the response content type is not json rtype dict or str returns the api response payload either as a raw string or a dictionary if the response is valid json url self build api url path path query params query params api base url api base url api version api version making the executive decision that any dictionary data will be sent properly as json if data and isinstance data dict data json dumps data content type application json response self make request method method url url data data 
content type content type headers headers target object target object timeout timeout if not response status code raise exceptions from http response response e google api core exceptions conflict post already exists dataset precise truck python bigquery tests system bq system tests looks like we could use a little more robust naming the prefixer adds a random component that should avoid these name conflicts
1
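For reference, the fix proposed in the record above is to give every test dataset a name with a random component so that concurrent CI runs cannot collide on the same ID. This is a sketch of the idea only, not the actual `test-utils` `Prefixer` API:

```python
import datetime
import random
import string

def make_unique_dataset_id(prefix: str) -> str:
    """Build a dataset ID that stays unique across concurrent test runs."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
    # The random salt is what a bare timestamp (as in the failing setUp) lacks.
    salt = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    return f"{prefix}_{stamp}_{salt}"

# Two calls in the same second still differ, unlike a timestamp-only ID.
print(make_unique_dataset_id("bq_system_tests"))
print(make_unique_dataset_id("bq_system_tests"))
```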
22,707
3,792,374,114
IssuesEvent
2016-03-22 09:25:46
CaliOpen/Caliopen
https://api.github.com/repos/CaliOpen/Caliopen
opened
[messages list] Add the name of the contacts to identify the conversation
design
Add the name of the contacts to the right of the avatar: ![name of the contacts](https://cloud.githubusercontent.com/assets/6246015/13946923/e962da9e-f017-11e5-9903-77c8d2ae77bc.png) In case of multiple contacts discussion, display some names (number to define) and add a counter at the end of the line: ![multiple contacts discussion](https://cloud.githubusercontent.com/assets/6246015/13946988/432eb4da-f018-11e5-9c5e-f8e08fbf4776.png)
1.0
[messages list] Add the name of the contacts to identify the conversation - Add the name of the contacts to the right of the avatar: ![name of the contacts](https://cloud.githubusercontent.com/assets/6246015/13946923/e962da9e-f017-11e5-9903-77c8d2ae77bc.png) In case of multiple contacts discussion, display some names (number to define) and add a counter at the end of the line: ![multiple contacts discussion](https://cloud.githubusercontent.com/assets/6246015/13946988/432eb4da-f018-11e5-9c5e-f8e08fbf4776.png)
non_process
add the name of the contacts to identify the conversation add the name of the contacts to the right of the avatar in case of multiple contacts discussion display some names number to define and add a counter at the end of the line
0
14,290
17,264,569,381
IssuesEvent
2021-07-22 12:18:13
CATcher-org/CATcher
https://api.github.com/repos/CATcher-org/CATcher
closed
Add Pull Request Templates
aspect-Process difficulty.Easy
Discussion Link: [Discussion #718](https://github.com/CATcher-org/CATcher/discussions/718) We should consider using Pull Request Templates to help developers better describe what they are doing and write good commit messages such that future developers are able to understand the purpose of that specific squashed commit (of the PR). It should contain the following - A Summary of what the PR does - A Description of the changes - The proposed commit message upon merge of Pull Request @CATcher-org/2021-devs Do feel free to add or remove any additional details which should be mentioned in the templates! Alternatively, suggest other PR templates that can be adopted! 😄
1.0
Add Pull Request Templates - Discussion Link: [Discussion #718](https://github.com/CATcher-org/CATcher/discussions/718) We should consider using Pull Request Templates to help developers better describe what they are doing and write good commit messages such that future developers are able to understand the purpose of that specific squashed commit (of the PR). It should contain the following - A Summary of what the PR does - A Description of the changes - The proposed commit message upon merge of Pull Request @CATcher-org/2021-devs Do feel free to add or remove any additional details which should be mentioned in the templates! Alternatively, suggest other PR templates that can be adopted! 😄
process
add pull request templates discussion link we should consider using pull request templates to help developers better describe what they are doing and write good commit messages such that future developers are able to understand the purpose of that specific squashed commit of the pr it should contain the following a summary of what the pr does a description of the changes the proposed commit message upon merge of pull request catcher org devs do feel free to add or remove any additional details which should be mentioned in the templates alternatively suggest other pr templates that can be adopted 😄
1
7,590
10,702,834,501
IssuesEvent
2019-10-24 08:21:28
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Derivation of variables that need fx variables should be done with fx VARIABLES not fx files
enhancement preprocessor variable derivation workshop
#170 implements the handling of fx variables as regular variables. The derivation of derived variables that need fx data should now be done by treating the fx variables as any regular derivation variables eg: for `nbp_grid.py`: ``` class DerivedVariable(DerivedVariableBase): """Derivation of variable `nbp_grid`.""" # Required variables required = [{ 'short_name': 'nbp', 'fx_files': ['sftlf'], }] ``` should change to ``` class DerivedVariable(DerivedVariableBase): """Derivation of variable `nbp_grid`.""" # Required variables required = [{ 'short_name': 'nbp'}, {'short_name': 'sftlf'}] ``` This will ensure the correct CMOR checks and variable treatment are applied to the fx data. I will implement this after #170 is merged.
1.0
Derivation of variables that need fx variables should be done with fx VARIABLES not fx files - #170 implements the handling of fx variables as regular variables. The derivation of derived variables that need fx data should now be done by treating the fx variables as any regular derivation variables eg: for `nbp_grid.py`: ``` class DerivedVariable(DerivedVariableBase): """Derivation of variable `nbp_grid`.""" # Required variables required = [{ 'short_name': 'nbp', 'fx_files': ['sftlf'], }] ``` should change to ``` class DerivedVariable(DerivedVariableBase): """Derivation of variable `nbp_grid`.""" # Required variables required = [{ 'short_name': 'nbp'}, {'short_name': 'sftlf'}] ``` This will ensure the correct CMOR checks and variable treatment are applied to the fx data. I will implement this after #170 is merged.
process
derivation of variables that need fx variables should be done with fx variables not fx files implements the handling of fx variables as regular variables the derivation of derived variables that need fx data should now be done by treating the fx variables as any regular derivation variables eg for nbp grid py class derivedvariable derivedvariablebase derivation of variable nbp grid required variables required short name nbp fx files should change to class derivedvariable derivedvariablebase derivation of variable nbp grid required variables required short name nbp short name sftlf this will ensure the correct cmor checks and variable treatment are applied to the fx data i will implement this after is merged
1
2,018
4,838,642,669
IssuesEvent
2016-11-09 04:58:06
factor/factor
https://api.github.com/repos/factor/factor
closed
Respect process timeouts in process-reader
hacktoberfest process-launcher
This doesn't timeout in 1 second (it waits for the full 5 seconds and "succeeds"): ``` factor <process> { "sleep" "5" } >>command 1 seconds >>timeout utf8 [ contents ] with-process-reader ``` While this properly fails after a second: ``` factor <process> "sleep 5" >>command 2 seconds >>timeout run-process ``` I haven't looked yet, but it might be because process-reader uses `run-detached`?
1.0
Respect process timeouts in process-reader - This doesn't timeout in 1 second (it waits for the full 5 seconds and "succeeds"): ``` factor <process> { "sleep" "5" } >>command 1 seconds >>timeout utf8 [ contents ] with-process-reader ``` While this properly fails after a second: ``` factor <process> "sleep 5" >>command 2 seconds >>timeout run-process ``` I haven't looked yet, but it might be because process-reader uses `run-detached`?
process
respect process timeouts in process reader this doesn t timeout in second it waits for the full seconds and succeeds factor sleep command seconds timeout with process reader while this properly fails after a second factor sleep command seconds timeout run process i haven t looked yet but it might be because process reader uses run detached
1
12,760
15,115,739,650
IssuesEvent
2021-02-09 05:14:47
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Use GDAL settings in "Settings > options" for the Processing tools
Bug Processing
<!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> ### **Describe the bug** <!-- A clear and concise description of what the bug is. --> GDAL algs dialogs don't load the default Create Options defined in GDAL settings **GDAL Settings:** ![image](https://user-images.githubusercontent.com/39594821/99945493-00a26b80-2d75-11eb-82c1-e17d8783d953.png) **"Save raster layer as" dialog** ![image](https://user-images.githubusercontent.com/39594821/99945598-30517380-2d75-11eb-92d4-5a78ce04a970.png) Everything's fine. **"Clip raster by mask layer" dialog** ![image](https://user-images.githubusercontent.com/39594821/99945767-82929480-2d75-11eb-87de-b68cebbbe8de.png) Argh, empty create options! Not fine! I took this alg as an example, but the same issue occurs on every GDAL algs dialog. ### **QGIS and OS versions** Tested on a fresh new profile Win 10 OSGeo4W Qgis 3.16.1 <!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
1.0
Use GDAL settings in "Settings > options" for the Processing tools - <!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> ### **Describe the bug** <!-- A clear and concise description of what the bug is. --> GDAL algs dialogs don't load the default Create Options defined in GDAL settings **GDAL Settings:** ![image](https://user-images.githubusercontent.com/39594821/99945493-00a26b80-2d75-11eb-82c1-e17d8783d953.png) **"Save raster layer as" dialog** ![image](https://user-images.githubusercontent.com/39594821/99945598-30517380-2d75-11eb-92d4-5a78ce04a970.png) Everything's fine. **"Clip raster by mask layer" dialog** ![image](https://user-images.githubusercontent.com/39594821/99945767-82929480-2d75-11eb-87de-b68cebbbe8de.png) Argh, empty create options! Not fine! I took this alg as an example, but the same issue occurs on every GDAL algs dialog. ### **QGIS and OS versions** Tested on a fresh new profile Win 10 OSGeo4W Qgis 3.16.1 <!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
process
use gdal settings in settings options for the processing tools bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug gdal algs dialogs don t load the default create options defined in gdal settings gdal settings save raster layer as dialog everything s fine clip raster by mask layer dialog argh empty create options not fine i took this alg as an example but the same issue occurs on every gdal algs dialog qgis and os versions tested on a fresh new profile win qgis about click in the table ctrl a and then ctrl c finally paste here
1
14,844
18,239,342,016
IssuesEvent
2021-10-01 10:57:10
googleapis/python-bigquery
https://api.github.com/repos/googleapis/python-bigquery
closed
[Breaking Change] Remove `google.cloud.bigquery_v2` directory and related code generation
api: bigquery type: process semver: major
The `google.cloud.bigquery_v2` modules are frequently out-of-date. This primarily affects the usability of BigQuery ML. Since BigQuery is a REST API, protobuf changes must be synced manually in the internal code repo before they can be published to https://github.com/googleapis/googleapis In the years since we first introduced `google.cloud.bigquery_v2`, we've seen that BigQuery ML adds model stats and model types much more frequently than the protobuf changes are actually published. We've worked around this by: * @shollyman and I (mostly @shollyman lately) republishing the protos when a customer has a problem, such as in https://github.com/googleapis/python-bigquery/issues/293 * Avoiding the worst exceptions when an enum or property isn't present, such as in https://github.com/googleapis/python-bigquery/issues/334 **Solution** * [x] Remove `google.cloud.bigquery_v2` modules * [x] Update owlbot config to stop generating "client" * [x] In the BigQuery ML classes, wherever we were returning a protobuf object, return the JSON-parsed API response instead. * [x] Complex types become dictionaries. * [x] Enums become strings. * [x] In the routines logic, create manual wrappers and enums where needed. Supersedes https://github.com/googleapis/python-bigquery/issues/319
1.0
[Breaking Change] Remove `google.cloud.bigquery_v2` directory and related code generation - The `google.cloud.bigquery_v2` modules are frequently out-of-date. This primarily affects the usability of BigQuery ML. Since BigQuery is a REST API, protobuf changes must be synced manually in the internal code repo before they can be published to https://github.com/googleapis/googleapis In the years since we first introduced `google.cloud.bigquery_v2`, we've seen that BigQuery ML adds model stats and model types much more frequently than the protobuf changes are actually published. We've worked around this by: * @shollyman and I (mostly @shollyman lately) republishing the protos when a customer has a problem, such as in https://github.com/googleapis/python-bigquery/issues/293 * Avoiding the worst exceptions when an enum or property isn't present, such as in https://github.com/googleapis/python-bigquery/issues/334 **Solution** * [x] Remove `google.cloud.bigquery_v2` modules * [x] Update owlbot config to stop generating "client" * [x] In the BigQuery ML classes, wherever we were returning a protobuf object, return the JSON-parsed API response instead. * [x] Complex types become dictionaries. * [x] Enums become strings. * [x] In the routines logic, create manual wrappers and enums where needed. Supersedes https://github.com/googleapis/python-bigquery/issues/319
process
remove google cloud bigquery directory and related code generation the google cloud bigquery modules are frequently out of date this primarily affects the usability of bigquery ml since bigquery is a rest api protobuf changes must be synced manually in the internal code repo before they can be published to in the years since we first introduced google cloud bigquery we ve seen that bigquery ml adds model stats and model types much more frequently than the protobuf changes are actually published we ve worked around this by shollyman and i mostly shollyman lately republishing the protos when a customer has a problem such as in avoiding the worst exceptions when an enum or property isn t present such as in solution remove google cloud bigquery modules update owlbot config to stop generating client in the bigquery ml classes wherever we were returning a protobuf object return the json parsed api response instead complex types become dictionaries enums become strings in the routines logic create manual wrappers and enums where needed supersedes
1
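The record above says that with the protos gone, the BigQuery ML surfaces keep the JSON-parsed API response, so complex types become dictionaries and enums become strings. A hedged sketch of what that could look like for one property; the class and accessor are illustrative stand-ins, not the library's actual implementation, though `modelType` is the field name in the REST Model resource:

```python
class Model:
    """Illustrative wrapper that stores the raw JSON API representation."""

    def __init__(self, resource: dict):
        self._properties = resource  # dict straight from the REST response

    @property
    def model_type(self) -> str:
        # Formerly a bigquery_v2 enum; now just the string from the API.
        return self._properties.get("modelType", "MODEL_TYPE_UNSPECIFIED")


model = Model({"modelType": "LINEAR_REGRESSION"})
print(model.model_type)  # -> LINEAR_REGRESSION
```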
17,835
23,775,754,139
IssuesEvent
2022-09-01 20:45:31
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[processing] Add generic option to show feature count for vector outputs (Request in QGIS)
Easy fix Processing Project/Configuration 3.28
### Request for documentation From pull request QGIS/qgis#49849 Author: @gacarrillor QGIS version: 3.28 **[processing] Add generic option to show feature count for vector outputs** ### PR Description: After some discussion in #39522, there is some agreement that having a `Show feature count for output vector layers` option is a good idea, since it gives users a first glimpse of what they're getting from a Processing algorithm. Here I'm implementing that Processing option and setting it to `False` by default, since in some circumstances it might lead to unwanted long waiting times (which, by the way, could be mentioned in a follow-up PR for documentation). Fix #39522 This is what it looks like: ![image](https://user-images.githubusercontent.com/652785/185598057-bb220235-290e-482f-92c4-51271239b5eb.png) ### Commits tagged with [need-docs] or [FEATURE]
1.0
[processing] Add generic option to show feature count for vector outputs (Request in QGIS) - ### Request for documentation From pull request QGIS/qgis#49849 Author: @gacarrillor QGIS version: 3.28 **[processing] Add generic option to show feature count for vector outputs** ### PR Description: After some discussion in #39522, there is some agreement that having a `Show feature count for output vector layers` option is a good idea, since it gives users a first glimpse of what they're getting from a Processing algorithm. Here I'm implementing that Processing option and setting it to `False` by default, since in some circumstances it might lead to unwanted long waiting times (which, by the way, could be mentioned in a follow-up PR for documentation). Fix #39522 This is what it looks like: ![image](https://user-images.githubusercontent.com/652785/185598057-bb220235-290e-482f-92c4-51271239b5eb.png) ### Commits tagged with [need-docs] or [FEATURE]
process
add generic option to show feature count for vector outputs request in qgis request for documentation from pull request qgis qgis author gacarrillor qgis version add generic option to show feature count for vector outputs pr description after some discussion in there is some agreement that having a show feature count for output vector layers option is a good idea since it gives users a first glimpse of what they re getting from a processing algorithm here i m implementing that processing option and setting it to false by default since in some circumstances it might lead to unwanted long waiting times which by the way could be mentioned in a follow up pr for documentation fix this is what it looks like commits tagged with or
1
191,031
6,824,938,440
IssuesEvent
2017-11-08 08:44:02
adonisjs/adonis-lucid
https://api.github.com/repos/adonisjs/adonis-lucid
closed
Simplify count() behavior
Priority: Medium Status: Accepted Type: Enhancement
I wonder if the default behavior of `count()` can be simplified. Right now, we need to do ```javascript let count = (await User.query().where('is_active', '=', true).count('* as total'))[0].total ``` What if `count(expression = null)` means to use `* as total` and return `[0].total` from the call? ```javascript let count = await User.query().where('is_active', '=', true).count() ``` Seems like the vast majority of cases would use this simple form.
1.0
Simplify count() behavior - I wonder if the default behavior of `count()` can be simplified. Right now, we need to do ```javascript let count = (await User.query().where('is_active', '=', true).count('* as total'))[0].total ``` What if `count(expression = null)` means to use `* as total` and return `[0].total` from the call? ```javascript let count = await User.query().where('is_active', '=', true).count() ``` Seems like the vast majority of cases would use this simple form.
non_process
simplify count behavior i wonder if the default behavior of count can be simplified right now we need to do javascript let count await user query where is active true count as total total what if count expression null means to use as total and return total from the call javascript let count await user query where is active true count seems like the vast majority of cases would use this simple form
0
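A minimal sketch of how the requested `count()` default could look, under the assumption of a hypothetical query-builder class — `QueryBuilder` and `runAggregate` are invented stand-ins, and the real Adonis/Knex internals differ. The idea is simply that a bare `count()` behaves as `count('* as total')` and unwraps `[0].total`, while an explicit expression keeps the existing raw-rows behavior.

```typescript
// Sketch of the proposed count() default (assumed names, not Adonis internals).
class QueryBuilder {
  // Stand-in for whatever actually runs `SELECT COUNT(...) ...` and
  // returns the driver's rows.
  private async runAggregate(expression: string): Promise<Array<Record<string, number>>> {
    return [{ total: 42 }]; // placeholder result
  }

  async count(expression?: string): Promise<number | Array<Record<string, number>>> {
    if (expression === undefined) {
      // Simple form: count everything and hand back the bare number.
      const rows = await this.runAggregate("* as total");
      return rows[0].total;
    }
    // Explicit expression: preserve today's behavior.
    return this.runAggregate(expression);
  }
}

// Usage matching the issue's simple form:
//   const total = await query.count(); // 42, no [0].total unwrapping needed
```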
3,081
6,097,001,417
IssuesEvent
2017-06-20 01:14:42
duckinator/todo
https://api.github.com/repos/duckinator/todo
opened
Document RubyGems' high-level system architecture
Project: RubyGems Work type: process/meta
Document RubyGems' high-level system architecture. This should be exciting.
1.0
Document RubyGems' high-level system architecture - Document RubyGems' high-level system architecture. This should be exciting.
process
document rubygems high level system architecture document rubygems high level system architecture this should be exciting
1
1,623
4,237,287,774
IssuesEvent
2016-07-05 21:16:56
pelias/api
https://api.github.com/repos/pelias/api
closed
Label generation does not handle duplicate hierarchy components
processed
Any time the components of the label are duplicates, the label schema we use will break. In the example, "Luxembourg, Luxembourg" (the city in the country), the generated label is "Luxembourg". This happens because the list of label parts is de-duped, which can result in inadvertently leaving out pertinent information. For example: http://pelias.github.io/compare/#/v1/search%3Ftext=luxembourg,%20luxembourg There are all sorts of bad labels in the results. 1. "Luxembourg, Canton de Luxembourg" - this is a city/county label where the label schema calls for country at the end, so it should be "Luxembourg, Canton de Luxembourg, Luxembourg" 2. "Luxembourg" - this is a city label, but the label schema calls for "Luxembourg, Luxembourg" 3. "Grand Duchy of Luxembourg, Luxembourg" - this is a name/locality label, whereas the label schema says it should be "Grand Duchy of Luxembourg, Luxembourg, Luxembourg" It appears that the schema doesn't take into account cases where a lower administrative area or venue name can have the same name as the country, or where a venue name has the same name as any other administrative area. @riordan directed me to this the other day: https://i18napis.appspot.com/address It contains the addressing schemes for most (if not all) of the countries. We should probably go with that.
1.0
Label generation does not handle duplicate hierarchy components - Any time the components of the label are duplicates, the label schema we use will break. In the example, "Luxembourg, Luxembourg" (the city in the country), the generated label is "Luxembourg". This happens because the list of label parts is de-duped, which can result in inadvertently leaving out pertinent information. For example: http://pelias.github.io/compare/#/v1/search%3Ftext=luxembourg,%20luxembourg There are all sorts of bad labels in the results. 1. "Luxembourg, Canton de Luxembourg" - this is a city/county label where the label schema calls for country at the end, so it should be "Luxembourg, Canton de Luxembourg, Luxembourg" 2. "Luxembourg" - this is a city label, but the label schema calls for "Luxembourg, Luxembourg" 3. "Grand Duchy of Luxembourg, Luxembourg" - this is a name/locality label, whereas the label schema says it should be "Grand Duchy of Luxembourg, Luxembourg, Luxembourg" It appears that the schema doesn't take into account cases where a lower administrative area or venue name can have the same name as the country, or where a venue name has the same name as any other administrative area. @riordan directed me to this the other day: https://i18napis.appspot.com/address It contains the addressing schemes for most (if not all) of the countries. We should probably go with that.
process
label generation does not handle duplicate hierarchy components any time the components of the label are duplicates the label schema we use will break in the example luxembourg luxembourg the city in the country the generated label is luxembourg this happens because the list of label parts is de duped which can result in inadvertently leaving out pertinent information for example there are all sorts of bad labels in the results luxembourg canton de luxembourg this is a city county label where the label schema calls for country at the end so it should be luxembourg canton de luxembourg luxembourg luxembourg this is a city label but the label schema calls for luxembourg luxembourg grand duchy of luxembourg luxembourg this is a name locality label whereas the label schema says it should be grand duchy of luxembourg luxembourg luxembourg it appears that the schema doesn t take into account cases where a lower administrative area or venue name can have the same name as the country or where a venue name has the same name as any other administrative area riordan directed me to this the other day it contains the addressing schemes for most if not all of the countries we should probably go with that
1
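The core bug described in the record above is global de-duplication of label parts. The sketch below reproduces the failure and shows one possible fix — de-duplicating only adjacent parts — so a legitimate repeat like the city/country pair survives. This is illustrative TypeScript, not Pelias's actual label generator.

```typescript
// Global de-duplication drops legitimate repeats ("Luxembourg, Luxembourg"),
// while adjacent-only de-duplication keeps them and still collapses
// genuine back-to-back repetition.

function naiveLabel(parts: string[]): string {
  return [...new Set(parts)].join(", "); // global dedupe: loses information
}

function adjacentDedupeLabel(parts: string[]): string {
  return parts.filter((part, i) => i === 0 || part !== parts[i - 1]).join(", ");
}

const parts = ["Luxembourg", "Canton de Luxembourg", "Luxembourg"];
console.log(naiveLabel(parts));          // "Luxembourg, Canton de Luxembourg" (bad label #1)
console.log(adjacentDedupeLabel(parts)); // "Luxembourg, Canton de Luxembourg, Luxembourg"
```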
116,277
24,892,540,288
IssuesEvent
2022-10-28 13:16:09
llvm/llvm-project
https://api.github.com/repos/llvm/llvm-project
closed
A/F: `Elt->getBitWidth( ) == EltVT.getSizeInBits() && "APInt size does not match type size!"'
llvm:codegen
Two of our internal tests recently hit an assertion failure when compiling which I bisected back to commit 54eeadcf442df91aed0fb7244fe7885cdf1b1f3d. I was able to reduce the failing code to the following c++ sample: ```c++ template <typename a, typename> a b(a c, int) { return c; } typedef char d; typedef d __attribute__((ext_vector_type(2))) e; typedef char __attribute__((ext_vector_type(2))) f; #define g(h, i) (b<e, d>(h, 2) % 2) e j; void k() { e l{}, m = __builtin_shufflevector(l, j, 3, 1), n = m.yx, o g(n, ); volatile f p(o); } ``` To reproduce the assertion failure, compile the above code with optimizations enabled (-O2): ``` $ ~/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang -c -O2 test.cpp clang: /home/dyung/src/upstream/llvm_clean_git/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp:1593: llvm::SDValue llvm::SelectionDAG::getConstant(const llvm::ConstantInt&, const llvm::SDLoc&, llvm::EVT, bool, bool): Assertion `Elt->getBitWidth( ) == EltVT.getSizeInBits() && "APInt size does not match type size!"' failed. PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace, preprocessed source, and associated run script. Stack dump: 0. Program arguments: /home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang -c -O2 test.cpp 1. <eof> parser at end of file 2. Code generation 3. Running pass 'Function Pass Manager' on module 'test.cpp'. 4. Running pass 'X86 DAG->DAG Instruction Selection' on function '@_Z1kv' #0 0x000056036638b464 PrintStackTraceSignalHandler(void*) Signals.cpp:0:0 #1 0x00005603663891fc llvm::sys::CleanupOnSignal(unsigned long) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x3eff1fc) #2 0x00005603662c4a78 CrashRecoverySignalHandler(int) CrashRecoveryContext.cpp:0:0 #3 0x00007f50242b5420 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x14420) #4 0x00007f5023d8200b raise /build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:51:1 #5 0x00007f5023d61859 abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:81:7 #6 0x00007f5023d61729 get_sysdep_segment_value /build/glibc-SzIz7B/glibc-2.31/intl/loadmsgcat.c:509:8 #7 0x00007f5023d61729 _nl_load_domain /build/glibc-SzIz7B/glibc-2.31/intl/loadmsgcat.c:970:34 #8 0x00007f5023d72fd6 (/lib/x86_64-linux-gnu/libc.so.6+0x33fd6) #9 0x00005603675289a3 llvm::SelectionDAG::getConstant(llvm::ConstantInt const&, llvm::SDLoc const&, llvm::EVT, bool, bool) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x509e9a3) #10 0x00005603673bc088 (anonymous namespace)::DAGCombiner::visitSCALAR_TO_VECTOR(llvm::SDNode*) DAGCombiner.cpp:0:0 #11 0x000056036743459e (anonymous namespace)::DAGCombiner::visit(llvm::SDNode*) DAGCombiner.cpp:0:0 #12 0x0000560367436c35 (anonymous namespace)::DAGCombiner::combine(llvm::SDNode*) DAGCombiner.cpp:0:0 #13 0x00005603674383a0 llvm::SelectionDAG::Combine(llvm::CombineLevel, llvm::AAResults*, llvm::CodeGenOpt::Level) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4fae3a0) #14 0x0000560367557edd llvm::SelectionDAGISel::CodeGenAndEmitDAG() (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x50cdedd) #15 0x000056036755ba10 llvm::SelectionDAGISel::SelectAllBasicBlocks(llvm::Function const&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x50d1a10) #16 0x000056036755dbbd llvm::SelectionDAGISel::runOnMachineFunction(llvm::MachineFunction&) (.part.0) SelectionDAGISel.cpp:0:0 #17 0x0000560364dbafd0 (anonymous namespace)::X86DAGToDAGISel::runOnMachineFunction(llvm::MachineFunction&) X86ISelDAGToDAG.cpp:0:0 #18 0x000056036558acbe llvm::MachineFunctionPass::runOnFunction(llvm::Function&) (.part.0) MachineFunctionPass.cpp:0:0 #19 0x0000560365ae42d5 llvm::FPPassManager::runOnFunction(llvm::Function&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x365a2d5) #20 0x0000560365ae4519 llvm::FPPassManager::runOnModule(llvm::Module&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x365a519) #21 0x0000560365ae4d62 llvm::legacy::PassManagerImpl::run(llvm::Module&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x365ad62) #22 0x0000560366761184 clang::EmitBackendOutput(clang::DiagnosticsEngine&, clang::HeaderSearchOptions const&, clang::CodeGenOptions const&, clang::TargetOptions const&, clang::LangOptions const&, llvm::StringRef, llvm::Module*, clang::BackendAction, std::unique_ptr<llvm::raw_pwrite_stream, std::default_delete<llvm::raw_pwrite_stream>>) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x42d7184) #23 0x00005603676b824b clang::BackendConsumer::HandleTranslationUnit(clang::ASTContext&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x522e24b) #24 0x00005603685caa55 clang::ParseAST(clang::Sema&, bool, bool) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x6140a55) #25 0x00005603676b6c78 clang::CodeGenAction::ExecuteAction() (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x522cc78) #26 0x0000560366f485a9 clang::FrontendAction::Execute() (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4abe5a9) #27 0x0000560366ecf4be clang::CompilerInstance::ExecuteAction(clang::FrontendAction&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4a454be) #28 0x000056036702e113 clang::ExecuteCompilerInvocation(clang::CompilerInstance*) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4ba4113) #29 0x000056036384ca54 cc1_main(llvm::ArrayRef<char const*>, char const*, void*) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x13c2a54) #30 0x0000560363845b68 ExecuteCC1Tool(llvm::SmallVectorImpl<char const*>&) driver.cpp:0:0 #31 0x0000560366d3c229 void llvm::function_ref<void ()>::callback_fn<clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef>>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>*, bool*) const::'lambda'()>(long) Job.cpp:0:0 #32 0x00005603662c521a llvm::CrashRecoveryContext::RunSafely(llvm::function_ref<void ()>) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x3e3b21a) #33 0x0000560366d3ca7f clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef>>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>*, bool*) const (.part.0) Job.cpp:0:0 #34 0x0000560366d05f49 clang::driver::Compilation::ExecuteCommand(clang::driver::Command const&, clang::driver::Command const*&, bool) const (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x487bf49) #35 0x0000560366d069cd clang::driver::Compilation::ExecuteJobs(clang::driver::JobList const&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*>>&, bool) const (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x487c9cd) #36 0x0000560366d0ffec clang::driver::Driver::ExecuteCompilation(clang::driver::Compilation&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*>>&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4885fec) #37 0x000056036384ae93 clang_main(int, char**) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x13c0e93) #38 0x00007f5023d63083 __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:342:3 #39 0x000056036384576e _start (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x13bb76e) clang-16: error: clang frontend command failed with exit code 134 (use -v to see invocation) clang version 16.0.0 (https://github.com/llvm/llvm-project.git 54eeadcf442df91aed0fb7244fe7885cdf1b1f3d) Target: x86_64-unknown-linux-gnu ```
1.0
A/F: `Elt->getBitWidth( ) == EltVT.getSizeInBits() && "APInt size does not match type size!"' - Two of our internal tests recently hit an assertion failure when compiling which I bisected back to commit 54eeadcf442df91aed0fb7244fe7885cdf1b1f3d. I was able to reduce the failing code to the following c++ sample: ```c++ template <typename a, typename> a b(a c, int) { return c; } typedef char d; typedef d __attribute__((ext_vector_type(2))) e; typedef char __attribute__((ext_vector_type(2))) f; #define g(h, i) (b<e, d>(h, 2) % 2) e j; void k() { e l{}, m = __builtin_shufflevector(l, j, 3, 1), n = m.yx, o g(n, ); volatile f p(o); } ``` To reproduce the assertion failure, compile the above code with optimizations enabled (-O2): ``` $ ~/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang -c -O2 test.cpp clang: /home/dyung/src/upstream/llvm_clean_git/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp:1593: llvm::SDValue llvm::SelectionDAG::getConstant(const llvm::ConstantInt&, const llvm::SDLoc&, llvm::EVT, bool, bool): Assertion `Elt->getBitWidth( ) == EltVT.getSizeInBits() && "APInt size does not match type size!"' failed. PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace, preprocessed source, and associated run script. Stack dump: 0. Program arguments: /home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang -c -O2 test.cpp 1. <eof> parser at end of file 2. Code generation 3. Running pass 'Function Pass Manager' on module 'test.cpp'. 4. Running pass 'X86 DAG->DAG Instruction Selection' on function '@_Z1kv' #0 0x000056036638b464 PrintStackTraceSignalHandler(void*) Signals.cpp:0:0 #1 0x00005603663891fc llvm::sys::CleanupOnSignal(unsigned long) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x3eff1fc) #2 0x00005603662c4a78 CrashRecoverySignalHandler(int) CrashRecoveryContext.cpp:0:0 #3 0x00007f50242b5420 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x14420) #4 0x00007f5023d8200b raise /build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:51:1 #5 0x00007f5023d61859 abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:81:7 #6 0x00007f5023d61729 get_sysdep_segment_value /build/glibc-SzIz7B/glibc-2.31/intl/loadmsgcat.c:509:8 #7 0x00007f5023d61729 _nl_load_domain /build/glibc-SzIz7B/glibc-2.31/intl/loadmsgcat.c:970:34 #8 0x00007f5023d72fd6 (/lib/x86_64-linux-gnu/libc.so.6+0x33fd6) #9 0x00005603675289a3 llvm::SelectionDAG::getConstant(llvm::ConstantInt const&, llvm::SDLoc const&, llvm::EVT, bool, bool) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x509e9a3) #10 0x00005603673bc088 (anonymous namespace)::DAGCombiner::visitSCALAR_TO_VECTOR(llvm::SDNode*) DAGCombiner.cpp:0:0 #11 0x000056036743459e (anonymous namespace)::DAGCombiner::visit(llvm::SDNode*) DAGCombiner.cpp:0:0 #12 0x0000560367436c35 (anonymous namespace)::DAGCombiner::combine(llvm::SDNode*) DAGCombiner.cpp:0:0 #13 0x00005603674383a0 llvm::SelectionDAG::Combine(llvm::CombineLevel, llvm::AAResults*, llvm::CodeGenOpt::Level) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4fae3a0) #14 0x0000560367557edd llvm::SelectionDAGISel::CodeGenAndEmitDAG() (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x50cdedd) #15 0x000056036755ba10 llvm::SelectionDAGISel::SelectAllBasicBlocks(llvm::Function const&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x50d1a10) #16 0x000056036755dbbd llvm::SelectionDAGISel::runOnMachineFunction(llvm::MachineFunction&) (.part.0) SelectionDAGISel.cpp:0:0 #17 0x0000560364dbafd0 (anonymous namespace)::X86DAGToDAGISel::runOnMachineFunction(llvm::MachineFunction&) X86ISelDAGToDAG.cpp:0:0 #18 0x000056036558acbe llvm::MachineFunctionPass::runOnFunction(llvm::Function&) (.part.0) MachineFunctionPass.cpp:0:0 #19 0x0000560365ae42d5 llvm::FPPassManager::runOnFunction(llvm::Function&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x365a2d5) #20 0x0000560365ae4519 llvm::FPPassManager::runOnModule(llvm::Module&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x365a519) #21 0x0000560365ae4d62 llvm::legacy::PassManagerImpl::run(llvm::Module&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x365ad62) #22 0x0000560366761184 clang::EmitBackendOutput(clang::DiagnosticsEngine&, clang::HeaderSearchOptions const&, clang::CodeGenOptions const&, clang::TargetOptions const&, clang::LangOptions const&, llvm::StringRef, llvm::Module*, clang::BackendAction, std::unique_ptr<llvm::raw_pwrite_stream, std::default_delete<llvm::raw_pwrite_stream>>) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x42d7184) #23 0x00005603676b824b clang::BackendConsumer::HandleTranslationUnit(clang::ASTContext&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x522e24b) #24 0x00005603685caa55 clang::ParseAST(clang::Sema&, bool, bool) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x6140a55) #25 0x00005603676b6c78 clang::CodeGenAction::ExecuteAction() (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x522cc78) #26 0x0000560366f485a9 clang::FrontendAction::Execute() (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4abe5a9) #27 0x0000560366ecf4be clang::CompilerInstance::ExecuteAction(clang::FrontendAction&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4a454be) #28 0x000056036702e113 clang::ExecuteCompilerInvocation(clang::CompilerInstance*) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4ba4113) #29 0x000056036384ca54 cc1_main(llvm::ArrayRef<char const*>, char const*, void*) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x13c2a54) #30 0x0000560363845b68 ExecuteCC1Tool(llvm::SmallVectorImpl<char const*>&) driver.cpp:0:0 #31 0x0000560366d3c229 void llvm::function_ref<void ()>::callback_fn<clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef>>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>*, bool*) const::'lambda'()>(long) Job.cpp:0:0 #32 0x00005603662c521a llvm::CrashRecoveryContext::RunSafely(llvm::function_ref<void ()>) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x3e3b21a) #33 0x0000560366d3ca7f clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef>>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>*, bool*) const (.part.0) Job.cpp:0:0 #34 0x0000560366d05f49 clang::driver::Compilation::ExecuteCommand(clang::driver::Command const&, clang::driver::Command const*&, bool) const (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x487bf49) #35 0x0000560366d069cd clang::driver::Compilation::ExecuteJobs(clang::driver::JobList const&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*>>&, bool) const (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x487c9cd) #36 0x0000560366d0ffec clang::driver::Driver::ExecuteCompilation(clang::driver::Compilation&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*>>&) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x4885fec) #37 0x000056036384ae93 clang_main(int, char**) (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x13c0e93) #38 0x00007f5023d63083 __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:342:3 #39 0x000056036384576e _start (/home/dyung/src/upstream/54eeadcf442df91aed0fb7244fe7885cdf1b1f3d-linux/bin/clang+0x13bb76e) clang-16: error: clang frontend command failed with exit code 134 (use -v to see invocation) clang version 16.0.0 (https://github.com/llvm/llvm-project.git 54eeadcf442df91aed0fb7244fe7885cdf1b1f3d) Target: x86_64-unknown-linux-gnu ```
non_process
a f elt getbitwidth eltvt getsizeinbits apint size does not match type size two of our internal tests recently hit an assertion failure when compiling which i bisected back to commit i was able to reduce the failing code to the following c sample c template a b a c int return c typedef char d typedef d attribute ext vector type e typedef char attribute ext vector type f define g h i b h e j void k e l m builtin shufflevector l j n m yx o g n volatile f p o to reproduce the assertion failure compile the above code with optimizations enabled src upstream linux bin clang c test cpp clang home dyung src upstream llvm clean git llvm lib codegen selectiondag selectiondag cpp llvm sdvalue llvm selectiondag getconstant const llvm constantint const llvm sdloc llvm evt bool bool assertion elt getbitwidth eltvt getsizeinbits apint size does not match type size failed please submit a bug report to and include the crash backtrace preprocessed source and associated run script stack dump program arguments home dyung src upstream linux bin clang c test cpp parser at end of file code generation running pass function pass manager on module test cpp running pass dag dag instruction selection on function printstacktracesignalhandler void signals cpp llvm sys cleanuponsignal unsigned long home dyung src upstream linux bin clang crashrecoverysignalhandler int crashrecoverycontext cpp restore rt lib linux gnu libpthread so raise build glibc glibc signal sysdeps unix sysv linux raise c abort build glibc glibc stdlib abort c get sysdep segment value build glibc glibc intl loadmsgcat c nl load domain build glibc glibc intl loadmsgcat c lib linux gnu libc so llvm selectiondag getconstant llvm constantint const llvm sdloc const llvm evt bool bool home dyung src upstream linux bin clang anonymous namespace dagcombiner visitscalar to vector llvm sdnode dagcombiner cpp anonymous namespace dagcombiner visit llvm sdnode dagcombiner cpp anonymous namespace dagcombiner combine llvm sdnode dagcombiner cpp llvm selectiondag combine llvm combinelevel llvm aaresults llvm codegenopt level home dyung src upstream linux bin clang llvm selectiondagisel codegenandemitdag home dyung src upstream linux bin clang llvm selectiondagisel selectallbasicblocks llvm function const home dyung src upstream linux bin clang llvm selectiondagisel runonmachinefunction llvm machinefunction part selectiondagisel cpp anonymous namespace runonmachinefunction llvm machinefunction cpp llvm machinefunctionpass runonfunction llvm function part machinefunctionpass cpp llvm fppassmanager runonfunction llvm function home dyung src upstream linux bin clang llvm fppassmanager runonmodule llvm module home dyung src upstream linux bin clang llvm legacy passmanagerimpl run llvm module home dyung src upstream linux bin clang clang emitbackendoutput clang diagnosticsengine clang headersearchoptions const clang codegenoptions const clang targetoptions const clang langoptions const llvm stringref llvm module clang backendaction std unique ptr home dyung src upstream linux bin clang clang backendconsumer handletranslationunit clang astcontext home dyung src upstream linux bin clang clang parseast clang sema bool bool home dyung src upstream linux bin clang clang codegenaction executeaction home dyung src upstream linux bin clang clang frontendaction execute home dyung src upstream linux bin clang clang compilerinstance executeaction clang frontendaction home dyung src upstream linux bin clang clang executecompilerinvocation clang compilerinstance home dyung src upstream linux bin clang main llvm arrayref char const void home dyung src upstream linux bin clang llvm smallvectorimpl driver cpp void llvm function ref callback fn std basic string std allocator bool const lambda long job cpp llvm crashrecoverycontext runsafely llvm function ref home dyung src upstream linux bin clang clang driver execute llvm arrayref std basic string std allocator bool const part job cpp clang driver compilation executecommand clang driver command const clang driver command const bool const home dyung src upstream linux bin clang clang driver compilation executejobs clang driver joblist const llvm smallvectorimpl bool const home dyung src upstream linux bin clang clang driver driver executecompilation clang driver compilation llvm smallvectorimpl home dyung src upstream linux bin clang clang main int char home dyung src upstream linux bin clang libc start main build glibc glibc csu csu libc start c start home dyung src upstream linux bin clang clang error clang frontend command failed with exit code use v to see invocation clang version target unknown linux gnu
0
14,371
3,392,717,041
IssuesEvent
2015-11-30 20:49:20
grails/grails-core
https://api.github.com/repos/grails/grails-core
closed
Grails 2.4.4: withFilters is not accessing a namespaced controller
Acknowledged Testing v2.x Won't Fix
Running with Grails 2.4.4, a Spock test using `withFilters` is not accessing a controller when that controller is defined to use a namespace in the filter. For example, define the following in a unit test: `withFilters(controller: 'foo', namespace: 'bar', action: 'post') { controller.post() }` then in the filter class: ` myFilter(controller: 'foo', namespace: 'bar', action: 'post') { before = {...` The filter will not get invoked. However, if you remove the namespace from both places, the filter does get invoked.
1.0
Grails 2.4.4: withFilters is not accessing a namespaced controller - Running with Grails 2.4.4, a Spock test using `withFilters` is not accessing a controller when that controller is defined to use a namespace in the filter. For example, define the following in a unit test: `withFilters(controller: 'foo', namespace: 'bar', action: 'post') { controller.post() }` then in the filter class: ` myFilter(controller: 'foo', namespace: 'bar', action: 'post') { before = {...` The filter will not get invoked. However, if you remove the namespace from both places, the filter does get invoked.
non_process
grails withfilters is not accessing a namespaced controller running with grails a spock test using withfilters is not accessing a controller when that controller is defined to use a namespace in the filter for example define the following in a unit test withfilters controller foo namespace bar action post controller post then in the filter class myfilter controller foo namespace bar action post before the filter will not get invoked however if you remove the namespace from both places the filter does get invoked
0
277,690
24,095,773,944
IssuesEvent
2022-09-19 18:36:04
drifting-in-space/spawner
https://api.github.com/repos/drifting-in-space/spawner
closed
Add additional scheduler tests
test
- [ ] Drone sends status message but then goes offline - [ ] Multiple drones are online
1.0
Add additional scheduler tests - - [ ] Drone sends status message but then goes offline - [ ] Multiple drones are online
non_process
add additional scheduler tests drone sends status message but then goes offline multiple drones are online
0
10,963
13,768,298,599
IssuesEvent
2020-10-07 16:53:25
lukechu10/Minecraft-Box-Launcher
https://api.github.com/repos/lukechu10/Minecraft-Box-Launcher
opened
[TODO] Change framework to React (or Preact) and Redux
components enhancement renderer-process templates
Use React (or Preact) for better UI logic and Redux for better state management. The current UI logic is very messy and relies on a lot of workarounds.
1.0
[TODO] Change framework to React (or Preact) and Redux - Use React (or Preact) for better UI logic and Redux for better state management. The current UI logic is very messy and relies on a lot of workarounds.
process
change framework to react or preact and redux use react or preact for better ui logic and redux for better state management the current ui logic is very messy and relies on a lot of workarounds
1
701
3,197,762,770
IssuesEvent
2015-10-01 07:58:36
nodejs/node-v0.x-archive
https://api.github.com/repos/nodejs/node-v0.x-archive
closed
send socket to child process error
child_process
I create a TCP server in the parent process and send each socket to a child process ```javascript net.createServer(function(s) { s.pause(); var worker = workers.shift(); try{ worker.send('c',s); }catch(ex){} workers.push(worker); }).listen(80); ``` and I receive the socket in the child process like this: ```javascript process.on("message", function(msg,socket) { process.nextTick(function(){ if(msg == 'c' && socket) { socket.readable = socket.writable = true; socket.resume(); //server.connections++; socket.server = server; server.emit("connection", socket); socket.emit("connect"); } }); }); ``` Now, there is an error after my server has been running for a while ``` TypeError: Cannot set property 'onread' of null TypeError: Cannot set property 'onread' of null at ChildProcess.handleConversion.net.Socket.send (child_process.js:134:21) at ChildProcess.target.send (child_process.js:439:52) at child_process.js:372:16 at Array.forEach (native) at ChildProcess.<anonymous> (child_process.js:371:13) at ChildProcess.EventEmitter.emit (events.js:117:20) at handleMessage (child_process.js:318:10) at Pipe.channel.onread (child_process.js:345:11) ```
1.0
send socket to child process error - I create a TCP server in the parent process and send each socket to a child process ```javascript net.createServer(function(s) { s.pause(); var worker = workers.shift(); try{ worker.send('c',s); }catch(ex){} workers.push(worker); }).listen(80); ``` and I receive the socket in the child process like this: ```javascript process.on("message", function(msg,socket) { process.nextTick(function(){ if(msg == 'c' && socket) { socket.readable = socket.writable = true; socket.resume(); //server.connections++; socket.server = server; server.emit("connection", socket); socket.emit("connect"); } }); }); ``` Now, there is an error after my server has been running for a while ``` TypeError: Cannot set property 'onread' of null TypeError: Cannot set property 'onread' of null at ChildProcess.handleConversion.net.Socket.send (child_process.js:134:21) at ChildProcess.target.send (child_process.js:439:52) at child_process.js:372:16 at Array.forEach (native) at ChildProcess.<anonymous> (child_process.js:371:13) at ChildProcess.EventEmitter.emit (events.js:117:20) at handleMessage (child_process.js:318:10) at Pipe.channel.onread (child_process.js:345:11) ```
process
send socket to child process error i create a tcp server in the parent process and send each socket to a child process javascript net createserver function s s pause var worker workers shift try worker send c s catch ex workers push worker listen and i receive the socket in the child process like this javascript process on message function msg socket process nexttick function if msg c socket socket readable socket writable true socket resume server connections socket server server server emit connection socket socket emit connect now there is an error after my server has been running for a while typeerror cannot set property onread of null typeerror cannot set property onread of null at childprocess handleconversion net socket send child process js at childprocess target send child process js at child process js at array foreach native at childprocess child process js at childprocess eventemitter emit events js at handlemessage child process js at pipe channel onread child process js
1
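The `Cannot set property 'onread' of null` stack in the record above suggests the socket's underlying handle is already gone by the time `worker.send()` tries to package it — for example, the client disconnected between accept and hand-off. Below is a defensive variant of the reporter's parent process, sketched under that assumption; it is not a verified fix for the node v0.x internals.

```typescript
// Guarded hand-off: skip sockets whose handle died before send().
import * as net from "net";
import { fork, ChildProcess } from "child_process";

const workers: ChildProcess[] = [fork("./worker.js")]; // hypothetical worker script

net.createServer((s) => {
  s.pause(); // as in the report: don't read until the child takes over
  const worker = workers.shift()!;
  if (!s.destroyed) {
    try {
      worker.send("c", s); // hand the raw connection to the child
    } catch (err) {
      s.destroy(); // hand-off failed; don't leak the connection
    }
  }
  workers.push(worker); // round-robin the workers, as in the report
}).listen(80);
```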
7,022
10,171,509,589
IssuesEvent
2019-08-08 08:34:02
zotero/zotero
https://api.github.com/repos/zotero/zotero
opened
Export discards manually edited citations
Word Processor Integration
This would be easy to solve if setting the field codes for LibreOffice fields didn't reset styling. Thus we'll need to update all `citation.properties.dontUpdate: true` citation field codes with their modified `citation.properties.plaintext` upon export and restore it upon import.
1.0
Export discards manually edited citations - This would be easy to solve if setting the field codes for LibreOffice fields didn't reset styling. Thus we'll need to update all `citation.properties.dontUpdate: true` citation field codes with their modified `citation.properties.plaintext` upon export and restore it upon import.
process
export discards manually edited citations this would be easy to solve if setting the field codes for libreoffice fields didn t reset styling thus we ll need to update all citation properties dontupdate true citation field codes with their modified citation properties plaintext upon export and restore it upon import
1
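A toy sketch of the round-trip proposed in the record above: on export, fold the manually edited text (`properties.plaintext`, guarded by `dontUpdate`) into the stored field code; on import, pull it back out so the edit survives. The shapes and the `MANUAL:` marker are hypothetical names taken from the issue text — Zotero's real field-code handling is considerably more involved.

```typescript
// Toy round-trip for manually edited citation fields (assumed shapes).
interface CitationField {
  properties: { dontUpdate?: boolean; plaintext?: string };
  code: string; // serialized field code as stored in the document
}

function exportField(field: CitationField): CitationField {
  if (!field.properties.dontUpdate || field.properties.plaintext === undefined) {
    return field; // untouched citations export as-is
  }
  // Embed the manual edit alongside the original field code.
  const payload = JSON.stringify(field.properties);
  return { ...field, code: `${field.code} MANUAL:${payload}` };
}

function importField(field: CitationField): CitationField {
  const marker = field.code.indexOf(" MANUAL:");
  if (marker === -1) return field; // nothing was stashed
  const properties = JSON.parse(field.code.slice(marker + " MANUAL:".length));
  return { code: field.code.slice(0, marker), properties };
}
```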
58,569
3,089,709,363
IssuesEvent
2015-08-25 23:10:40
google/googletest
https://api.github.com/repos/google/googletest
opened
need to print return value last when the default value needs to be returned but isn't set
auto-migrated OpSys-All Priority-Low Type-Enhancement Usability
_From @GoogleCodeExporter on August 24, 2015 22:38_ ``` When GoogleMock sees an unexpected call, it prints out information in the following order: 1. Which function is being called and with what arguments, 2. What value it returns, and 3. What expectations gMock has tried to match the call with. The reason for picking this order is that #1 and #2 are usually short while #3 can be very lengthy. If we print #3 before #2, it can be very hard to tell where #3 ends and spot the function's return value, which in my opinion is bad user experience. This usually works fine. However, if the function returns a class type, no default action is set for it, and no default value is set for the return type, the program will crash in step #2, and #3 never gets a chance to be printed. It should be possible for GoogleMock to do #3 before #2 when no default return value is set. ``` Original issue reported on code.google.com by `zhanyong...@gmail.com` on 4 Mar 2009 at 7:09 _Copied from original issue: google/googlemock#36_
1.0
need to print return value last when the default value needs to be returned but isn't set - _From @GoogleCodeExporter on August 24, 2015 22:38_ ``` When GoogleMock sees an unexpected call, it prints out information in the following order: 1. Which function is being called and with what arguments, 2. What value it returns, and 3. What expectations gMock has tried to match the call with. The reason for picking this order is that #1 and #2 are usually short while #3 can be very lengthy. If we print #3 before #2, it can be very hard to tell where #3 ends and spot the function's return value, which in my opinion is bad user experience. This usually works fine. However, if the function returns a class type, no default action is set for it, and no default value is set for the return type, the program will crash in step #2, and #3 never gets a chance to be printed. It should be possible for GoogleMock to do #3 before #2 when no default return value is set. ``` Original issue reported on code.google.com by `zhanyong...@gmail.com` on 4 Mar 2009 at 7:09 _Copied from original issue: google/googlemock#36_
non_process
need to print return value last when the default value needs to be returned but isn t set from googlecodeexporter on august when googlemock sees an unexpected call it prints out information in the following order which function is being called and with what arguments what value it returns and what expectations gmock has tried to match the call with the reason for picking this order is that and are usually short while can be very lengthy if we print before it can be very hard to tell where ends and spot the function s return value which in my opinion is bad user experience this usually works fine however if the function returns a class type no default action is set for it and no default value is set for the return type the program will crash in step and never gets a chance to be printed it should be possible for googlemock to do before when no default return value is set original issue reported on code google com by zhanyong gmail com on mar at copied from original issue google googlemock
0