Dataset column summary (dtype with value or length statistics):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 744 |
| labels | stringlengths | 4 – 574 |
| body | stringlengths | 9 – 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 188k |
| binary_label | int64 | 0 – 1 |

Sample rows follow, one field per line in the schema order above.
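In the sample rows below, `binary_label` tracks `label` (1 for `process`, 0 for `non_process`). A minimal sketch of reproducing that mapping with pandas; the CSV filename is an assumption (the source file is not named here), and the tiny in-memory frame only mirrors the schema:

```python
# Sketch: deriving binary_label from label, as observed in the sample rows
# (process -> 1, non_process -> 0). The CSV filename below is hypothetical.
import pandas as pd


def add_binary_label(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with binary_label derived from the label column."""
    out = df.copy()
    out["binary_label"] = (out["label"] == "process").astype("int64")
    return out


if __name__ == "__main__":
    # df = pd.read_csv("issues.csv")  # hypothetical source file
    df = pd.DataFrame({
        "type": ["IssuesEvent"] * 3,
        "action": ["closed", "opened", "closed"],
        "label": ["non_process", "process", "process"],
    })
    print(add_binary_label(df)["binary_label"].tolist())  # [0, 1, 1]
```

The boolean comparison is cast to `int64` so the derived column matches the dtype reported in the schema.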
---
**Unnamed: 0:** 27,298
**id:** 4,958,037,622
**type:** IssuesEvent
**created_at:** 2016-12-02 08:09:54
**repo:** TNGSB/eWallet
**repo_url:** https://api.github.com/repos/TNGSB/eWallet
**action:** closed
**title:** eWallet_MobileApp(Profile Update) #111
**labels:** Defect - Medium (Sev-3)
**body:**
[Defect_Mobile App #111.xlsx](https://github.com/TNGSB/eWallet/files/593589/Defect_Mobile.App.111.xlsx) [Defect_Mobile App #110.xlsx](https://github.com/TNGSB/eWallet/files/594057/Defect_Mobile.App.110.xlsx) Test Description : To verify the navigation from Touch N Go to social network link - **Twitter & Facebook** Defect Description : For Android, no twitter and facebook icon displayed in the overlay Tested in Build : 31 Android and 34 IOS Refer attachment for POT
**index:** 1.0
**text_combine:**
eWallet_MobileApp(Profile Update) #111 - [Defect_Mobile App #111.xlsx](https://github.com/TNGSB/eWallet/files/593589/Defect_Mobile.App.111.xlsx) [Defect_Mobile App #110.xlsx](https://github.com/TNGSB/eWallet/files/594057/Defect_Mobile.App.110.xlsx) Test Description : To verify the navigation from Touch N Go to social network link - **Twitter & Facebook** Defect Description : For Android, no twitter and facebook icon displayed in the overlay Tested in Build : 31 Android and 34 IOS Refer attachment for POT
**label:** non_process
**text:**
ewallet mobileapp profile update test description to verify the navigation from touch n go to social network link twitter facebook defect description for android no twitter and facebook icon displayed in the overlay tested in build android and ios refer attachment for pot
**binary_label:** 0
---
**Unnamed: 0:** 218,061
**id:** 16,748,301,325
**type:** IssuesEvent
**created_at:** 2021-06-11 18:38:18
**repo:** worldbank/gld
**repo_url:** https://api.github.com/repos/worldbank/gld
**action:** opened
**title:** Education levels documentation
**labels:** I2D2 documentation variable coding
**body:**
For most years, the raw variable for highest level of education has factor levels that are self-explanatory. But for some years, the factor levels are quite extensive and require documentation for proper interpretation that I can't find. Neither the [ILO](https://www.ilo.org/surveyLib/index.php/catalog/1933/related-materials), [PSA technical note](https://psa.gov.ph/content/technical-notes-labor-force-survey-lfs), nor the[ PSA documentation](http://psada.psa.gov.ph/index.php/catalog/175/datafile/F1) provide an explanation of the education levels (ie, what "college" vs "post-secondary" vs "tertiary" education). I'd like to know more before recoding.
**index:** 1.0
**text_combine:**
Education levels documentation - For most years, the raw variable for highest level of education has factor levels that are self-explanatory. But for some years, the factor levels are quite extensive and require documentation for proper interpretation that I can't find. Neither the [ILO](https://www.ilo.org/surveyLib/index.php/catalog/1933/related-materials), [PSA technical note](https://psa.gov.ph/content/technical-notes-labor-force-survey-lfs), nor the[ PSA documentation](http://psada.psa.gov.ph/index.php/catalog/175/datafile/F1) provide an explanation of the education levels (ie, what "college" vs "post-secondary" vs "tertiary" education). I'd like to know more before recoding.
**label:** non_process
**text:**
education levels documentation for most years the raw variable for highest level of education has factor levels that are self explanatory but for some years the factor levels are quite extensive and require documentation for proper interpretation that i can t find neither the nor the provide an explanation of the education levels ie what college vs post secondary vs tertiary education i d like to know more before recoding
**binary_label:** 0
---
**Unnamed: 0:** 13,889
**id:** 16,655,460,367
**type:** IssuesEvent
**created_at:** 2021-06-05 12:47:32
**repo:** paul-buerkner/brms
**repo_url:** https://api.github.com/repos/paul-buerkner/brms
**action:** closed
**title:** projpred not working with gr() grouping variables
**labels:** bug duplicate post-processing wontfix
**body:**
Hi Paul, I noticed that the `varsel()` function (from the projpred package) doesn't work with grouping variables that use the `gr()` function. For example: `fit1 <- brm(count ~ Trt + Age + (1|patient), data = epilepsy)` `varsel(fit1)` [1] "30% of terms selected." [1] "60% of terms selected." [1] "100% of terms selected." size elpd elpd.se 2 0 -927.35 37.66 3 1 -726.41 25.32 4 2 -726.41 25.32 5 3 -726.41 25.32 Then: `fit2 <- brm(count ~ Trt + Age + (1|gr(patient)), data = epilepsy)` `varsel(fit2)` Error in model.frame.default(data = data, weights = weights, drop.unused.levels = TRUE, : invalid type (list) for variable 'gr(patient)' I first noticed this when working with some phylogenetic regression models similar to those in the brms phylogenetic vignette. I realize this may be a complicated issue with projpred, but I wanted to let you know just in case it is a simple fix. Thanks!
**index:** 1.0
**text_combine:**
projpred not working with gr() grouping variables - Hi Paul, I noticed that the `varsel()` function (from the projpred package) doesn't work with grouping variables that use the `gr()` function. For example: `fit1 <- brm(count ~ Trt + Age + (1|patient), data = epilepsy)` `varsel(fit1)` [1] "30% of terms selected." [1] "60% of terms selected." [1] "100% of terms selected." size elpd elpd.se 2 0 -927.35 37.66 3 1 -726.41 25.32 4 2 -726.41 25.32 5 3 -726.41 25.32 Then: `fit2 <- brm(count ~ Trt + Age + (1|gr(patient)), data = epilepsy)` `varsel(fit2)` Error in model.frame.default(data = data, weights = weights, drop.unused.levels = TRUE, : invalid type (list) for variable 'gr(patient)' I first noticed this when working with some phylogenetic regression models similar to those in the brms phylogenetic vignette. I realize this may be a complicated issue with projpred, but I wanted to let you know just in case it is a simple fix. Thanks!
**label:** process
**text:**
projpred not working with gr grouping variables hi paul i noticed that the varsel function from the projpred package doesn t work with grouping variables that use the gr function for example brm count trt age patient data epilepsy varsel of terms selected of terms selected of terms selected size elpd elpd se then brm count trt age gr patient data epilepsy varsel error in model frame default data data weights weights drop unused levels true invalid type list for variable gr patient i first noticed this when working with some phylogenetic regression models similar to those in the brms phylogenetic vignette i realize this may be a complicated issue with projpred but i wanted to let you know just in case it is a simple fix thanks
**binary_label:** 1
---
**Unnamed: 0:** 21,361
**id:** 29,194,078,825
**type:** IssuesEvent
**created_at:** 2023-05-20 00:31:34
**repo:** devssa/onde-codar-em-salvador
**repo_url:** https://api.github.com/repos/devssa/onde-codar-em-salvador
**action:** closed
**title:** [Hibrido / Belo Horizonte, Minas Gerais, Brazil] Backend Java Developer (Pleno - Híbrido em Belo Horizonte - MG) na Coodesh
**labels:** SALVADOR BACK-END PJ JAVA MYSQL JAVASCRIPT PLENO PRIMEFACES JSF SPRING SQL GIT HIBERNATE MAVEN REST SOAP JSON ANGULAR REQUISITOS NGINX PROCESSOS INOVAÇÃO BACKEND GITHUB APACHE UMA C DOCUMENTAÇÃO WILDFLY HTTP MANUTENÇÃO HIBRIDO ALOCADO Stale
**body:**
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/backend-java-developer-pleno-hibrido-em-belo-horizonte-mg-142428692?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A<strong> Prime Results</strong> está em busca de <strong><ins>Backend Java Developer</ins></strong> para compor seu time!</p> <p>Acreditamos no poder de transformação social realizado pelas empresas. Acreditamos no poder transformador das pessoas, aliado à gestão e tecnologia. Compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes.</p> <p><strong>Responsabilidades:</strong></p> <ul> <li>Desenvolvimento/implementação e manutenção de aplicações;</li> <li>Participar da análise e execução dos projetos e execução dos tickets;</li> <li>Definir as atividades necessárias para a realização de projetos, analisando os impactos em sistemas e processos através do entendimento da necessidade, conhecimento técnico e arquitetônico dos sistemas;</li> <li>Desenvolver códigos para atendimento às áreas e empresas clientes, proporcionando o esclarecimento de dúvidas relacionados ao projeto, contribuindo para uma melhor análise de impactos de processos e sistemas sob sua responsabilidade;&nbsp;</li> <li>Participar das atividades de planejamento para a liberação do produto para homologação e produção, por meio da validação de testes de aceite, assim como documentação de não conformidades avaliando e planejando a execução das correções reportadas;</li> <li>Participar da rotina de SQUADs.&nbsp;</li> </ul> ## Prime Results : <p>O Best Seller Simon Sinek, diz que a maioria das empresas sabem o que fazem, porém não sabem por que o fazem. Não é o nosso caso. 
A Prime Results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade. Nossos clientes hoje, fazem a diferença na vida de mais de 250.000 brasileiros, nas áreas de proteção patrimonial, saúde e assistência 24 horas.&nbsp;</p> <p>Nosso objetivo central é criar um ambiente criativo, dinâmico e engajado, sempre aliados a métodos, processos inteligentes e muita inovação.</p><a href='https://coodesh.com/empresas/prime-results'>Veja mais no site</a> ## Habilidades: - Spring - Java - MySQL ## Local: Belo Horizonte, Minas Gerais, Brazil ## Requisitos: - Experiência em Java: JSF, Spring, PrimeFaces, Hibernate, JasperReports; - Conhecimentos em modelagem e desenvolvimento de Bancos de Dados relacionais: MySQL, SQL Server; - Conhecimento de Arquiteturas Web e Serviços (HTTP, SOAP, REST ou JSON); - Conhecimentos nas ferramentas: GIT e Maven; - Conhecimentos técnicos em servidores de aplicação (Wildfly - J2EE), servidores web (Apache e NGINX) e Spring Boot. ## Diferenciais: - Conhecimentos em Tecnologias Web: HTML5, CSS e Frameworks JavaScript, Angular. ## Benefícios: - GymPass; - Assistência Médica após período de experiência. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Backend Java Developer (Pleno - Híbrido em Belo Horizonte - MG) na Prime Results ](https://coodesh.com/vagas/backend-java-developer-pleno-hibrido-em-belo-horizonte-mg-142428692?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Alocado #### Regime PJ #### Categoria Back-End
**index:** 1.0
**text_combine:**
[Hibrido / Belo Horizonte, Minas Gerais, Brazil] Backend Java Developer (Pleno - Híbrido em Belo Horizonte - MG) na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/backend-java-developer-pleno-hibrido-em-belo-horizonte-mg-142428692?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A<strong> Prime Results</strong> está em busca de <strong><ins>Backend Java Developer</ins></strong> para compor seu time!</p> <p>Acreditamos no poder de transformação social realizado pelas empresas. Acreditamos no poder transformador das pessoas, aliado à gestão e tecnologia. Compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes.</p> <p><strong>Responsabilidades:</strong></p> <ul> <li>Desenvolvimento/implementação e manutenção de aplicações;</li> <li>Participar da análise e execução dos projetos e execução dos tickets;</li> <li>Definir as atividades necessárias para a realização de projetos, analisando os impactos em sistemas e processos através do entendimento da necessidade, conhecimento técnico e arquitetônico dos sistemas;</li> <li>Desenvolver códigos para atendimento às áreas e empresas clientes, proporcionando o esclarecimento de dúvidas relacionados ao projeto, contribuindo para uma melhor análise de impactos de processos e sistemas sob sua responsabilidade;&nbsp;</li> <li>Participar das atividades de planejamento para a liberação do produto para homologação e produção, por meio da validação de testes de aceite, assim como documentação de não conformidades avaliando e planejando a execução das correções reportadas;</li> <li>Participar da rotina de SQUADs.&nbsp;</li> </ul> ## Prime Results : <p>O Best Seller Simon Sinek, diz que 
a maioria das empresas sabem o que fazem, porém não sabem por que o fazem. Não é o nosso caso. A Prime Results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade. Nossos clientes hoje, fazem a diferença na vida de mais de 250.000 brasileiros, nas áreas de proteção patrimonial, saúde e assistência 24 horas.&nbsp;</p> <p>Nosso objetivo central é criar um ambiente criativo, dinâmico e engajado, sempre aliados a métodos, processos inteligentes e muita inovação.</p><a href='https://coodesh.com/empresas/prime-results'>Veja mais no site</a> ## Habilidades: - Spring - Java - MySQL ## Local: Belo Horizonte, Minas Gerais, Brazil ## Requisitos: - Experiência em Java: JSF, Spring, PrimeFaces, Hibernate, JasperReports; - Conhecimentos em modelagem e desenvolvimento de Bancos de Dados relacionais: MySQL, SQL Server; - Conhecimento de Arquiteturas Web e Serviços (HTTP, SOAP, REST ou JSON); - Conhecimentos nas ferramentas: GIT e Maven; - Conhecimentos técnicos em servidores de aplicação (Wildfly - J2EE), servidores web (Apache e NGINX) e Spring Boot. ## Diferenciais: - Conhecimentos em Tecnologias Web: HTML5, CSS e Frameworks JavaScript, Angular. ## Benefícios: - GymPass; - Assistência Médica após período de experiência. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Backend Java Developer (Pleno - Híbrido em Belo Horizonte - MG) na Prime Results ](https://coodesh.com/vagas/backend-java-developer-pleno-hibrido-em-belo-horizonte-mg-142428692?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. 
## Labels #### Alocação Alocado #### Regime PJ #### Categoria Back-End
**label:** process
**text:**
backend java developer pleno híbrido em belo horizonte mg na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a prime results está em busca de backend java developer para compor seu time acreditamos no poder de transformação social realizado pelas empresas acreditamos no poder transformador das pessoas aliado à gestão e tecnologia compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes responsabilidades desenvolvimento implementação e manutenção de aplicações participar da análise e execução dos projetos e execução dos tickets definir as atividades necessárias para a realização de projetos analisando os impactos em sistemas e processos através do entendimento da necessidade conhecimento técnico e arquitetônico dos sistemas desenvolver códigos para atendimento às áreas e empresas clientes proporcionando o esclarecimento de dúvidas relacionados ao projeto contribuindo para uma melhor análise de impactos de processos e sistemas sob sua responsabilidade nbsp participar das atividades de planejamento para a liberação do produto para homologação e produção por meio da validação de testes de aceite assim como documentação de não conformidades avaliando e planejando a execução das correções reportadas participar da rotina de squads nbsp prime results o best seller simon sinek diz que a maioria das empresas sabem o que fazem porém não sabem por que o fazem não é o nosso caso a prime results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade nossos clientes hoje fazem a diferença na vida de mais de brasileiros nas áreas de proteção patrimonial saúde e assistência horas nbsp nosso objetivo central é criar um 
ambiente criativo dinâmico e engajado sempre aliados a métodos processos inteligentes e muita inovação habilidades spring java mysql local belo horizonte minas gerais brazil requisitos experiência em java jsf spring primefaces hibernate jasperreports conhecimentos em modelagem e desenvolvimento de bancos de dados relacionais mysql sql server conhecimento de arquiteturas web e serviços http soap rest ou json conhecimentos nas ferramentas git e maven conhecimentos técnicos em servidores de aplicação wildfly servidores web apache e nginx e spring boot diferenciais conhecimentos em tecnologias web css e frameworks javascript angular benefícios gympass assistência médica após período de experiência como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação alocado regime pj categoria back end
**binary_label:** 1
---
**Unnamed: 0:** 58,044
**id:** 6,569,336,211
**type:** IssuesEvent
**created_at:** 2017-09-09 06:20:10
**repo:** w3c/csswg-drafts
**repo_url:** https://api.github.com/repos/w3c/csswg-drafts
**action:** opened
**title:** [css-writing-modes] Missing Testcases
**labels:** css-writing-modes-3 Needs Testcase (WPT)
**body:**
Need to make sure we have tests for these: https://lists.w3.org/Archives/Public/www-style/2017May/0002.html https://lists.w3.org/Archives/Public/www-style/2015Sep/0326.html (I suppose WPT has some process for tracking such things as missing coverage but until it's documented in http://web-platform-tests.org/ I'm just going to assume it doesn't exist.)
**index:** 1.0
**text_combine:**
[css-writing-modes] Missing Testcases - Need to make sure we have tests for these: https://lists.w3.org/Archives/Public/www-style/2017May/0002.html https://lists.w3.org/Archives/Public/www-style/2015Sep/0326.html (I suppose WPT has some process for tracking such things as missing coverage but until it's documented in http://web-platform-tests.org/ I'm just going to assume it doesn't exist.)
**label:** non_process
**text:**
missing testcases need to make sure we have tests for these i suppose wpt has some process for tracking such things as missing coverage but until it s documented in i m just going to assume it doesn t exist
**binary_label:** 0
---
**Unnamed: 0:** 236
**id:** 2,663,134,481
**type:** IssuesEvent
**created_at:** 2015-03-20 01:21:48
**repo:** hammerlab/pileup.js
**repo_url:** https://api.github.com/repos/hammerlab/pileup.js
**action:** closed
**title:** Run jsxhint
**labels:** process
**body:**
This works pretty well for catching unused vars, missing `'use strict'` and other issues in CycleDash. pileup.js should use it as well.
**index:** 1.0
**text_combine:**
Run jsxhint - This works pretty well for catching unused vars, missing `'use strict'` and other issues in CycleDash. pileup.js should use it as well.
**label:** process
**text:**
run jsxhint this works pretty well for catching unused vars missing use strict and other issues in cycledash pileup js should use it as well
**binary_label:** 1
---
**Unnamed: 0:** 370,087
**id:** 25,887,119,993
**type:** IssuesEvent
**created_at:** 2022-12-14 15:16:52
**repo:** appsmithorg/appsmith-docs
**repo_url:** https://api.github.com/repos/appsmithorg/appsmith-docs
**action:** closed
**title:** [Docs]: Dynamic Menu Items - Menu Button Widget
**labels:** Documentation High Ready for Doc Team User Education Pod
**body:**
### Is there an existing issue for this? - [X] I have searched the existing issues ### Engineering Ticket Link https://github.com/appsmithorg/appsmith/pull/17652 ### Release Date Merged into release on 1 Dec, 2022 ### First Draft [PR Here](https://github.com/appsmithorg/appsmith-docs/pull/649) ### Loom video [Here's an Unlisted YouTube Video](https://www.youtube.com/watch?v=ajzBkooI7CE) ### PRD https://github.com/appsmithorg/appsmith/issues/12362#issuecomment-1202023433 ### Use cases or user requests A popular use case is [documented here](https://github.com/appsmithorg/appsmith/issues/12362) in the issue description.
**index:** 1.0
**text_combine:**
[Docs]: Dynamic Menu Items - Menu Button Widget - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Engineering Ticket Link https://github.com/appsmithorg/appsmith/pull/17652 ### Release Date Merged into release on 1 Dec, 2022 ### First Draft [PR Here](https://github.com/appsmithorg/appsmith-docs/pull/649) ### Loom video [Here's an Unlisted YouTube Video](https://www.youtube.com/watch?v=ajzBkooI7CE) ### PRD https://github.com/appsmithorg/appsmith/issues/12362#issuecomment-1202023433 ### Use cases or user requests A popular use case is [documented here](https://github.com/appsmithorg/appsmith/issues/12362) in the issue description.
**label:** non_process
**text:**
dynamic menu items menu button widget is there an existing issue for this i have searched the existing issues engineering ticket link release date merged into release on dec first draft loom video prd use cases or user requests a popular use case is in the issue description
**binary_label:** 0
---
**Unnamed: 0:** 10,928
**id:** 13,727,860,617
**type:** IssuesEvent
**created_at:** 2020-10-04 08:47:04
**repo:** jgraley/inferno-cpp2v
**repo_url:** https://api.github.com/repos/jgraley/inferno-cpp2v
**action:** opened
**title:** Don't use BY_VALUE for comapre criterion
**labels:** Constraint Processing
**body:**
It'll become confusing since CSPO terminology uses the word "value" as the finest form of values, which would correspond with what we call BY_LOCATION (based on the current problem mapping). Use BY_EQUIVALENCE instead - on the grounds that SimpleCompare matches for equivalence classes..
**index:** 1.0
**text_combine:**
Don't use BY_VALUE for comapre criterion - It'll become confusing since CSPO terminology uses the word "value" as the finest form of values, which would correspond with what we call BY_LOCATION (based on the current problem mapping). Use BY_EQUIVALENCE instead - on the grounds that SimpleCompare matches for equivalence classes..
**label:** process
**text:**
don t use by value for comapre criterion it ll become confusing since cspo terminology uses the word value as the finest form of values which would correspond with what we call by location based on the current problem mapping use by equivalence instead on the grounds that simplecompare matches for equivalence classes
**binary_label:** 1
---
**Unnamed: 0:** 448,108
**id:** 12,944,007,674
**type:** IssuesEvent
**created_at:** 2020-07-18 09:16:12
**repo:** tugkan/aliexpress-scraper
**repo_url:** https://api.github.com/repos/tugkan/aliexpress-scraper
**action:** opened
**title:** Add an option to have one variant per dataset row
**labels:** Low priority enhancement
**body:**
Most of the values will be the same but everything specific to a variant (price, stock, images, etc.) should be unique.
**index:** 1.0
**text_combine:**
Add an option to have one variant per dataset row - Most of the values will be the same but everything specific to a variant (price, stock, images, etc.) should be unique.
**label:** non_process
**text:**
add an option to have one variant per dataset row most of the values will be the same but everything specific to a variant price stock images etc should be unique
**binary_label:** 0
---
**Unnamed: 0:** 712,558
**id:** 24,499,124,186
**type:** IssuesEvent
**created_at:** 2022-10-10 11:17:33
**repo:** 0xPolygonHermez/zkevm-bridge-ui
**repo_url:** https://api.github.com/repos/0xPolygonHermez/zkevm-bridge-ui
**action:** closed
**title:** Display terms and conditions in the Login page
**labels:** priority: high type: enhancement
**body:**
We need to display the terms and conditions in the `Login` page. You need to accept them for the first time only.
**index:** 1.0
**text_combine:**
Display terms and conditions in the Login page - We need to display the terms and conditions in the `Login` page. You need to accept them for the first time only.
**label:** non_process
**text:**
display terms and conditions in the login page we need to display the terms and conditions in the login page you need to accept them for the first time only
**binary_label:** 0
---
**Unnamed: 0:** 251,826
**id:** 27,212,595,681
**type:** IssuesEvent
**created_at:** 2023-02-20 17:55:14
**repo:** department-of-veterans-affairs/va.gov-team
**repo_url:** https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
**action:** opened
**title:** [Datadog] Develop custom roles for Platform teams
**labels:** monitoring security needs-refinement needs-estimate platform-tech-team-2
**body:**
## Description Custom Datadog roles and permissions should be created to offer Platform users access to the right features and data. The out-of-the-box roles don't offer fine-grained permissions customization. ## Background/context - Datadog offers custom role-based access control (RBAC) to control what data users can see and modify - Datadog permissions should be granted based on a user's role on the VA.Gov Platform - Generic admin, standard, and read-only roles are built-in - Datadog docs: - [Role Based Access Control](https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication) - [Datadog Role Permissions](https://docs.datadoghq.com/account_management/rbac/permissions/) ## Acceptance criteria _TBD_ ## Refinement Guidance - Check the following before working on this issue: - [x] _Team label assigned ("platform-tech-team-2")_ - [ ] _Epic assigned (if needed)_ - [ ] _Estimated (points assigned)_ - [ ] _Sprint assigned (once planned)_ - [ ] _Team member(s) assigned_
**index:** True
**text_combine:**
[Datadog] Develop custom roles for Platform teams - ## Description Custom Datadog roles and permissions should be created to offer Platform users access to the right features and data. The out-of-the-box roles don't offer fine-grained permissions customization. ## Background/context - Datadog offers custom role-based access control (RBAC) to control what data users can see and modify - Datadog permissions should be granted based on a user's role on the VA.Gov Platform - Generic admin, standard, and read-only roles are built-in - Datadog docs: - [Role Based Access Control](https://docs.datadoghq.com/account_management/rbac/?tab=datadogapplication) - [Datadog Role Permissions](https://docs.datadoghq.com/account_management/rbac/permissions/) ## Acceptance criteria _TBD_ ## Refinement Guidance - Check the following before working on this issue: - [x] _Team label assigned ("platform-tech-team-2")_ - [ ] _Epic assigned (if needed)_ - [ ] _Estimated (points assigned)_ - [ ] _Sprint assigned (once planned)_ - [ ] _Team member(s) assigned_
**label:** non_process
**text:**
develop custom roles for platform teams description custom datadog roles and permissions should be created to offer platform users access to the right features and data the out of the box roles don t offer fine grained permissions customization background context datadog offers custom role based access control rbac to control what data users can see and modify datadog permissions should be granted based on a user s role on the va gov platform generic admin standard and read only roles are built in datadog docs acceptance criteria tbd refinement guidance check the following before working on this issue team label assigned platform tech team epic assigned if needed estimated points assigned sprint assigned once planned team member s assigned
**binary_label:** 0
---
**Unnamed: 0:** 332,027
**id:** 24,334,772,532
**type:** IssuesEvent
**created_at:** 2022-10-01 00:49:48
**repo:** alanyee/valorantbp
**repo_url:** https://api.github.com/repos/alanyee/valorantbp
**action:** opened
**title:** Add a README
**labels:** documentation good first issue hacktoberfest
**body:**
## Describe the ideal solution or feature request `README` is currently nonexistent, and it would help new users on how to use this CLI. Ideally, all options are explained and have examples. ## Difficulty, impact, and usage score | Technical difficulty | User goals | Usage frequency | |--------------------| --------------------| --------------------| | Small| Important | Daily| ## How does this tie into our current application? It will be document usage.
**index:** 1.0
**text_combine:**
Add a README - ## Describe the ideal solution or feature request `README` is currently nonexistent, and it would help new users on how to use this CLI. Ideally, all options are explained and have examples. ## Difficulty, impact, and usage score | Technical difficulty | User goals | Usage frequency | |--------------------| --------------------| --------------------| | Small| Important | Daily| ## How does this tie into our current application? It will be document usage.
**label:** non_process
**text:**
add a readme describe the ideal solution or feature request readme is currently nonexistent and it would help new users on how to use this cli ideally all options are explained and have examples difficulty impact and usage score technical difficulty user goals usage frequency small important daily how does this tie into our current application it will be document usage
**binary_label:** 0
---
**Unnamed: 0:** 21,691
**id:** 30,186,699,019
**type:** IssuesEvent
**created_at:** 2023-07-04 12:36:52
**repo:** h4sh5/pypi-auto-scanner
**repo_url:** https://api.github.com/repos/h4sh5/pypi-auto-scanner
**action:** opened
**title:** windbg2df 0.10 has 1 GuardDog issues
**labels:** guarddog silent-process-execution
**body:**
https://pypi.org/project/windbg2df https://inspector.pypi.io/project/windbg2df ```{ "dependency": "windbg2df", "version": "0.10", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "windbg2df-0.10/windbg2df/__init__.py:335", "code": " subprocess.Popen(\n command,\n stdin=subprocess.DEVNULL,\n bufsize=0,\n start_new_session=True,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **invi...\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpbtjaytka/windbg2df" } }```
**index:** 1.0
**text_combine:**
windbg2df 0.10 has 1 GuardDog issues - https://pypi.org/project/windbg2df https://inspector.pypi.io/project/windbg2df ```{ "dependency": "windbg2df", "version": "0.10", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "windbg2df-0.10/windbg2df/__init__.py:335", "code": " subprocess.Popen(\n command,\n stdin=subprocess.DEVNULL,\n bufsize=0,\n start_new_session=True,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **invi...\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpbtjaytka/windbg2df" } }```
**label:** process
**text:**
has guarddog issues dependency version result issues errors results silent process execution location init py code subprocess popen n command n stdin subprocess devnull n bufsize n start new session true n stdout subprocess devnull n stderr subprocess devnull n invi n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpbtjaytka
**binary_label:** 1
---
**Unnamed: 0:** 22,136
**id:** 30,681,701,589
**type:** IssuesEvent
**created_at:** 2023-07-26 09:34:51
**repo:** open-telemetry/opentelemetry-collector-contrib
**repo_url:** https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
**action:** closed
**title:** [receiver/prometheus + processor/cumulativetodelta] Handling unknown start timestamps on cumulative streams
**labels:** bug Stale priority:p2 data:metrics processor/cumulativetodelta
**body:**
### Component(s) processor/cumulativetodelta, receiver/prometheus ### What happened? ## Description When using a `prometheus` receiver to scrape targets that don't expose timestamps on metrics exported (e.g. another collector), the first scrape of a cumulative counter results in a data point with the same `StartTimestamp` and `Timestamp`, and the current value of the counter. For example, a collector accepting 20 spans per minute would report the following metric after it has been running for a while (when the scraping collector restarts and does the first scrape): ``` Metric #1 Descriptor: -> Name: otelcol_receiver_accepted_spans -> Description: Number of spans successfully pushed into the pipeline. -> Unit: -> DataType: Sum -> IsMonotonic: true -> AggregationTemporality: Cumulative NumberDataPoints #0 Data point attributes: -> receiver: Str(otlp) -> service_instance_id: Str(2be3b72c-51b4-4e97-9f34-df7f462d9833) -> service_name: Str(otelcol-contrib) -> service_version: Str(0.64.1) -> transport: Str(grpc) StartTimestamp: 2022-12-21 12:14:41.041 +0000 UTC Timestamp: 2022-12-21 12:14:41.041 +0000 UTC Value: 1256.000000 ``` This is the intended behaviour if these metrics are exported with cumulative temporality and the exporter disregards the `StartTimestamp` (as it would be with other Prometheus exporters). However, when using it with a `cumulativetodelta` processor, and exporting OTLP data points with delta temporality, it seems to violate the advice in https://opentelemetry.io/docs/reference/specification/metrics/data-model/#cumulative-streams-handling-unknown-start-time. Clearly, `prometheus` receiver reporting a `0` value here wouldn't be a good solution, because the second data point would have different `StartTimestamp` and `Timestamp` and be treated as a true counter reset, and I don't believe it'd be `prometheus` receiver responsibility to handle state and keep a "new" count of cumulative counters. 
I believe, according to spec, that it should be the `cumulativetodelta` processor's responsibility to handle cases where `StartTimestamp` and `Timestamp` are equal, treating that as a "start" value: dropping the metric while storing the value. Although, looking at the code, I'm unsure if the `DeltaValue` here should be considered `valid` (not sure in what cases the first value of a monotonic sum should be considered valid, I'd expect to have at least two values to calculate delta): https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/4eaa9f8f8c89492e4df873bda483ea49da8b194f/processor/cumulativetodeltaprocessor/internal/tracking/tracker.go#L100 And the assumption here that only non-cumulative values should not report the first value, does it consider the case in question? https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/4eaa9f8f8c89492e4df873bda483ea49da8b194f/processor/cumulativetodeltaprocessor/processor.go#L176 ## Steps to Reproduce Run a collector scraping another collector (controlling scrape port via `prometheus.io/scrape_port` annotation) which is receiving approximately 20 spans per minute. The collector scraping Prometheus metric using the config below must be restarted a while after the collector being scraped to clearly see the behaviour. ## Expected Result The following metric data point reported with `Timestamp: 2022-12-21 13:34:32.866 +0000 UTC` and no data point reported with `Timestamp: 2022-12-21 13:34:02.866 +0000 UTC`: ``` Metric #4 Descriptor: -> Name: otelcol_receiver_accepted_spans -> Description: Number of spans successfully pushed into the pipeline. 
-> Unit: -> DataType: Sum -> IsMonotonic: true -> AggregationTemporality: Delta NumberDataPoints #0 Data point attributes: -> receiver: Str(otlp) -> service_instance_id: Str(2be3b72c-51b4-4e97-9f34-df7f462d9833) -> service_name: Str(otelcol-contrib) -> service_version: Str(0.64.1) -> transport: Str(grpc) StartTimestamp: 2022-12-21 13:34:02.866 +0000 UTC Timestamp: 2022-12-21 13:34:32.866 +0000 UTC Value: 10.000000 ``` ## Actual Result The following data point being reported ``` Metric #4 Descriptor: -> Name: otelcol_receiver_accepted_spans -> Description: Number of spans successfully pushed into the pipeline. -> Unit: -> DataType: Sum -> IsMonotonic: true -> AggregationTemporality: Delta NumberDataPoints #0 Data point attributes: -> receiver: Str(otlp) -> service_instance_id: Str(2be3b72c-51b4-4e97-9f34-df7f462d9833) -> service_name: Str(otelcol-contrib) -> service_version: Str(0.64.1) -> transport: Str(grpc) StartTimestamp: 2022-12-21 13:34:02.866 +0000 UTC Timestamp: 2022-12-21 13:34:32.866 +0000 UTC Value: 2970.000000 ``` ### Collector version v0.67.0 ### Environment information ## Environment `otel/opentelemetry-collector-contrib` Docker image running on Kubernetes v1.25.3 ### OpenTelemetry Collector configuration ```yaml receivers: prometheus: config: scrape_configs: - job_name: opentelemetry-collector kubernetes_sd_configs: - role: pod metric_relabel_configs: - action: keep regex: otelcol_receiver_accepted_spans source_labels: - __name__ relabel_configs: - action: keep regex: opentelemetry-collector source_labels: - __meta_kubernetes_pod_label_app_kubernetes_io_name - action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $$1:$$2 source_labels: - __address__ - __meta_kubernetes_pod_annotation_prometheus_io_scrape_port target_label: __address__ scrape_interval: 30s exporters: logging: verbosity: detailed processors: cumulativetodelta: null service: pipelines: metrics: exporters: - logging processors: - cumulativetodelta receivers: - prometheus ``` ### Log 
output _No response_ ### Additional context _No response_
1.0
[receiver/prometheus + processor/cumulativetodelta] Handling unknown start timestamps on cumulative streams - ### Component(s) processor/cumulativetodelta, receiver/prometheus ### What happened? ## Description When using a `prometheus` receiver to scrape targets that don't expose timestamps on metrics exported (e.g. another collector), the first scrape of a cumulative counter results in a data point with the same `StartTimestamp` and `Timestamp`, and the current value of the counter. For example, a collector accepting 20 spans per minute would report the following metric after it has been running for a while (when the scraping collector restarts and does the first scrape): ``` Metric #1 Descriptor: -> Name: otelcol_receiver_accepted_spans -> Description: Number of spans successfully pushed into the pipeline. -> Unit: -> DataType: Sum -> IsMonotonic: true -> AggregationTemporality: Cumulative NumberDataPoints #0 Data point attributes: -> receiver: Str(otlp) -> service_instance_id: Str(2be3b72c-51b4-4e97-9f34-df7f462d9833) -> service_name: Str(otelcol-contrib) -> service_version: Str(0.64.1) -> transport: Str(grpc) StartTimestamp: 2022-12-21 12:14:41.041 +0000 UTC Timestamp: 2022-12-21 12:14:41.041 +0000 UTC Value: 1256.000000 ``` This is the intended behaviour if these metrics are exported with cumulative temporality and the exporter disregards the `StartTimestamp` (as it would be with other Prometheus exporters). However, when using it with a `cumulativetodelta` processor, and exporting OTLP data points with delta temporality, it seems to violate the advice in https://opentelemetry.io/docs/reference/specification/metrics/data-model/#cumulative-streams-handling-unknown-start-time. 
Clearly, `prometheus` receiver reporting a `0` value here wouldn't be a good solution, because the second data point would have different `StartTimestamp` and `Timestamp` and be treated as a true counter reset, and I don't believe it'd be the `prometheus` receiver's responsibility to handle state and keep a "new" count of cumulative counters. I believe, according to spec, that it should be the `cumulativetodelta` processor's responsibility to handle cases where `StartTimestamp` and `Timestamp` are equal, treating that as a "start" value: dropping the metric while storing the value. Although, looking at the code, I'm unsure if the `DeltaValue` here should be considered `valid` (not sure in what cases the first value of a monotonic sum should be considered valid, I'd expect to have at least two values to calculate delta): https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/4eaa9f8f8c89492e4df873bda483ea49da8b194f/processor/cumulativetodeltaprocessor/internal/tracking/tracker.go#L100 And the assumption here that only non-cumulative values should not report the first value, does it consider the case in question? https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/4eaa9f8f8c89492e4df873bda483ea49da8b194f/processor/cumulativetodeltaprocessor/processor.go#L176 ## Steps to Reproduce Run a collector scraping another collector (controlling scrape port via `prometheus.io/scrape_port` annotation) which is receiving approximately 20 spans per minute. The collector scraping Prometheus metric using the config below must be restarted a while after the collector being scraped to clearly see the behaviour. ## Expected Result The following metric data point reported with `Timestamp: 2022-12-21 13:34:32.866 +0000 UTC` and no data point reported with `Timestamp: 2022-12-21 13:34:02.866 +0000 UTC`: ``` Metric #4 Descriptor: -> Name: otelcol_receiver_accepted_spans -> Description: Number of spans successfully pushed into the pipeline. 
-> Unit: -> DataType: Sum -> IsMonotonic: true -> AggregationTemporality: Delta NumberDataPoints #0 Data point attributes: -> receiver: Str(otlp) -> service_instance_id: Str(2be3b72c-51b4-4e97-9f34-df7f462d9833) -> service_name: Str(otelcol-contrib) -> service_version: Str(0.64.1) -> transport: Str(grpc) StartTimestamp: 2022-12-21 13:34:02.866 +0000 UTC Timestamp: 2022-12-21 13:34:32.866 +0000 UTC Value: 10.000000 ``` ## Actual Result The following data point being reported ``` Metric #4 Descriptor: -> Name: otelcol_receiver_accepted_spans -> Description: Number of spans successfully pushed into the pipeline. -> Unit: -> DataType: Sum -> IsMonotonic: true -> AggregationTemporality: Delta NumberDataPoints #0 Data point attributes: -> receiver: Str(otlp) -> service_instance_id: Str(2be3b72c-51b4-4e97-9f34-df7f462d9833) -> service_name: Str(otelcol-contrib) -> service_version: Str(0.64.1) -> transport: Str(grpc) StartTimestamp: 2022-12-21 13:34:02.866 +0000 UTC Timestamp: 2022-12-21 13:34:32.866 +0000 UTC Value: 2970.000000 ``` ### Collector version v0.67.0 ### Environment information ## Environment `otel/opentelemetry-collector-contrib` Docker image running on Kubernetes v1.25.3 ### OpenTelemetry Collector configuration ```yaml receivers: prometheus: config: scrape_configs: - job_name: opentelemetry-collector kubernetes_sd_configs: - role: pod metric_relabel_configs: - action: keep regex: otelcol_receiver_accepted_spans source_labels: - __name__ relabel_configs: - action: keep regex: opentelemetry-collector source_labels: - __meta_kubernetes_pod_label_app_kubernetes_io_name - action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $$1:$$2 source_labels: - __address__ - __meta_kubernetes_pod_annotation_prometheus_io_scrape_port target_label: __address__ scrape_interval: 30s exporters: logging: verbosity: detailed processors: cumulativetodelta: null service: pipelines: metrics: exporters: - logging processors: - cumulativetodelta receivers: - prometheus ``` ### Log 
output _No response_ ### Additional context _No response_
process
handling unknown start timestamps on cumulative streams component s processor cumulativetodelta receiver prometheus what happened description when using a prometheus receiver to scrape targets that don t expose timestamps on metrics exported e g another collector the first scrape of a cumulative counter results in a data point with the same starttimestamp and timestamp and the current value of the counter for example a collector accepting spans per minute would report the following metric after it has been running for a while when the scraping collector restarts and does the first scrape metric descriptor name otelcol receiver accepted spans description number of spans successfully pushed into the pipeline unit datatype sum ismonotonic true aggregationtemporality cumulative numberdatapoints data point attributes receiver str otlp service instance id str service name str otelcol contrib service version str transport str grpc starttimestamp utc timestamp utc value this is the intended behaviour if these metrics are exported with cumulative temporality and the exporter disregards the starttimestamp as it would be with other prometheus exporters however when using it with a cumulativetodelta processor and exporting otlp data points with delta temporality it seems to violate the advice in clearly prometheus receiver reporting a value here wouldn t be a good solution because the second data point would have different starttimestamp and timestamp and be treated as a true counter reset and i don t believe it d be prometheus receiver responsibility to handle state and keep a new count of cumulative counters i believe according to spec that it should be the cumulativetodelta processor responsibility to handle cases where starttimestamp and timestamp are treat that as a start value dropping the metric while storing the value although looking at the code i m unsure if the deltavalue here should be considered valid not sure in what cases the first value of a monotonic sum 
should be considered valid i d expect to have at least two values to calculate delta and the assumption here that only non cumulative values should not report the first value does it consider the case in question steps to reproduce run a collector scraping another collector controlling scrape port via prometheus io scrape port annotation which is receiving approximately spans per minute the collector scraping prometheus metric using the config below must be restarted a while after the collector being scraped to clearly see the behaviour expected result the following metric data point reported with timestamp utc and no data point reported with timestamp utc metric descriptor name otelcol receiver accepted spans description number of spans successfully pushed into the pipeline unit datatype sum ismonotonic true aggregationtemporality delta numberdatapoints data point attributes receiver str otlp service instance id str service name str otelcol contrib service version str transport str grpc starttimestamp utc timestamp utc value actual result the following data point being reported metric descriptor name otelcol receiver accepted spans description number of spans successfully pushed into the pipeline unit datatype sum ismonotonic true aggregationtemporality delta numberdatapoints data point attributes receiver str otlp service instance id str service name str otelcol contrib service version str transport str grpc starttimestamp utc timestamp utc value collector version environment information environment otel opentelemetry collector contrib docker image running on kubernetes opentelemetry collector configuration yaml receivers prometheus config scrape configs job name opentelemetry collector kubernetes sd configs role pod metric relabel configs action keep regex otelcol receiver accepted spans source labels name relabel configs action keep regex opentelemetry collector source labels meta kubernetes pod label app kubernetes io name action replace regex d d replacement 
source labels address meta kubernetes pod annotation prometheus io scrape port target label address scrape interval exporters logging verbosity detailed processors cumulativetodelta null service pipelines metrics exporters logging processors cumulativetodelta receivers prometheus log output no response additional context no response
1
13,988
10,082,062,996
IssuesEvent
2019-07-25 10:14:51
wellcometrust/platform
https://api.github.com/repos/wellcometrust/platform
opened
Verifier: don't expose the full S3 path when reporting checksum errors
📦 Storage service
This is an example error from the verifier: ``` https://wellcomecollection-storage-staging-ingests.s3.eu-west-1.amazonaws.com/unpacked/alex-testing/2f55a087-b9ab-4fdb-8156-5d177db55d84/b2227683x/fetch.txt: Checksum values do not match! Expected: sha256:ada213f9ed885fafdd2c1faf48c8b3182fafb59e6dae84eeffc045aca90d1435, saw: sha256:aae34cb7e92493c6f71da185132d54905a49dbce8cf349b72c4cb67dc6777ca9 ``` This bit is an internal concern of the storage service and shouldn't be exposed in a user-facing message: ``` https://wellcomecollection-storage-staging-ingests.s3.eu-west-1.amazonaws.com/unpacked/alex-testing/2f55a087-b9ab-4fdb-8156-5d177db55d84 ```
1.0
Verifier: don't expose the full S3 path when reporting checksum errors - This is an example error from the verifier: ``` https://wellcomecollection-storage-staging-ingests.s3.eu-west-1.amazonaws.com/unpacked/alex-testing/2f55a087-b9ab-4fdb-8156-5d177db55d84/b2227683x/fetch.txt: Checksum values do not match! Expected: sha256:ada213f9ed885fafdd2c1faf48c8b3182fafb59e6dae84eeffc045aca90d1435, saw: sha256:aae34cb7e92493c6f71da185132d54905a49dbce8cf349b72c4cb67dc6777ca9 ``` This bit is an internal concern of the storage service and shouldn't be exposed in a user-facing message: ``` https://wellcomecollection-storage-staging-ingests.s3.eu-west-1.amazonaws.com/unpacked/alex-testing/2f55a087-b9ab-4fdb-8156-5d177db55d84 ```
non_process
verifier don t expose the full path when reporting checksum errors this is an example error from the verifier checksum values do not match expected saw this bit is an internal concern of the storage service and shouldn t be exposed in a user facing message
0
15,442
19,657,081,052
IssuesEvent
2022-01-10 13:37:19
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Unification App: Write E2E tests around "Left Nav"
process: tests type: chore stage: needs review
### What would you like? Write end-to-end tests to cover the new Unification work in the 10.0-release branch for "[Left Nav](https://docs.google.com/spreadsheets/d/1iPwi89aW6aYeA0VT1XOhYdAWLuScW0okrlfcL9fzh3s/edit#gid=0)" in the App. ### Why is this needed? _No response_ ### Other _No response_
1.0
Unification App: Write E2E tests around "Left Nav" - ### What would you like? Write end-to-end tests to cover the new Unification work in the 10.0-release branch for "[Left Nav](https://docs.google.com/spreadsheets/d/1iPwi89aW6aYeA0VT1XOhYdAWLuScW0okrlfcL9fzh3s/edit#gid=0)" in the App. ### Why is this needed? _No response_ ### Other _No response_
process
unification app write tests around left nav what would you like write end to end tests to cover the new unification work in release branch for in the app why is this needed no response other no response
1
565,755
16,768,918,108
IssuesEvent
2021-06-14 12:36:26
Proof-Of-Humanity/proof-of-humanity-web
https://api.github.com/repos/Proof-Of-Humanity/proof-of-humanity-web
closed
Internationalization
priority: low status: available type: enhancement :sparkles:
I think that if we want PoH to become global, we should have the platform translated into different languages. Does someone know if this is already on the roadmap? If not, I can take the task of preparing the web to handle different languages and, if I still have some time, translate it to Spanish.
1.0
Internationalization - I think that if we want PoH to become global, we should have the platform translated into different languages. Does someone know if this is already on the roadmap? If not, I can take the task of preparing the web to handle different languages and, if I still have some time, translate it to Spanish.
non_process
internationalization i think that if we want poh becomes global we should have the platform translated to different languages does someone knows if this is already on the roadmap if not i can take the task of preparing the web to handle different languages and if i still have some time translate it to spanish
0
91,152
11,472,128,792
IssuesEvent
2020-02-09 15:31:52
DominicSherman/easy-budget-mobile
https://api.github.com/repos/DominicSherman/easy-budget-mobile
closed
Editing and deleting categories and expenses
design question
Should there be an `Edit` mode that you enter to change names and amounts of these things, and delete them? Or should each category or expense be selectable and take you to a screen where you can see more information about it and change information and delete it? I'm leaning towards the second option because we could show information for a variable category about all the expenses so far this month in this category as well.
1.0
Editing and deleting categories and expenses - Should there be an `Edit` mode that you enter to change names and amounts of these things, and delete them? Or should each category or expense be selectable and take you to a screen where you can see more information about it and change information and delete it? I'm leaning towards the second option because we could show information for a variable category about all the expenses so far this month in this category as well.
non_process
editing and deleting categories and expenses should there be an edit mode that you enter to change names and amounts of these things and delete them or should each category or expense be selectable and take you to a screen where you can see more information about it and change information and delete it i m leaning towards the second option because we could show information for a variable category about all the expenses so far this month in this category as well
0
18,230
24,297,831,078
IssuesEvent
2022-09-29 11:36:02
saibrotech/mentoria
https://api.github.com/repos/saibrotech/mentoria
closed
Fazer processo seletivo Santander Code 2022
processo seletivo
https://letscode.com.br/processos-seletivos/santander-coders ### Etapas - [x] Realizar inscrição para "Web Full Stack" - [x] Fazer curso online - 26/08 - [x] Fazer teste de Lógica - 21/09 - [ ] Participar de Dinâmica com especialistas - 14/09 - [ ] Realizar coding Tank - 27/09 - [ ] Verificar resultado - 14/10
1.0
Fazer processo seletivo Santander Code 2022 - https://letscode.com.br/processos-seletivos/santander-coders ### Etapas - [x] Realizar inscrição para "Web Full Stack" - [x] Fazer curso online - 26/08 - [x] Fazer teste de Lógica - 21/09 - [ ] Participar de Dinâmica com especialistas - 14/09 - [ ] Realizar coding Tank - 27/09 - [ ] Verificar resultado - 14/10
process
fazer processo seletivo santander code etapas realizar inscrição para web full stack fazer curso online fazer teste de lógica participar de dinâmica com especialistas realizar coding tank verificar resultado
1
17,871
23,815,215,812
IssuesEvent
2022-09-05 05:47:38
Open-Data-Product-Initiative/open-data-product-spec
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec
opened
Data Act and Data Governance Act compliance - review possible required changes
enhancement unprocessed
**Idea Description** As ODPS is in a strong position to become part of the EU's Data Spaces toolkit, it would make sense to become compliant with the Data Act (DA) and possibly even the Data Governance Act (DGA). One step towards compliance would be to review the possible changes needed in the standard from the DA and DGA points of view. After that, based on the results, see how and when the compliance could be achieved. About DGA: https://digital-strategy.ec.europa.eu/en/policies/data-governance-act About DA: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN About Data Spaces: http://dataspaces.info/common-european-data-spaces/#page-content
1.0
Data Act and Data Governance Act compliance - review possible required changes - **Idea Description** As ODPS is in a strong position to become part of the EU's Data Spaces toolkit, it would make sense to become compliant with the Data Act (DA) and possibly even the Data Governance Act (DGA). One step towards compliance would be to review the possible changes needed in the standard from the DA and DGA points of view. After that, based on the results, see how and when the compliance could be achieved. About DGA: https://digital-strategy.ec.europa.eu/en/policies/data-governance-act About DA: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN About Data Spaces: http://dataspaces.info/common-european-data-spaces/#page-content
process
data act and data governance act compliance review possible required changes idea description as odps is in a strong position to become part of the eu s data spaces toolkit it would make sense to become compliant with the data act da and possibly even data governance act dga one step towards compliance would be to review the possible changes needed in the standard from the da and dga points of view after that based on the results see how and when the compliance could be achieved about dga about da about data spaces
1
20,498
27,160,235,888
IssuesEvent
2023-02-17 11:13:28
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
opened
Allow exposing the inputs of a process function
type/feature request priority/nice-to-have topic/processes
The exposing functionality of the `WorkChain` class is very useful to allow writing workflows that wrap subprocesses such as other `WorkChains` and `CalcJobs`. However, this behavior is not yet supported for process functions and so we have to resort to manually copying the input specifications. Since process functions have an associated `Process` implementation that is generated dynamically, we could try to fetch its `ProcessSpec` and allow exposing its inputs.
1.0
Allow exposing the inputs of a process function - The exposing functionality of the `WorkChain` class is very useful to allow writing workflows that wrap subprocesses such as other `WorkChains` and `CalcJobs`. However, this behavior is not yet supported for process functions and so we have to resort to manually copying the input specifications. Since process functions have an associated `Process` implementation that is generated dynamically, we could try to fetch its `ProcessSpec` and allow exposing its inputs.
process
allow exposing the inputs of a process function the exposing functionality of a the workchain class is very useful to allow writing workflows that wrap subprocesses such as other workchains and calcjobs however this behavior is not yet supported for process functions and so we have to resort to manually copying the input specifications since process functions have an associated process implementation that is generated dynamically we could try to fetch its processspec and allow exposing its inputs
1
87,411
8,073,865,413
IssuesEvent
2018-08-06 20:46:58
italia/spid-testenv2
https://api.github.com/repos/italia/spid-testenv2
closed
AuthnRequest: ProtocolBinding non viene validato correttamente
bug needs regression test
Ho mandato via HTTP-Redirect una AuthnRequest contenente `ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"` e non ho ricevuto segnalazioni di errore.
1.0
AuthnRequest: ProtocolBinding non viene validato correttamente - Ho mandato via HTTP-Redirect una AuthnRequest contenente `ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"` e non ho ricevuto segnalazioni di errore.
non_process
authnrequest protocolbinding non viene validato correttamente ho mandato via http redirect una authnrequest contenente protocolbinding urn oasis names tc saml bindings http post e non ho ricevuto segnalazioni di errore
0
128,484
10,540,279,811
IssuesEvent
2019-10-02 08:00:51
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
Allow mysql_variables module to set SESSION variables
affects_2.4 database feature module mysql support:community test
<!--- Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and devel branch are affected too. --> ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_variables ##### ANSIBLE VERSION ``` ansible 2.4.3.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY Right now, there is no way to only set mysql variables per session ##### EXPECTED RESULTS A way to set the mysql variables for the current session only would be desirable
1.0
Allow mysql_variables module to set SESSION variables - <!--- Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and devel branch are affected too. --> ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_variables ##### ANSIBLE VERSION ``` ansible 2.4.3.0 config file = None configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY Right now, there is no way to only set mysql variables per session ##### EXPECTED RESULTS A way to set the mysql variables for the current session only would be desirable
non_process
allow mysql variables module to set session variables verify first that your issue request is not already reported on github also test if the latest release and devel branch are affected too issue type feature idea component name mysql variables ansible version ansible config file none configured module search path ansible python module location usr local lib dist packages ansible executable location usr local bin ansible python version default dec os environment n a summary right now there is no way to only set mysql variables per session expected results a way to set the mysql variables for the current session only would be desirable
0
8,089
11,258,640,278
IssuesEvent
2020-01-13 05:36:57
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Fix navigation_spec "page load" flaky test.
process: tests stage: needs review type: chore
### Current behavior: navigation_spec `does not log "page load" events` is really flaky. ### Desired behavior: It should always pass. ### Steps to reproduce: (app code and test code) It fails a lot on PR. ### Versions develop.
1.0
Fix navigation_spec "page load" flaky test. - ### Current behavior: navigation_spec `does not log "page load" events` is really flaky. ### Desired behavior: It should always pass. ### Steps to reproduce: (app code and test code) It fails a lot on PR. ### Versions develop.
process
fix navigation spec page load flaky test current behavior navigation spec does not log page load events is really flaky desired behavior it should always pass steps to reproduce app code and test code it fails a lot on pr versions develop
1
8,406
11,572,667,124
IssuesEvent
2020-02-21 00:49:02
MineCake147E/MonoAudio
https://api.github.com/repos/MineCake147E/MonoAudio
opened
Idea: Faster Calculation of SinusoidSource
Feature: Signal Processing 🎛️ Kind: High Latency 🐌 Priority: High 🚅 Status: Help Wanted 🤔
I have to fix the currently extremely-slow `SinusoidSource` which sucked load tests for `SplineResampler`. I have to seek THE FASTEST IN THE WHOLE WORLD FOR ALL PLATFORMS in order to spread MonoAudio all over the world, so it DEFINITELY ABSOLUTELY SUCKS. So I came across the ideas below: - Real part of the `Complex` number rotation - Simple oscillation simulation e.g. Spring - Oscillate square wave at twice the target frequency sampled and resample it
1.0
Idea: Faster Calculation of SinusoidSource - I have to fix the currently extremely-slow `SinusoidSource` which sucked load tests for `SplineResampler`. I have to seek THE FASTEST IN THE WHOLE WORLD FOR ALL PLATFORMS in order to spread MonoAudio all over the world, so it DEFINITELY ABSOLUTELY SUCKS. So I came across the ideas below: - Real part of the `Complex` number rotation - Simple oscillation simulation e.g. Spring - Oscillate square wave at twice the target frequency sampled and resample it
process
idea faster calculation of sinusoidsource i have to fix the currently extremely slow sinusoidsource which sucked load tests for splineresampler i have to seek the fastest in the whole world for all platforms in order to spread monoaudio all over the world so it definitely absolutely sucks so i came across the ideas below real part of the complex number rotation simple oscillation simulation e g spring oscillate square wave at twice the target frequency sampled and resample it
1
74,344
7,398,708,522
IssuesEvent
2018-03-19 07:43:44
SunwellTracker/issues
https://api.github.com/repos/SunwellTracker/issues
closed
Into The Wild Green Yonder
Works locally | Requires testing duplicate
Description: Into The Wild Green Yonder does not work as it should. How it works: Well, you fly on a proto drake and you can't pick up the captured crusaders. They just talk and nothing else happens. How it should work: You fly on a proto drake to rescue 3 Crusaders that are captured and fly back with them to the Argent Vanguard. Source (you should point out proofs of your report, please give us some source):
1.0
Into The Wild Green Yonder - Description: Into The Wild Green Yonder does not work as it should. How it works: Well, you fly on a proto drake and you can't pick up the captured crusaders. They just talk and nothing else happens. How it should work: You fly on a proto drake to rescue 3 Crusaders that are captured and fly back with them to the Argent Vanguard. Source (you should point out proofs of your report, please give us some source):
non_process
into the wild green yonder decription into the wild green yonder does not work as it should how it works well you fly on a proto drake and you cant pick up the captured crusaders they just talk and nothing else happens how it should work you fly on a proto drake to rescue crusaders that is captured and fly back with them to the argen vanguard source you should point out proofs of your report please give us some source
0
220,360
7,359,392,000
IssuesEvent
2018-03-10 05:54:08
STEP-tw/battleship-phoenix
https://api.github.com/repos/STEP-tw/battleship-phoenix
closed
highlighting sunk ship on enemy's base
Medium Priority bug enhancement small
As a _player_ I want to _see my enemy's ships that I have made sunk_ So that I _can know whether a ship sunk or not_ **Acceptance Criteria** - [x] Criteria 1 - Given _enemy's base_ - When _player hit on it and one whole ship is sunk_ - Then _that ship should be highlighted_
1.0
highlighting sunk ship on enemy's base - As a _player_ I want to _see my enemy's ships that I have made sunk_ So that I _can know whether a ship sunk or not_ **Acceptance Criteria** - [x] Criteria 1 - Given _enemy's base_ - When _player hit on it and one whole ship is sunk_ - Then _that ship should be highlighted_
non_process
highlighting sunk ship on enemy s base as a player i want to see my enemy s ships that i have made sunk so that i can know whether a ship sunk or not acceptance criteria criteria given enemy s base when player hit on it and one whole ship is sunk then that ship should be highlighted
0
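The acceptance criterion in the battleship record above (highlight a ship once the whole ship is sunk) reduces to a subset check over the ship's cells. A minimal sketch, with invented names — this is not code from the battleship-phoenix repository:

```python
def is_sunk(ship_cells, hits):
    """A ship is sunk exactly when every one of its cells has been hit."""
    return set(ship_cells) <= set(hits)

def sunk_ships(fleet, hits):
    """Return the names of ships that should be highlighted on the enemy grid.

    fleet: mapping of ship name -> list of (row, col) cells
    hits:  set of (row, col) cells the player has hit so far
    """
    return [name for name, cells in fleet.items() if is_sunk(cells, hits)]
```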
123,501
12,199,668,020
IssuesEvent
2020-04-30 02:19:25
temporalio/temporal
https://api.github.com/repos/temporalio/temporal
closed
Link to CLI docs is broken in CLI README
bug documentation
Currently the root README suggests users learn the CLI by visiting this sub-README https://github.com/temporalio/temporal/blob/master/tools/cli/README.md which then links to our external site https://docs.temporal.io/docs/08_cli. The external link is outdated and instead should be: https://docs.temporal.io/docs/learn-cli Alex suggested that it might be worthwhile to do a pass over all repos looking for broken links.
1.0
Link to CLI docs is broken in CLI README - Currently the root README suggests users learn the CLI by visiting this sub-README https://github.com/temporalio/temporal/blob/master/tools/cli/README.md which then links to our external site https://docs.temporal.io/docs/08_cli. The external link is outdated and instead should be: https://docs.temporal.io/docs/learn-cli Alex suggested that it might be worthwhile to do a pass over all repos looking for broken links.
non_process
link to cli docs is broken in cli readme currently the root readme suggests users learn the cli by visiting this sub readme which then links to our external site the external link is outdated and instead should be alex suggested that it might be worthwhile to do a pass over all repos looking for broken links
0
198,594
15,713,126,548
IssuesEvent
2021-03-27 14:56:51
AY2021S2-CS2103-T14-1/tp
https://api.github.com/repos/AY2021S2-CS2103-T14-1/tp
opened
Code Style Bug: CommandResult booleans not following naming style
priority.High severity.Low type.Documentation
## Classes Affected * `seedu.address.logic.commands.CommandResult` ## Expected Boolean variables follow the [naming convention](https://se-education.org/guides/conventions/java/index.html). ## Actual ![image](https://user-images.githubusercontent.com/77471078/112724640-494e7900-8f4f-11eb-9e19-76aa866b5b13.png)
1.0
Code Style Bug: CommandResult booleans not following naming style - ## Classes Affected * `seedu.address.logic.commands.CommandResult` ## Expected Boolean variables follow the [naming convention](https://se-education.org/guides/conventions/java/index.html). ## Actual ![image](https://user-images.githubusercontent.com/77471078/112724640-494e7900-8f4f-11eb-9e19-76aa866b5b13.png)
non_process
code style bug commandresult booleans not following naming style classes affected seedu address logic commands commandresult expected boolean variables follow the actual
0
578,911
17,156,551,520
IssuesEvent
2021-07-14 07:41:40
googleapis/java-bigtable-hbase
https://api.github.com/repos/googleapis/java-bigtable-hbase
closed
bigtable.hbase.TestFilters: testInterleaveNoDuplicateCells failed
api: bigtable flakybot: issue priority: p1 type: bug
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: a891335ce3179c45fade4f3683b7e09d38d0107a buildURL: [Build Status](https://source.cloud.google.com/results/invocations/64905140-8b04-4270-9ed2-a91e73d97e39), [Sponge](http://sponge2/64905140-8b04-4270-9ed2-a91e73d97e39) status: failed <details><summary>Test output</summary><br><pre>org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 100 actions: UnauthenticatedException: 100 times, servers with issues: bigtable.googleapis.com at com.google.cloud.bigtable.hbase.BatchExecutor.batchCallback(BatchExecutor.java:308) at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:237) at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:231) at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:375) at com.google.cloud.bigtable.hbase.AbstractTestFilters.addDataForTesting(AbstractTestFilters.java:2349) at com.google.cloud.bigtable.hbase.AbstractTestFilters.testInterleaveNoDuplicateCells(AbstractTestFilters.java:2129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410) at org.apache.maven.surefire.junitcore.pc.InvokerStrategy.schedule(InvokerStrategy.java:54) at org.apache.maven.surefire.junitcore.pc.Scheduler.schedule(Scheduler.java:367) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Suppressed: com.google.api.gax.batching.BatchingException: Batching finished with 1 batches failed to apply due to: 1 ApiException(1 INTERNAL) and 0 partial failures. 
at com.google.api.gax.batching.BatcherStats.asException(BatcherStats.java:147) at com.google.api.gax.batching.BatcherImpl.close(BatcherImpl.java:290) at com.google.cloud.bigtable.hbase.wrappers.veneer.BulkMutationVeneerApi.close(BulkMutationVeneerApi.java:68) at com.google.cloud.bigtable.hbase.BigtableBufferedMutatorHelper.close(BigtableBufferedMutatorHelper.java:91) at com.google.cloud.bigtable.hbase.BatchExecutor.close(BatchExecutor.java:150) at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:376) ... 33 more </pre></details>
1.0
bigtable.hbase.TestFilters: testInterleaveNoDuplicateCells failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: a891335ce3179c45fade4f3683b7e09d38d0107a buildURL: [Build Status](https://source.cloud.google.com/results/invocations/64905140-8b04-4270-9ed2-a91e73d97e39), [Sponge](http://sponge2/64905140-8b04-4270-9ed2-a91e73d97e39) status: failed <details><summary>Test output</summary><br><pre>org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 100 actions: UnauthenticatedException: 100 times, servers with issues: bigtable.googleapis.com at com.google.cloud.bigtable.hbase.BatchExecutor.batchCallback(BatchExecutor.java:308) at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:237) at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:231) at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:375) at com.google.cloud.bigtable.hbase.AbstractTestFilters.addDataForTesting(AbstractTestFilters.java:2349) at com.google.cloud.bigtable.hbase.AbstractTestFilters.testInterleaveNoDuplicateCells(AbstractTestFilters.java:2129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410) at org.apache.maven.surefire.junitcore.pc.InvokerStrategy.schedule(InvokerStrategy.java:54) at org.apache.maven.surefire.junitcore.pc.Scheduler.schedule(Scheduler.java:367) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Suppressed: com.google.api.gax.batching.BatchingException: Batching finished with 1 batches failed to apply due to: 1 ApiException(1 INTERNAL) and 0 partial failures. 
at com.google.api.gax.batching.BatcherStats.asException(BatcherStats.java:147) at com.google.api.gax.batching.BatcherImpl.close(BatcherImpl.java:290) at com.google.cloud.bigtable.hbase.wrappers.veneer.BulkMutationVeneerApi.close(BulkMutationVeneerApi.java:68) at com.google.cloud.bigtable.hbase.BigtableBufferedMutatorHelper.close(BigtableBufferedMutatorHelper.java:91) at com.google.cloud.bigtable.hbase.BatchExecutor.close(BatchExecutor.java:150) at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:376) ... 33 more </pre></details>
non_process
bigtable hbase testfilters testinterleavenoduplicatecells failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output org apache hadoop hbase client retriesexhaustedwithdetailsexception failed actions unauthenticatedexception times servers with issues bigtable googleapis com at com google cloud bigtable hbase batchexecutor batchcallback batchexecutor java at com google cloud bigtable hbase batchexecutor batch batchexecutor java at com google cloud bigtable hbase batchexecutor batch batchexecutor java at com google cloud bigtable hbase abstractbigtabletable put abstractbigtabletable java at com google cloud bigtable hbase abstracttestfilters adddatafortesting abstracttestfilters java at com google cloud bigtable hbase abstracttestfilters testinterleavenoduplicatecells abstracttestfilters java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore pc scheduler run scheduler java at org apache maven surefire junitcore pc invokerstrategy schedule invokerstrategy java at org apache maven surefire 
junitcore pc scheduler schedule scheduler java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore pc scheduler run scheduler java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax batching batchingexception batching finished with batches failed to apply due to apiexception internal and partial failures at com google api gax batching batcherstats asexception batcherstats java at com google api gax batching batcherimpl close batcherimpl java at com google cloud bigtable hbase wrappers veneer bulkmutationveneerapi close bulkmutationveneerapi java at com google cloud bigtable hbase bigtablebufferedmutatorhelper close bigtablebufferedmutatorhelper java at com google cloud bigtable hbase batchexecutor close batchexecutor java at com google cloud bigtable hbase abstractbigtabletable put abstractbigtabletable java more
0
7,683
10,853,937,675
IssuesEvent
2019-11-13 15:34:44
LucaFalasca/Stoocky
https://api.github.com/repos/LucaFalasca/Stoocky
opened
Deleting element from the Shopping list
Functional Requirement
The system shall provide a button to delete a shopping list on request.
1.0
Deleting element from the Shopping list - The system shall provide a button to delete a shopping list on request.
non_process
deleting element from the shopping list the system shall provide a button to delete a shopping list on request
0
11,376
14,219,173,048
IssuesEvent
2020-11-17 12:55:01
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
opened
Remove `process_class` argument from `Process.exposed_outputs()`
topic/processes type/feature request
### Is your feature request related to a problem? Please describe The [`Process.exposed_outputs()` method](https://github.com/aiidateam/aiida-core/blob/develop/aiida/engine/processes/process.py#L823) requires both the `node` that emits the outputs and its corresponding `process_class` as input arguments. The latter seems redundant, since the `process_class` can be obtained from the `ProcessNode.process_class` property of the `node` argument. Is there some reason why we can't use this property here? ### Describe the solution you'd like Deprecate the `process_class` input argument from the `Process.exposed_outputs()` method, and remove it in v2.0.0.
1.0
Remove `process_class` argument from `Process.exposed_outputs()` - ### Is your feature request related to a problem? Please describe The [`Process.exposed_outputs()` method](https://github.com/aiidateam/aiida-core/blob/develop/aiida/engine/processes/process.py#L823) requires both the `node` that emits the outputs and its corresponding `process_class` as input arguments. The latter seems redundant, since the `process_class` can be obtained from the `ProcessNode.process_class` property of the `node` argument. Is there some reason why we can't use this property here? ### Describe the solution you'd like Deprecate the `process_class` input argument from the `Process.exposed_outputs()` method, and remove it in v2.0.0.
process
remove process class argument from process exposed outputs is your feature request related to a problem please describe the requires both the node that emits the outputs and its corresponding process class as input arguments the latter seems redundant since the process class can be obtained from the processnode process class property of the node argument is there some reason why we can t use this property here describe the solution you d like deprecate the process class input argument from the process exposed outputs method and remove it in
1
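The deprecation proposed in the aiida-core record above follows a common Python pattern: keep accepting the old argument, emit a `DeprecationWarning`, and derive the value from the other input. The sketch below is generic and not the actual `Process.exposed_outputs()` implementation; the `Node` class is a stand-in for a process node with the `process_class` property the record mentions:

```python
import warnings

class Node:
    """Minimal stand-in; real process nodes expose a `process_class` property."""
    @property
    def process_class(self):
        return type(self)

def exposed_outputs(node, process_class=None):
    """Accept the old argument for now, but warn and derive it from `node`."""
    if process_class is not None:
        warnings.warn(
            "`process_class` is deprecated and will be removed in v2.0.0; "
            "it is now derived from `node.process_class`",
            DeprecationWarning,
            stacklevel=2,
        )
    return node.process_class  # the derived value is used either way
```

Passing `stacklevel=2` makes the warning point at the caller's line rather than at the library internals.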
33,553
16,015,773,124
IssuesEvent
2021-04-20 15:49:48
alphagov/govuk-design-system
https://api.github.com/repos/alphagov/govuk-design-system
opened
Add Google Analytics to the design system
analytics epic performance
<!-- This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate. --> ## What This work looks at reinstating Google Tag Manager and Google Analytics on [design-system.service.gov.uk](https://design-system.service.gov.uk/) and [frontend.design-system.service.gov.uk](https://frontend.design-system.service.gov.uk/#gov-uk-frontend). These used to be used, but were [removed as a temporary solution](https://github.com/alphagov/govuk-design-system/pull/1106) when the ICO guidance on cookie consent was published. ## Who needs to know about this Developers, Performance Analyst, Content Designer ## Done when - [ ] [Review Google Tag Manager set up](https://github.com/alphagov/govuk-design-system/issues/1605) - [ ] [Explore moving to GA4](https://github.com/alphagov/govuk-design-system/issues/1607) - [ ] [Draft an IA plan for Google Analytics and Google Tag Manager](https://github.com/alphagov/govuk-design-system/issues/1608) - [ ] [Draft cookie banner and cookie page content](https://github.com/alphagov/govuk-design-system/issues/1609)
True
Add Google Analytics to the design system - <!-- This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate. --> ## What This work looks at reinstating Google Tag Manager and Google Analytics on [design-system.service.gov.uk](https://design-system.service.gov.uk/) and [frontend.design-system.service.gov.uk](https://frontend.design-system.service.gov.uk/#gov-uk-frontend). These used to be used, but were [removed as a temporary solution](https://github.com/alphagov/govuk-design-system/pull/1106) when the ICO guidance on cookie consent was published. ## Who needs to know about this Developers, Performance Analyst, Content Designer ## Done when - [ ] [Review Google Tag Manager set up](https://github.com/alphagov/govuk-design-system/issues/1605) - [ ] [Explore moving to GA4](https://github.com/alphagov/govuk-design-system/issues/1607) - [ ] [Draft an IA plan for Google Analytics and Google Tag Manager](https://github.com/alphagov/govuk-design-system/issues/1608) - [ ] [Draft cookie banner and cookie page content](https://github.com/alphagov/govuk-design-system/issues/1609)
non_process
add google analytics to the design system this is a template for any issues that aren’t bug reports or new feature requests the headings in this section provide examples of the information you might want to include but feel free to add delete sections where appropriate what this work looks at reinstating google tag manager and google analytics on and these used to be used but were when the ico guidance on cookie consent was published who needs to know about this developers performance analyst content designer done when
0
195,179
22,294,747,841
IssuesEvent
2022-06-12 22:03:00
opencad-app/OpenCAD-php
https://api.github.com/repos/opencad-app/OpenCAD-php
closed
Force PHP 8.0 Requirement
Security
Per PHP's EOL documentation, v. 7.4 is EOL on 1 January 2023. 8.0 should be recommended and 8.0 should be the minimum.
True
Force PHP 8.0 Requirement - Per PHP's EOL documentation, v. 7.4 is EOL on 1 January 2023. 8.0 should be recommended and 8.0 should be the minimum.
non_process
force php requirement per php s eol documentation v is eol on january should be recommended and should be the minimum
0
663,249
22,171,074,331
IssuesEvent
2022-06-06 00:37:07
GoogleCloudPlatform/cloud-sql-python-connector
https://api.github.com/repos/GoogleCloudPlatform/cloud-sql-python-connector
closed
Improve test coverage and infrastructure for Python Connector
type: cleanup priority: p1 api: cloudsql
- Move all mock test objects into a single file for better usage across tests. - Improve mocks to allow them to be easily altered for wide test cases (ability to pass in params and alter attributes) - Add mock for generating server cert and key: [Go Connector](https://github.com/GoogleCloudPlatform/cloud-sql-go-connector/blob/2203214c5a6c8aa25ecab3cf0466abac95529085/internal/mock/cloudsql.go#L239) - Add tests to improve coverage of `instance_connection_manager.py`
1.0
Improve test coverage and infrastructure for Python Connector - - Move all mock test objects into a single file for better usage across tests. - Improve mocks to allow them to be easily altered for wide test cases (ability to pass in params and alter attributes) - Add mock for generating server cert and key: [Go Connector](https://github.com/GoogleCloudPlatform/cloud-sql-go-connector/blob/2203214c5a6c8aa25ecab3cf0466abac95529085/internal/mock/cloudsql.go#L239) - Add tests to improve coverage of `instance_connection_manager.py`
non_process
improve test coverage and infrastructure for python connector move all mock test objects into a single file for better usage across tests improve mocks to allow them to be easily altered for wide test cases ability to pass in params and alter attributes add mock for generating server cert and key add tests to improve coverage of instance connection manager py
0
9,334
12,340,784,126
IssuesEvent
2020-05-14 20:35:17
DiSSCo/user-stories
https://api.github.com/repos/DiSSCo/user-stories
opened
a toolbox to link persistent identifiers
1. NH museum 1. Research 4. Data processing ICEDIG-SURVEY Specimen level
As a Scientist I want to link specimens that belong to the same species according my taxonomic expertise -- without a priori assigning species names -- so that I can tag all specimens that belong to one species for this I need a toolbox to link persistent identifiers
1.0
a toolbox to link persistent identifiers - As a Scientist I want to link specimens that belong to the same species according my taxonomic expertise -- without a priori assigning species names -- so that I can tag all specimens that belong to one species for this I need a toolbox to link persistent identifiers
process
a toolbox to link persistent identifiers as a scientist i want to link specimens that belong to the same species according my taxonomic expertise without a priori assigning species names so that i can tag all specimens that belong to one species for this i need a toolbox to link persistent identifiers
1
2,651
5,429,206,671
IssuesEvent
2017-03-03 17:48:53
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
opened
check presence of vertfile if flag -vert (or -l, which is deprecated) is used
bug fix:minor sct_process_segmentation
to avoid this: ~~~ sct_process_segmentation -i t2s_segm.nii.gz -p csa -size 0 -l 1:7 -- Spinal Cord Toolbox (master/4159bcacd16124ed920d90bd2696299a287bc987) Running /Users/almartin/sct/scripts/sct_process_segmentation.py -i t2s_segm.nii.gz -p csa -size 0 -l 1:7 WARNING : -l is a deprecated argument and will no longer be updated in future versions. Changing argument to -vert. Check parameters: .. segmentation file: t2s_segm.nii.gz Create temporary folder... mkdir tmp.170302082521_807481/ Copying input data to tmp folder and convert to nii... sct_convert -i /Users/almartin/Documents/mr_data/e23242/t2s/t2s_segm.nii.gz -o tmp.170302082521_807481/segmentation.nii.gz Change orientation to RPI... sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz Open segmentation volume... Get data dimensions... 107 x 105 x 12 Smooth centerline/segmentation... .. Get center of mass of the centerline/segmentation... .. Computing physical coordinates of centerline/segmentation... .. Smoothing algo = nurbs /Users/almartin/sct/scripts/msct_smooth.py:279: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future. if z == None: Fitting centerline using B-spline approximation... Test: # of control points = 5 Error on approximation = 0.2 mm Test: # of control points = 6 WARNING: NURBS instability -> wrong reconstruction Test: # of control points = 7 WARNING: NURBS instability -> wrong reconstruction Test: # of control points = 8 Error on approximation = 0.03 mm Test: # of control points = 9 Error on approximation = 0.01 mm Test: # of control points = 10 Error on approximation = 0.14 mm Test: # of control points = 11 Error on approximation = 1.13 mm The fitting of the curve was done using 9 control points: the number that gave the best results. Error on approximation = 0.01 mm /Users/almartin/sct/scripts/msct_image.py:780: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future. 
if coordi != None: Compute CSA... Smooth CSA across slices... .. No smoothing! Create volume of CSA values... Create volume of angle values... sct_image -i csa_volume_RPI.nii.gz -setorient RPI -o csa_volume_in_initial_orientation.nii.gz sct_image -i angle_volume_RPI.nii.gz -setorient RPI -o angle_volume_in_initial_orientation.nii.gz Generate output files... WARNING: File /Users/almartin/Documents/mr_data/e23242/t2s/csa_image.nii.gz already exists. Deleting it... File created: /Users/almartin/Documents/mr_data/e23242/t2s/csa_image.nii.gz WARNING: File /Users/almartin/Documents/mr_data/e23242/t2s/angle_image.nii.gz already exists. Deleting it... File created: /Users/almartin/Documents/mr_data/e23242/t2s/angle_image.nii.gz Display CSA per slice: z = 0, CSA = 90.203696 mm^2, Angle = 10.851835 deg z = 1, CSA = 79.904919 mm^2, Angle = 6.384153 deg z = 2, CSA = 98.222682 mm^2, Angle = 5.705674 deg z = 3, CSA = 82.473333 mm^2, Angle = 4.168136 deg z = 4, CSA = 88.225493 mm^2, Angle = 2.880155 deg z = 5, CSA = 88.020798 mm^2, Angle = 0.911656 deg z = 6, CSA = 90.916242 mm^2, Angle = 1.023173 deg z = 7, CSA = 89.958422 mm^2, Angle = 2.037465 deg z = 8, CSA = 87.843971 mm^2, Angle = 1.626386 deg z = 9, CSA = 90.442237 mm^2, Angle = 1.495058 deg z = 10, CSA = 90.619624 mm^2, Angle = 0.658208 deg z = 11, CSA = 89.020103 mm^2, Angle = 7.852606 deg Save results in: /Users/almartin/Documents/mr_data/e23242/t2s/csa_per_slice.txt Save results in: /Users/almartin/Documents/mr_data/e23242/t2s/csa_per_slice.pickle Selected vertebral levels... 1:7 OK: ./label/template/PAM50_levels.nii.gz Find slices corresponding to vertebral levels based on the centerline... 
Traceback (most recent call last): File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 1236, in <module> main(sys.argv[1:]) File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 255, in main compute_csa(fname_segmentation, output_folder, overwrite, verbose, remove_temp_files, step, smoothing_param, figure_fit, slices, vert_lev, fname_vertebral_labeling, algo_fitting=param.algo_fitting, type_window=param.type_window, window_length=param.window_length, angle_correction=angle_correction, use_phys_coord=use_phys_coord) File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 820, in compute_csa slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels_based_centerline(vert_levels, im_vertebral_labeling.data, z_centerline) File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 1113, in get_slices_matching_with_vertebral_levels_based_centerline vertebral_labeling_slice_zz = vertebral_labeling_data[:, :, int(zz)] IndexError: index -61 is out of bounds for axis 2 with size 12 ~~~
1.0
check presence of vertfile if flag -vert (or -l, which is deprecated) is used - to avoid this: ~~~ sct_process_segmentation -i t2s_segm.nii.gz -p csa -size 0 -l 1:7 -- Spinal Cord Toolbox (master/4159bcacd16124ed920d90bd2696299a287bc987) Running /Users/almartin/sct/scripts/sct_process_segmentation.py -i t2s_segm.nii.gz -p csa -size 0 -l 1:7 WARNING : -l is a deprecated argument and will no longer be updated in future versions. Changing argument to -vert. Check parameters: .. segmentation file: t2s_segm.nii.gz Create temporary folder... mkdir tmp.170302082521_807481/ Copying input data to tmp folder and convert to nii... sct_convert -i /Users/almartin/Documents/mr_data/e23242/t2s/t2s_segm.nii.gz -o tmp.170302082521_807481/segmentation.nii.gz Change orientation to RPI... sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz Open segmentation volume... Get data dimensions... 107 x 105 x 12 Smooth centerline/segmentation... .. Get center of mass of the centerline/segmentation... .. Computing physical coordinates of centerline/segmentation... .. Smoothing algo = nurbs /Users/almartin/sct/scripts/msct_smooth.py:279: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future. if z == None: Fitting centerline using B-spline approximation... Test: # of control points = 5 Error on approximation = 0.2 mm Test: # of control points = 6 WARNING: NURBS instability -> wrong reconstruction Test: # of control points = 7 WARNING: NURBS instability -> wrong reconstruction Test: # of control points = 8 Error on approximation = 0.03 mm Test: # of control points = 9 Error on approximation = 0.01 mm Test: # of control points = 10 Error on approximation = 0.14 mm Test: # of control points = 11 Error on approximation = 1.13 mm The fitting of the curve was done using 9 control points: the number that gave the best results. 
Error on approximation = 0.01 mm /Users/almartin/sct/scripts/msct_image.py:780: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future. if coordi != None: Compute CSA... Smooth CSA across slices... .. No smoothing! Create volume of CSA values... Create volume of angle values... sct_image -i csa_volume_RPI.nii.gz -setorient RPI -o csa_volume_in_initial_orientation.nii.gz sct_image -i angle_volume_RPI.nii.gz -setorient RPI -o angle_volume_in_initial_orientation.nii.gz Generate output files... WARNING: File /Users/almartin/Documents/mr_data/e23242/t2s/csa_image.nii.gz already exists. Deleting it... File created: /Users/almartin/Documents/mr_data/e23242/t2s/csa_image.nii.gz WARNING: File /Users/almartin/Documents/mr_data/e23242/t2s/angle_image.nii.gz already exists. Deleting it... File created: /Users/almartin/Documents/mr_data/e23242/t2s/angle_image.nii.gz Display CSA per slice: z = 0, CSA = 90.203696 mm^2, Angle = 10.851835 deg z = 1, CSA = 79.904919 mm^2, Angle = 6.384153 deg z = 2, CSA = 98.222682 mm^2, Angle = 5.705674 deg z = 3, CSA = 82.473333 mm^2, Angle = 4.168136 deg z = 4, CSA = 88.225493 mm^2, Angle = 2.880155 deg z = 5, CSA = 88.020798 mm^2, Angle = 0.911656 deg z = 6, CSA = 90.916242 mm^2, Angle = 1.023173 deg z = 7, CSA = 89.958422 mm^2, Angle = 2.037465 deg z = 8, CSA = 87.843971 mm^2, Angle = 1.626386 deg z = 9, CSA = 90.442237 mm^2, Angle = 1.495058 deg z = 10, CSA = 90.619624 mm^2, Angle = 0.658208 deg z = 11, CSA = 89.020103 mm^2, Angle = 7.852606 deg Save results in: /Users/almartin/Documents/mr_data/e23242/t2s/csa_per_slice.txt Save results in: /Users/almartin/Documents/mr_data/e23242/t2s/csa_per_slice.pickle Selected vertebral levels... 1:7 OK: ./label/template/PAM50_levels.nii.gz Find slices corresponding to vertebral levels based on the centerline... 
Traceback (most recent call last): File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 1236, in <module> main(sys.argv[1:]) File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 255, in main compute_csa(fname_segmentation, output_folder, overwrite, verbose, remove_temp_files, step, smoothing_param, figure_fit, slices, vert_lev, fname_vertebral_labeling, algo_fitting=param.algo_fitting, type_window=param.type_window, window_length=param.window_length, angle_correction=angle_correction, use_phys_coord=use_phys_coord) File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 820, in compute_csa slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels_based_centerline(vert_levels, im_vertebral_labeling.data, z_centerline) File "/Users/almartin/sct/scripts/sct_process_segmentation.py", line 1113, in get_slices_matching_with_vertebral_levels_based_centerline vertebral_labeling_slice_zz = vertebral_labeling_data[:, :, int(zz)] IndexError: index -61 is out of bounds for axis 2 with size 12 ~~~
process
check presence of vertfile if flag vert or l which is deprecated is used to avoid this sct process segmentation i segm nii gz p csa size l spinal cord toolbox master running users almartin sct scripts sct process segmentation py i segm nii gz p csa size l warning l is a deprecated argument and will no longer be updated in future versions changing argument to vert check parameters segmentation file segm nii gz create temporary folder mkdir tmp copying input data to tmp folder and convert to nii sct convert i users almartin documents mr data segm nii gz o tmp segmentation nii gz change orientation to rpi sct image i segmentation nii gz setorient rpi o segmentation rpi nii gz open segmentation volume get data dimensions x x smooth centerline segmentation get center of mass of the centerline segmentation computing physical coordinates of centerline segmentation smoothing algo nurbs users almartin sct scripts msct smooth py futurewarning comparison to none will result in an elementwise object comparison in the future if z none fitting centerline using b spline approximation test of control points error on approximation mm test of control points warning nurbs instability wrong reconstruction test of control points warning nurbs instability wrong reconstruction test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm the fitting of the curve was done using control points the number that gave the best results error on approximation mm users almartin sct scripts msct image py futurewarning comparison to none will result in an elementwise object comparison in the future if coordi none compute csa smooth csa across slices no smoothing create volume of csa values create volume of angle values sct image i csa volume rpi nii gz setorient rpi o csa volume in initial orientation nii gz sct image i angle volume rpi nii gz setorient rpi o angle 
volume in initial orientation nii gz generate output files warning file users almartin documents mr data csa image nii gz already exists deleting it file created users almartin documents mr data csa image nii gz warning file users almartin documents mr data angle image nii gz already exists deleting it file created users almartin documents mr data angle image nii gz display csa per slice z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg z csa mm angle deg save results in users almartin documents mr data csa per slice txt save results in users almartin documents mr data csa per slice pickle selected vertebral levels ok label template levels nii gz find slices corresponding to vertebral levels based on the centerline traceback most recent call last file users almartin sct scripts sct process segmentation py line in main sys argv file users almartin sct scripts sct process segmentation py line in main compute csa fname segmentation output folder overwrite verbose remove temp files step smoothing param figure fit slices vert lev fname vertebral labeling algo fitting param algo fitting type window param type window window length param window length angle correction angle correction use phys coord use phys coord file users almartin sct scripts sct process segmentation py line in compute csa slices vert levels list warning get slices matching with vertebral levels based centerline vert levels im vertebral labeling data z centerline file users almartin sct scripts sct process segmentation py line in get slices matching with vertebral levels based centerline vertebral labeling slice zz vertebral labeling data indexerror index is out of bounds for axis with size
1
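The `IndexError` in the traceback above comes from slicing the vertebral labeling volume with an out-of-range z index. The fix the issue proposes is to verify the vert file up front; a complementary guard is a bounds check before indexing, sketched below (illustrative only, not SCT's actual code; the function name is made up):

```python
def safe_label_slice(vertebral_labeling_data, zz):
    """Return slice zz of a 3-D labeling volume, failing with a clear message.

    Guards against the IndexError in the traceback above by checking that
    zz lies within the volume's z extent before indexing. The data argument
    is any numpy-like array exposing .shape and [:, :, z] indexing.
    """
    n_slices = vertebral_labeling_data.shape[2]
    z = int(zz)
    if not 0 <= z < n_slices:
        raise ValueError(
            "slice index %d is outside the labeling volume (0..%d); "
            "check that the vertebral labeling file matches the input"
            % (z, n_slices - 1)
        )
    return vertebral_labeling_data[:, :, z]
```

With this guard, an index like -61 against a 12-slice volume produces an actionable message instead of a bare numpy `IndexError`.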
9,095
12,166,198,865
IssuesEvent
2020-04-27 08:52:16
dotenv-linter/dotenv-linter
https://api.github.com/repos/dotenv-linter/dotenv-linter
closed
Automate AUR releases
help wanted process
@mstruebing Hi there! Is it possible to automate releases for [dotenv-linter-bin](https://aur.archlinux.org/packages/dotenv-linter-bin) via GitHub Actions? What do you think about it?
1.0
Automate AUR releases - @mstruebing Hi there! Is it possible to automate releases for [dotenv-linter-bin](https://aur.archlinux.org/packages/dotenv-linter-bin) via GitHub Actions? What do you think about it?
process
automate aur releases mstruebing hi there is it possible to automate releases for via github actions what do you think about it
1
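Automating an AUR `-bin` release as asked above mostly means regenerating the `PKGBUILD` (new `pkgver`, reset `pkgrel`, new checksums), regenerating `.SRCINFO` with `makepkg --printsrcinfo`, and pushing to the AUR git remote. The version-bump step can be sketched like this (hypothetical helper; checksum handling and the git push are omitted):

```python
import re

def bump_pkgbuild(text, new_version):
    """Return PKGBUILD text with pkgver replaced and pkgrel reset to 1.

    One step of an automated AUR release; regenerating checksums and
    running `makepkg --printsrcinfo > .SRCINFO` happen afterwards.
    """
    text = re.sub(r"(?m)^pkgver=.*$", "pkgver=%s" % new_version, text)
    text = re.sub(r"(?m)^pkgrel=.*$", "pkgrel=1", text)
    return text
```

In a GitHub Actions job this would run once the release tag is known, before committing the updated files to the AUR repository.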
19,928
26,396,882,504
IssuesEvent
2023-01-12 20:18:11
Jigsaw-Code/outline-client
https://api.github.com/repos/Jigsaw-Code/outline-client
closed
Release Go stack
prioritised os/windows os/linux release process
The network stack is selected based on the `NETWORK_STACK` variable, which is [set by webpack](https://github.com/Jigsaw-Code/outline-client/blob/aabedadbe703345ff4e2663ec12fd1b8d0615a15/src/electron/electron_main.webpack.js#L42) and can be overridden with a env variable with the same name at build time. We can migrate Linux and Windows independently by setting the env variable when calling the `release_{linux,windows}` action. We need a way to validate it's using the right stack, perhaps we can look at the debug logs. - [x] Pre-check (Windows & Linux) - [x] Update code repository - [ ] Release pipeline dry run for Linux - [ ] Post-validate and publish (Linux) - [ ] Manual release for Windows - [ ] Post-validate and publish (Windows) - [ ] Delete `NETWORK_STACK`, https://github.com/Jigsaw-Code/outline-client/blob/master/src/electron/sslibev_badvpn_tunnel.ts and its dependencies @daniellacosse it may be helpful to prioritize linux in the release process, since it doesn't need to be signed and we can leverage the clean build from the CI. For Windows we currently have to build it locally anyway, so it's not blocked by the release process.
1.0
Release Go stack - The network stack is selected based on the `NETWORK_STACK` variable, which is [set by webpack](https://github.com/Jigsaw-Code/outline-client/blob/aabedadbe703345ff4e2663ec12fd1b8d0615a15/src/electron/electron_main.webpack.js#L42) and can be overridden with a env variable with the same name at build time. We can migrate Linux and Windows independently by setting the env variable when calling the `release_{linux,windows}` action. We need a way to validate it's using the right stack, perhaps we can look at the debug logs. - [x] Pre-check (Windows & Linux) - [x] Update code repository - [ ] Release pipeline dry run for Linux - [ ] Post-validate and publish (Linux) - [ ] Manual release for Windows - [ ] Post-validate and publish (Windows) - [ ] Delete `NETWORK_STACK`, https://github.com/Jigsaw-Code/outline-client/blob/master/src/electron/sslibev_badvpn_tunnel.ts and its dependencies @daniellacosse it may be helpful to prioritize linux in the release process, since it doesn't need to be signed and we can leverage the clean build from the CI. For Windows we currently have to build it locally anyway, so it's not blocked by the release process.
process
release go stack the network stack is selected based on the network stack variable which is and can be overridden with a env variable with the same name at build time we can migrate linux and windows independently by setting the env variable when calling the release linux windows action we need a way to validate it s using the right stack perhaps we can look at the debug logs pre check windows linux update code repository release pipeline dry run for linux post validate and publish linux manual release for windows post validate and publish windows delete network stack and its dependencies daniellacosse it may be helpful to prioritize linux in the release process since it doesn t need to be signed and we can leverage the clean build from the ci for windows we currently have to build it locally anyway so it s not blocked by the release process
1
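The selection mechanism described above — a build-time constant that an environment variable with the same name can override — reduces to a small lookup. A sketch (the helper name and the default value are illustrative assumptions; the real wiring lives in webpack's config):

```python
import os

def pick_network_stack(env=None, default="go"):
    """Resolve the network stack at build time: env var wins, else default.

    NETWORK_STACK mirrors the variable named in the issue; webpack would
    bake the resolved value into the bundle as a constant.
    """
    env = os.environ if env is None else env
    return env.get("NETWORK_STACK", default)
```

Setting the env var when calling the `release_{linux,windows}` action is then enough to flip the stack per platform without a code change.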
9,890
12,890,144,523
IssuesEvent
2020-07-13 15:32:51
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Making node_process and part of bootstrapper.cc an internalBinding
discuss process
Right now some bindings used to setup the process object are placed in bootstrapper.cc and are put onto a big object during bootstrap, then passed into other smaller `lib/internal/process/something.js` modules for further setup: https://github.com/nodejs/node/blob/b416dafb87e50b66479a7a73970a930f1c7dcada/src/bootstrapper.cc#L133-L168 Note that not all the definitions of these methods are in `bootstrapper.cc`, some of those are placed in `node_process.cc` with declarations in `node_internals.h`, so for example, to bootstrap `process.setgid`, on the C++ side: ```c++ // 1. node_internals.h: declaration void SetGid(const v8::FunctionCallbackInfo<v8::Value>& args); // 2. node_process.cc: definition void SetGid(const FunctionCallbackInfo<Value>& args) { ... } // 3. bootstrapper.cc: SetupBootstrapObject() put it onto the bootstrap object void SetupBootstrapObject(...) { ... BOOTSTRAP_METHOD(_setgid, SetGid); } // 4. node.cc: call SetupBootstrapObject() during bootstrap and pass the object into node.js SetupBootstrapObject(env, bootstrapper); ``` On the JS side: ```js // 1. bootstrap/node.js: get this through a big bootstrap object passed from C++ const { _setgid } = bootstrappers; // 2. bootstrap/node.js: Then it pass this to a method exposed by main_thread_only.js mainThreadSetup.setupProcessMethods(... _setgid ...); // 3. main_thread_only.js: wrap this binding with some validation, and // directly write the method to the process object function setupProcessMethods(_setgid) { if (_setgid !== undefined) { // POSIX setupPosixMethods(..._setgid..); } } function setupPosixMethods (..._setgid..) { process.setgid = function setgid(id) { return execId(id, 'Group', _setgid); }; } ``` There are several problems with the current setup: 1. The code involved in bootstrapping these small methods span across too many files which make them hard to track down 2. 
We write to the `process` object in a separate file, doing this everywhere makes it difficult to figure out the order of write access to the `process` object and the state of it, which creates difficulty for the v8 snapshot effort 3. Methods like `process.setgid` is not that frequently used, we don't really need these code to be in the highlight of the bootstrap process I propose we refactor these process methods to this setup: ```c++ // 1. Remove declaration in node_internals.h and declaration bootstrapper.cc // 2. Make node_process.cc a binding available through `internalBinding('process')`, // then SetGid would be available to JS land as `internalBinding('process').setgid` // node_process.cc: contains both definition and initialization void SetGid(const FunctionCallbackInfo<Value>& args) { ... } void Initialize(...) { env->SetMethod(target, "setgid", SetGid); } NODE_MODULE_CONTEXT_AWARE_INTERNAL(process, node::process::Initialize) ``` In JS ```js // 1. main_thread_only.js: return an implementation of setgid in a side-effect free manner, and load // the binding directly in this file const binding = internalBinding('process'); exports.hasPosixCredentials = !!binding.setgid; if (exports.hasPosixCredentials) { // TODO: this can be passed into the bootstrap script directly exports.setgid = function setgid(id) { return execId(id, 'Group', binding.setgid); }; } // 2. bootstrap/node.js: re-export the implementation to process by writing to the process object if (isMainThread) { const mainThreadSetup = NativeModule.require('internal/process/main_thread_only'); if (mainThreadSetup.hasPosixCredentials) { process.setgid = mainThreadSetup.setgid; } } ``` This way, the number of files involved in the implementation is reduced from 4(C++) + 2(JS) to 1(C++) + 2(JS), and the write to the process object are centralized in `bootstrap/node.js` so it's easier to figure out the state of the process object during the bootstrap. WDYT? cc @nodejs/process @addaleax @jasnell @devsnek
1.0
Making node_process and part of bootstrapper.cc an internalBinding - Right now some bindings used to setup the process object are placed in bootstrapper.cc and are put onto a big object during bootstrap, then passed into other smaller `lib/internal/process/something.js` modules for further setup: https://github.com/nodejs/node/blob/b416dafb87e50b66479a7a73970a930f1c7dcada/src/bootstrapper.cc#L133-L168 Note that not all the definitions of these methods are in `bootstrapper.cc`, some of those are placed in `node_process.cc` with declarations in `node_internals.h`, so for example, to bootstrap `process.setgid`, on the C++ side: ```c++ // 1. node_internals.h: declaration void SetGid(const v8::FunctionCallbackInfo<v8::Value>& args); // 2. node_process.cc: definition void SetGid(const FunctionCallbackInfo<Value>& args) { ... } // 3. bootstrapper.cc: SetupBootstrapObject() put it onto the bootstrap object void SetupBootstrapObject(...) { ... BOOTSTRAP_METHOD(_setgid, SetGid); } // 4. node.cc: call SetupBootstrapObject() during bootstrap and pass the object into node.js SetupBootstrapObject(env, bootstrapper); ``` On the JS side: ```js // 1. bootstrap/node.js: get this through a big bootstrap object passed from C++ const { _setgid } = bootstrappers; // 2. bootstrap/node.js: Then it pass this to a method exposed by main_thread_only.js mainThreadSetup.setupProcessMethods(... _setgid ...); // 3. main_thread_only.js: wrap this binding with some validation, and // directly write the method to the process object function setupProcessMethods(_setgid) { if (_setgid !== undefined) { // POSIX setupPosixMethods(..._setgid..); } } function setupPosixMethods (..._setgid..) { process.setgid = function setgid(id) { return execId(id, 'Group', _setgid); }; } ``` There are several problems with the current setup: 1. The code involved in bootstrapping these small methods span across too many files which make them hard to track down 2. 
We write to the `process` object in a separate file, doing this everywhere makes it difficult to figure out the order of write access to the `process` object and the state of it, which creates difficulty for the v8 snapshot effort 3. Methods like `process.setgid` is not that frequently used, we don't really need these code to be in the highlight of the bootstrap process I propose we refactor these process methods to this setup: ```c++ // 1. Remove declaration in node_internals.h and declaration bootstrapper.cc // 2. Make node_process.cc a binding available through `internalBinding('process')`, // then SetGid would be available to JS land as `internalBinding('process').setgid` // node_process.cc: contains both definition and initialization void SetGid(const FunctionCallbackInfo<Value>& args) { ... } void Initialize(...) { env->SetMethod(target, "setgid", SetGid); } NODE_MODULE_CONTEXT_AWARE_INTERNAL(process, node::process::Initialize) ``` In JS ```js // 1. main_thread_only.js: return an implementation of setgid in a side-effect free manner, and load // the binding directly in this file const binding = internalBinding('process'); exports.hasPosixCredentials = !!binding.setgid; if (exports.hasPosixCredentials) { // TODO: this can be passed into the bootstrap script directly exports.setgid = function setgid(id) { return execId(id, 'Group', binding.setgid); }; } // 2. bootstrap/node.js: re-export the implementation to process by writing to the process object if (isMainThread) { const mainThreadSetup = NativeModule.require('internal/process/main_thread_only'); if (mainThreadSetup.hasPosixCredentials) { process.setgid = mainThreadSetup.setgid; } } ``` This way, the number of files involved in the implementation is reduced from 4(C++) + 2(JS) to 1(C++) + 2(JS), and the write to the process object are centralized in `bootstrap/node.js` so it's easier to figure out the state of the process object during the bootstrap. WDYT? cc @nodejs/process @addaleax @jasnell @devsnek
process
making node process and part of bootstrapper cc an internalbinding right now some bindings used to setup the process object are placed in bootstrapper cc and are put onto a big object during bootstrap then passed into other smaller lib internal process something js modules for further setup note that not all the definitions of these methods are in bootstrapper cc some of those are placed in node process cc with declarations in node internals h so for example to bootstrap process setgid on the c side c node internals h declaration void setgid const functioncallbackinfo args node process cc definition void setgid const functioncallbackinfo args bootstrapper cc setupbootstrapobject put it onto the bootstrap object void setupbootstrapobject bootstrap method setgid setgid node cc call setupbootstrapobject during bootstrap and pass the object into node js setupbootstrapobject env bootstrapper on the js side js bootstrap node js get this through a big bootstrap object passed from c const setgid bootstrappers bootstrap node js then it pass this to a method exposed by main thread only js mainthreadsetup setupprocessmethods setgid main thread only js wrap this binding with some validation and directly write the method to the process object function setupprocessmethods setgid if setgid undefined posix setupposixmethods setgid function setupposixmethods setgid process setgid function setgid id return execid id group setgid there are several problems with the current setup the code involved in bootstrapping these small methods span across too many files which make them hard to track down we write to the process object in a separate file doing this everywhere makes it difficult to figure out the order of write access to the process object and the state of it which creates difficulty for the snapshot effort methods like process setgid is not that frequently used we don t really need these code to be in the highlight of the bootstrap process i propose we refactor these process 
methods to this setup c remove declaration in node internals h and declaration bootstrapper cc make node process cc a binding available through internalbinding process then setgid would be available to js land as internalbinding process setgid node process cc contains both definition and initialization void setgid const functioncallbackinfo args void initialize env setmethod target setgid setgid node module context aware internal process node process initialize in js js main thread only js return an implementation of setgid in a side effect free manner and load the binding directly in this file const binding internalbinding process exports hasposixcredentials binding setgid if exports hasposixcredentials todo this can be passed into the bootstrap script directly exports setgid function setgid id return execid id group binding setgid bootstrap node js re export the implementation to process by writing to the process object if ismainthread const mainthreadsetup nativemodule require internal process main thread only if mainthreadsetup hasposixcredentials process setgid mainthreadsetup setgid this way the number of files involved in the implementation is reduced from c js to c js and the write to the process object are centralized in bootstrap node js so it s easier to figure out the state of the process object during the bootstrap wdyt cc nodejs process addaleax jasnell devsnek
1
16,788
22,035,257,074
IssuesEvent
2022-05-28 13:19:40
GoldenGnu/jeveassets
https://api.github.com/repos/GoldenGnu/jeveassets
closed
Reprocessed Value is not station relevant
bug done reprocessing
Hi, I think the calculation of the "Price if asset was reprocessed" in column "Reprocessed" is wrong. The reprocessing value does not depend on the Station (except for ore). In the Settings you can set a "Station Refining Equipment" of 50% or a custom value. However, this value only has an impact on reprocessing ore. For reprocessing of an item like Metal Scraps only the Scrapmetal Processing Skill is relevant. But when I change the "Station Refining Equipment" the value for the reprocessing column changes. Greetings Klaus
1.0
Reprocessed Value is not station relevant - Hi, I think the calculation of the "Price if asset was reprocessed" in column "Reprocessed" is wrong. The reprocessing value does not depend on the Station (except for ore). In the Settings you can set a "Station Refining Equipment" of 50% or a custom value. However, this value only has an impact on reprocessing ore. For reprocessing of an item like Metal Scraps only the Scrapmetal Processing Skill is relevant. But when I change the "Station Refining Equipment" the value for the reprocessing column changes. Greetings Klaus
process
reprocessed value is not station relevant hi i think the calculation of the price if asset was reprocessed in column reprocessed is wrong the reprocessing value does not depend on the station except for ore in the settings you can set a station refining equipment of or a custom value however this value only has an impact on reprocessing ore for reprocessing of an item like metal scraps only the scrapmetal processing skill is relevant but when i change the station refining equipment the value for the reprocessing column changes greetings klaus
1
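The distinction the reporter makes — station equipment should affect ore yield but never item ("scrapmetal") yield — can be stated as two functions with different inputs. The yield formulas below are simplified illustrative assumptions, not EVE's exact math:

```python
def ore_yield(station_equipment, reprocessing_skill):
    """Ore reprocessing: scales with BOTH station equipment and skill."""
    return station_equipment * (1 + 0.03 * reprocessing_skill)

def scrapmetal_yield(scrapmetal_skill, base=0.5):
    """Item ('scrapmetal') reprocessing: depends on the skill only.

    Changing station equipment must not change this value, which is
    exactly the invariant the bug report says the Reprocessed column
    violates.
    """
    return base * (1 + 0.02 * scrapmetal_skill)
```

Structurally, `scrapmetal_yield` takes no station parameter at all, so a "Station Refining Equipment" change cannot leak into it.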
438,997
30,672,941,760
IssuesEvent
2023-07-26 01:06:34
VocaDB/vocadb
https://api.github.com/repos/VocaDB/vocadb
closed
React conversion
complexity: high frontend documentation React
## Tasks - [x] #862 - [x] #901 - [x] #1133 - [x] #902 - [x] #906 - [x] #909 - [x] #1000 - [ ] #1067 - [ ] #853 - [ ] #1169 - [ ] #1170 ## Motivation Currently we are facing several issues with frontend development. It's impossible to find JavaScript errors in cshtml files at compile-time, rather than at run-time. JavaScript code is scattered around ts and cshtml files, which makes code splitting and dynamic imports difficult. Now that we are using Webpack through Laravel Mix (#5 and #540, powered by [ReMikus](https://github.com/VocaDB/ReMikus)), we can easily make use of [code splitting in React](https://reactjs.org/docs/code-splitting.html). See also #872. The jQuery and jQuery UI packages cannot be upgraded easily, because there seem to be some breaking changes. This is why #675 and #679 cannot be merged. It requires boilerplate code to translate user interfaces. In the current implementation, we need to 1. create/edit a resx file 1. update ResourcesApiController if needed 1. create a ResourceRepository and pass it to a viewmodel 1. create a ResourcesManager with the repository and current UI culture. 1. load resources , which is a lot of pain. By using [i18next](https://github.com/i18next/i18next) and [react-i18next](https://github.com/i18next/react-i18next), we can make this process a lot easier. We have two different libraries for the same thing, for example marked and MarkdownSharp (#38 and #904). ## Why React? There are several reasons why I chose React, but the main reasons are that 1. React is developed and maintained by Facebook and a community of individual developers and companies. 1. React is used by [DeviantArt](https://www.deviantart.com/), [Discord](https://discord.com/), [Mastodon](https://joinmastodon.org/) and [MusicBrainz](https://musicbrainz.org/). 1. TypeScript supports JSX. 1. I like the React way of thinking (one-way data flow, function components, JSX and etc.). 1. 
There is a way to [integrate with other libraries](https://reactjs.org/docs/integrating-with-other-libraries.html). 1. Using create-react-app is optional. 1. We are still using customized [Bootstrap 2](https://getbootstrap.com/2.3.2/components.html#alerts) and this could be replaced with [react-bootstrap](https://github.com/react-bootstrap/react-bootstrap). I decided to use MobX, not Redux, for state management. This is because MobX API is surprisingly similar to Knockout's. The conversion shouldn't be that hard, although there are subtle differences in behavior between them (e.g. `subscribe` and `reaction`). Of course this would be nearly impossible to do all at once, it has to be done gradually. ## Examples Before: ```html <div class="label label-info" data-bind="click: function() { $parent.advancedFilters.filters.remove($data); }"> ``` Who is the `$parent` in this context? After: ```tsx <div className="label label-info" onClick={(): void => advancedFilters.remove(filter)} > ```
1.0
React conversion - ## Tasks - [x] #862 - [x] #901 - [x] #1133 - [x] #902 - [x] #906 - [x] #909 - [x] #1000 - [ ] #1067 - [ ] #853 - [ ] #1169 - [ ] #1170 ## Motivation Currently we are facing several issues with frontend development. It's impossible to find JavaScript errors in cshtml files at compile-time, rather than at run-time. JavaScript code is scattered around ts and cshtml files, which makes code splitting and dynamic imports difficult. Now that we are using Webpack through Laravel Mix (#5 and #540, powered by [ReMikus](https://github.com/VocaDB/ReMikus)), we can easily make use of [code splitting in React](https://reactjs.org/docs/code-splitting.html). See also #872. The jQuery and jQuery UI packages cannot be upgraded easily, because there seem to be some breaking changes. This is why #675 and #679 cannot be merged. It requires boilerplate code to translate user interfaces. In the current implementation, we need to 1. create/edit a resx file 1. update ResourcesApiController if needed 1. create a ResourceRepository and pass it to a viewmodel 1. create a ResourcesManager with the repository and current UI culture. 1. load resources , which is a lot of pain. By using [i18next](https://github.com/i18next/i18next) and [react-i18next](https://github.com/i18next/react-i18next), we can make this process a lot easier. We have two different libraries for the same thing, for example marked and MarkdownSharp (#38 and #904). ## Why React? There are several reasons why I chose React, but the main reasons are that 1. React is developed and maintained by Facebook and a community of individual developers and companies. 1. React is used by [DeviantArt](https://www.deviantart.com/), [Discord](https://discord.com/), [Mastodon](https://joinmastodon.org/) and [MusicBrainz](https://musicbrainz.org/). 1. TypeScript supports JSX. 1. I like the React way of thinking (one-way data flow, function components, JSX and etc.). 1. 
There is a way to [integrate with other libraries](https://reactjs.org/docs/integrating-with-other-libraries.html). 1. Using create-react-app is optional. 1. We are still using customized [Bootstrap 2](https://getbootstrap.com/2.3.2/components.html#alerts) and this could be replaced with [react-bootstrap](https://github.com/react-bootstrap/react-bootstrap). I decided to use MobX, not Redux, for state management. This is because MobX API is surprisingly similar to Knockout's. The conversion shouldn't be that hard, although there are subtle differences in behavior between them (e.g. `subscribe` and `reaction`). Of course this would be nearly impossible to do all at once, it has to be done gradually. ## Examples Before: ```html <div class="label label-info" data-bind="click: function() { $parent.advancedFilters.filters.remove($data); }"> ``` Who is the `$parent` in this context? After: ```tsx <div className="label label-info" onClick={(): void => advancedFilters.remove(filter)} > ```
non_process
react conversion tasks motivation currently we are facing several issues with frontend development it s impossible to find javascript errors in cshtml files at compile time rather than at run time javascript code is scattered around ts and cshtml files which makes code splitting and dynamic imports difficult now that we are using webpack through laravel mix and powered by we can easily make use of see also the jquery and jquery ui packages cannot be upgraded easily because there seem to be some breaking changes this is why and cannot be merged it requires boilerplate code to translate user interfaces in the current implementation we need to create edit a resx file update resourcesapicontroller if needed create a resourcerepository and pass it to a viewmodel create a resourcesmanager with the repository and current ui culture load resources which is a lot of pain by using and we can make this process a lot easier we have two different libraries for the same thing for example marked and markdownsharp and why react there are several reasons why i chose react but the main reasons are that react is developed and maintained by facebook and a community of individual developers and companies react is used by and typescript supports jsx i like the react way of thinking one way data flow function components jsx and etc there is a way to using create react app is optional we are still using customized and this could be replaced with i decided to use mobx not redux for state management this is because mobx api is surprisingly similar to knockout s the conversion shouldn t be that hard although there are subtle differences in behavior between them e g subscribe and reaction of course this would be nearly impossible to do all at once it has to be done gradually examples before html who is the parent in this context after tsx div classname label label info onclick void advancedfilters remove filter
0
1,979
4,805,290,986
IssuesEvent
2016-11-02 15:42:11
AllenFang/react-bootstrap-table
https://api.github.com/repos/AllenFang/react-bootstrap-table
closed
Remove tooltip on hover on table header?
help wanted inprocess
When you mouseover on table header/column name, it shows a tooltip showing the column header name. To me its redundant. How can I disable it? Thanks.
1.0
Remove tooltip on hover on table header? - When you mouseover on table header/column name, it shows a tooltip showing the column header name. To me its redundant. How can I disable it? Thanks.
process
remove tooltip on hover on table header when you mouseover on table header column name it shows a tooltip showing the column header name to me its redundant how can i disable it thanks
1
142,219
19,074,371,654
IssuesEvent
2021-11-27 13:52:31
Joystream/joystream
https://api.github.com/repos/Joystream/joystream
closed
Guard input size to extrinsics
help wanted good first issue security
Very recently, using the version store to write a huge transaction, blocked the block producing validator from moving forward. There is a general problem here of how to constrain size of inputs prior to input validation in extrinsics, and it needs to be solved to not expose nodes to DoS attacks.
True
Guard input size to extrinsics - Very recently, using the version store to write a huge transaction, blocked the block producing validator from moving forward. There is a general problem here of how to constrain size of inputs prior to input validation in extrinsics, and it needs to be solved to not expose nodes to DoS attacks.
non_process
guard input size to extrinsics very recently using the version store to write a huge transaction blocked the block producing validator from moving forward there is a general problem here of how to constrain size of inputs prior to input validation in extrinsics and it needs to be solved to not expose nodes to dos attacks
0
988
3,452,613,781
IssuesEvent
2015-12-17 06:05:40
t3kt/vjzual2
https://api.github.com/repos/t3kt/vjzual2
closed
use angle and distance instead of x/y translate in transform module
enhancement video processing
probably support both with an option to switch between modes
1.0
use angle and distance instead of x/y translate in transform module - probably support both with an option to switch between modes
process
use angle and distance instead of x y translate in transform module probably support both with an option to switch between modes
1
12,213
14,742,936,567
IssuesEvent
2021-01-07 13:08:39
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
opened
SAB Report - Terminated Usage Accounts | Parent:1585
anc-process anc-report anp-1 ant-child/secondary ant-enhancement grt-reports pl-foran
In GitLab by @kdjstudios on Jun 26, 2019, 08:59 **Submitted by:** Gary Pudles <lgpudles@aol.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-25-41119 **Server:** Internal **Client/Site:** NA **Account:** NA **Issue:** Cori wrote: > I’d like to request a new SAB report that would be part of the billing process. > > I’d like a report that shows all Accounts that have usage but are terminated in SAB. A couple of things have come up lately where we’ve missed invoicing clients. The Accounts on the Exception report only include those that aren’t in SAB at all. Wanted to run this past you before putting in the request. Thanks Gary wrote: > I am copying cori's request because I would think that this could be seen in the RPM report. Do we report time but no money on terminated accounts on the RPM report?
1.0
SAB Report - Terminated Usage Accounts | Parent:1585 - In GitLab by @kdjstudios on Jun 26, 2019, 08:59 **Submitted by:** Gary Pudles <lgpudles@aol.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-25-41119 **Server:** Internal **Client/Site:** NA **Account:** NA **Issue:** Cori wrote: > I’d like to request a new SAB report that would be part of the billing process. > > I’d like a report that shows all Accounts that have usage but are terminated in SAB. A couple of things have come up lately where we’ve missed invoicing clients. The Accounts on the Exception report only include those that aren’t in SAB at all. Wanted to run this past you before putting in the request. Thanks Gary wrote: > I am copying cori's request because I would think that this could be seen in the RPM report. Do we report time but no money on terminated accounts on the RPM report?
process
sab report terminated usage accounts parent in gitlab by kdjstudios on jun submitted by gary pudles helpdesk server internal client site na account na issue cori wrote i’d like to request a new sab report that would be part of the billing process i’d like a report that shows all accounts that have usage but are terminated in sab a couple of things have come up lately where we’ve missed invoicing clients the accounts on the exception report only include those that aren’t in sab at all wanted to run this past you before putting in the request thanks gary wrote i am copying cori s request because i would think that this could be seen in the rpm report do we report time but no money on terminated accounts on the rpm report
1
9,810
7,835,410,455
IssuesEvent
2018-06-17 05:15:27
Cryptonomic/ConseilJS
https://api.github.com/repos/Cryptonomic/ConseilJS
opened
Enforce wallet password strength
enhancement security_audit
The password supplied for creating wallets must be at least eight characters long.
True
Enforce wallet password strength - The password supplied for creating wallets must be at least eight characters long.
non_process
enforce wallet password strength the password supplied for creating wallets must be at least eight characters long
0
11,501
14,380,343,568
IssuesEvent
2020-12-02 02:34:05
KevCor99/4a
https://api.github.com/repos/KevCor99/4a
closed
file_size_estimating_template
process-dashboard
-llenado de template de estimacion de lineas de coodigo en process dashboard - correr el PROBE wizard
1.0
file_size_estimating_template - -llenado de template de estimacion de lineas de coodigo en process dashboard - correr el PROBE wizard
process
file size estimating template llenado de template de estimacion de lineas de coodigo en process dashboard correr el probe wizard
1
187,318
14,427,551,302
IssuesEvent
2020-12-06 04:45:40
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
pbolla0818/oci_terraform: oci/apigateway_api_test.go; 16 LoC
fresh small test
Found a possible issue in [pbolla0818/oci_terraform](https://www.github.com/pbolla0818/oci_terraform) at [oci/apigateway_api_test.go](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/apigateway_api_test.go#L277-L292) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to apiId is reassigned at line 281 [Click here to see the code in its original context.](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/apigateway_api_test.go#L277-L292) <details> <summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary> ```go for _, apiId := range apiIds { if ok := SweeperDefaultResourceId[apiId]; !ok { deleteApiRequest := oci_apigateway.DeleteApiRequest{} deleteApiRequest.ApiId = &apiId deleteApiRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "apigateway") _, error := apiGatewayClient.DeleteApi(context.Background(), deleteApiRequest) if error != nil { fmt.Printf("Error deleting Api %s %s, It is possible that the resource is already deleted. Please verify manually \n", apiId, error) continue } waitTillCondition(testAccProvider, &apiId, apiSweepWaitCondition, time.Duration(3*time.Minute), apiSweepResponseFetchOperation, "apigateway", true) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: c233d54c5fe32f12c234d6dceefba0a9b4ab3022
1.0
pbolla0818/oci_terraform: oci/apigateway_api_test.go; 16 LoC - Found a possible issue in [pbolla0818/oci_terraform](https://www.github.com/pbolla0818/oci_terraform) at [oci/apigateway_api_test.go](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/apigateway_api_test.go#L277-L292) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to apiId is reassigned at line 281 [Click here to see the code in its original context.](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/apigateway_api_test.go#L277-L292) <details> <summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary> ```go for _, apiId := range apiIds { if ok := SweeperDefaultResourceId[apiId]; !ok { deleteApiRequest := oci_apigateway.DeleteApiRequest{} deleteApiRequest.ApiId = &apiId deleteApiRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "apigateway") _, error := apiGatewayClient.DeleteApi(context.Background(), deleteApiRequest) if error != nil { fmt.Printf("Error deleting Api %s %s, It is possible that the resource is already deleted. Please verify manually \n", apiId, error) continue } waitTillCondition(testAccProvider, &apiId, apiSweepWaitCondition, time.Duration(3*time.Minute), apiSweepResponseFetchOperation, "apigateway", true) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: c233d54c5fe32f12c234d6dceefba0a9b4ab3022
non_process
oci terraform oci apigateway api test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to apiid is reassigned at line click here to show the line s of go which triggered the analyzer go for apiid range apiids if ok sweeperdefaultresourceid ok deleteapirequest oci apigateway deleteapirequest deleteapirequest apiid apiid deleteapirequest requestmetadata retrypolicy getretrypolicy true apigateway error apigatewayclient deleteapi context background deleteapirequest if error nil fmt printf error deleting api s s it is possible that the resource is already deleted please verify manually n apiid error continue waittillcondition testaccprovider apiid apisweepwaitcondition time duration time minute apisweepresponsefetchoperation apigateway true leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
58,668
14,447,193,859
IssuesEvent
2020-12-08 03:10:02
rust-skia/rust-skia
https://api.github.com/repos/rust-skia/rust-skia
closed
Link error on windows
bug build
I tried to build on windows and hit the linker error below. I tried several version and it seems to happen from 0.34.0 onward (0.33.0 builds for me). Any ideas? ``` error: linking with `link.exe` failed: exit code: 1120 = note: Creating library C:\dev\rust\skulpin\target\debug\examples\sdl2_renderer_only.lib and object C:\dev\rust\skulpin\target\debug\examples\sdl2_renderer_only.exp skia.lib(icu.SkLoadICU.obj) : error LNK2019: unresolved external symbol __imp___std_init_once_begin_initialize referenced in function "bool __cdecl SkLoadICU(void)" (?SkLoadICU@@YA_NXZ) skia.lib(icu.umutex.obj) : error LNK2001: unresolved external symbol __imp___std_init_once_begin_initialize skia.lib(icu.SkLoadICU.obj) : error LNK2019: unresolved external symbol __imp___std_init_once_complete referenced in function "bool __cdecl SkLoadICU(void)" (?SkLoadICU@@YA_NXZ) skia.lib(icu.umutex.obj) : error LNK2001: unresolved external symbol __imp___std_init_once_complete C:\dev\rust\skulpin\target\debug\examples\sdl2_renderer_only.exe : fatal error LNK1120: 2 unresolved externals ```
1.0
Link error on windows - I tried to build on windows and hit the linker error below. I tried several version and it seems to happen from 0.34.0 onward (0.33.0 builds for me). Any ideas? ``` error: linking with `link.exe` failed: exit code: 1120 = note: Creating library C:\dev\rust\skulpin\target\debug\examples\sdl2_renderer_only.lib and object C:\dev\rust\skulpin\target\debug\examples\sdl2_renderer_only.exp skia.lib(icu.SkLoadICU.obj) : error LNK2019: unresolved external symbol __imp___std_init_once_begin_initialize referenced in function "bool __cdecl SkLoadICU(void)" (?SkLoadICU@@YA_NXZ) skia.lib(icu.umutex.obj) : error LNK2001: unresolved external symbol __imp___std_init_once_begin_initialize skia.lib(icu.SkLoadICU.obj) : error LNK2019: unresolved external symbol __imp___std_init_once_complete referenced in function "bool __cdecl SkLoadICU(void)" (?SkLoadICU@@YA_NXZ) skia.lib(icu.umutex.obj) : error LNK2001: unresolved external symbol __imp___std_init_once_complete C:\dev\rust\skulpin\target\debug\examples\sdl2_renderer_only.exe : fatal error LNK1120: 2 unresolved externals ```
non_process
link error on windows i tried to build on windows and hit the linker error below i tried several version and it seems to happen from onward builds for me any ideas error linking with link exe failed exit code note creating library c dev rust skulpin target debug examples renderer only lib and object c dev rust skulpin target debug examples renderer only exp skia lib icu skloadicu obj error unresolved external symbol imp std init once begin initialize referenced in function bool cdecl skloadicu void skloadicu ya nxz skia lib icu umutex obj error unresolved external symbol imp std init once begin initialize skia lib icu skloadicu obj error unresolved external symbol imp std init once complete referenced in function bool cdecl skloadicu void skloadicu ya nxz skia lib icu umutex obj error unresolved external symbol imp std init once complete c dev rust skulpin target debug examples renderer only exe fatal error unresolved externals
0
6,987
10,133,964,398
IssuesEvent
2019-08-02 05:54:22
TheIndexingProject/contratospr-api
https://api.github.com/repos/TheIndexingProject/contratospr-api
opened
Importar data completa despues de rediseño de pagina de contratistas y entidades
internal-process
Una vez el trabajo de rediseño en [contratospr](https://github.com/TheIndexingProject/contratospr), necesitamos importar la data en su totalidad.
1.0
Importar data completa despues de rediseño de pagina de contratistas y entidades - Una vez el trabajo de rediseño en [contratospr](https://github.com/TheIndexingProject/contratospr), necesitamos importar la data en su totalidad.
process
importar data completa despues de rediseño de pagina de contratistas y entidades una vez el trabajo de rediseño en necesitamos importar la data en su totalidad
1
81,159
15,603,129,064
IssuesEvent
2021-03-19 01:14:58
billdiamond/cbapi-installer
https://api.github.com/repos/billdiamond/cbapi-installer
opened
CVE-2021-27291 (Medium) detected in multiple libraries
security vulnerability
## CVE-2021-27291 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>Pygments-2.4.2.tar.gz</b>, <b>Pygments-2.4.2-py2.py3-none-any.whl</b>, <b>Pygments-2.4.0-py2.py3-none-any.whl</b></p></summary> <p> <details><summary><b>Pygments-2.4.2.tar.gz</b></p></summary> <p>Pygments is a syntax highlighting package written in Python.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/7e/ae/26808275fc76bf2832deb10d3a3ed3107bc4de01b85dcccbe525f2cd6d1e/Pygments-2.4.2.tar.gz">https://files.pythonhosted.org/packages/7e/ae/26808275fc76bf2832deb10d3a3ed3107bc4de01b85dcccbe525f2cd6d1e/Pygments-2.4.2.tar.gz</a></p> <p>Path to vulnerable library: cbapi-installer/Pygments-2.4.2.tar.gz</p> <p> Dependency Hierarchy: - :x: **Pygments-2.4.2.tar.gz** (Vulnerable Library) </details> <details><summary><b>Pygments-2.4.2-py2.py3-none-any.whl</b></p></summary> <p>Pygments is a syntax highlighting package written in Python.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/5c/73/1dfa428150e3ccb0fa3e68db406e5be48698f2a979ccbcec795f28f44048/Pygments-2.4.2-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/5c/73/1dfa428150e3ccb0fa3e68db406e5be48698f2a979ccbcec795f28f44048/Pygments-2.4.2-py2.py3-none-any.whl</a></p> <p>Path to vulnerable library: cbapi-installer/Pygments-2.4.2-py2.py3-none-any.whl</p> <p> Dependency Hierarchy: - :x: **Pygments-2.4.2-py2.py3-none-any.whl** (Vulnerable Library) </details> <details><summary><b>Pygments-2.4.0-py2.py3-none-any.whl</b></p></summary> <p>Pygments is a syntax highlighting package written in Python.</p> <p>Library home page: <a 
href="https://files.pythonhosted.org/packages/6e/00/c5cb5fc7c047da4af049005d0146b3a961b1a25d9cefbbe24bf0882a11ad/Pygments-2.4.0-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/6e/00/c5cb5fc7c047da4af049005d0146b3a961b1a25d9cefbbe24bf0882a11ad/Pygments-2.4.0-py2.py3-none-any.whl</a></p> <p>Path to dependency file: cbapi-installer/requests-2.22.0.tar/requests-2.22.0/Pipfile</p> <p>Path to vulnerable library: cbapi-installer/requests-2.22.0.tar/requests-2.22.0/Pipfile</p> <p> Dependency Hierarchy: - readme_renderer-24.0-py2.py3-none-any.whl (Root Library) - :x: **Pygments-2.4.0-py2.py3-none-any.whl** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In pygments 1.1+, fixed in 2.7.4, the lexers used to parse programming languages rely heavily on regular expressions. Some of the regular expressions have exponential or cubic worst-case complexity and are vulnerable to ReDoS. By crafting malicious input, an attacker can cause a denial of service. <p>Publish Date: 2021-03-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27291>CVE-2021-27291</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/pygments/pygments/releases/tag/2.7.4">https://github.com/pygments/pygments/releases/tag/2.7.4</a></p> <p>Release Date: 2021-03-17</p> <p>Fix Resolution: Pygments - 2.7.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-27291 (Medium) detected in multiple libraries - ## CVE-2021-27291 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>Pygments-2.4.2.tar.gz</b>, <b>Pygments-2.4.2-py2.py3-none-any.whl</b>, <b>Pygments-2.4.0-py2.py3-none-any.whl</b></p></summary> <p> <details><summary><b>Pygments-2.4.2.tar.gz</b></p></summary> <p>Pygments is a syntax highlighting package written in Python.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/7e/ae/26808275fc76bf2832deb10d3a3ed3107bc4de01b85dcccbe525f2cd6d1e/Pygments-2.4.2.tar.gz">https://files.pythonhosted.org/packages/7e/ae/26808275fc76bf2832deb10d3a3ed3107bc4de01b85dcccbe525f2cd6d1e/Pygments-2.4.2.tar.gz</a></p> <p>Path to vulnerable library: cbapi-installer/Pygments-2.4.2.tar.gz</p> <p> Dependency Hierarchy: - :x: **Pygments-2.4.2.tar.gz** (Vulnerable Library) </details> <details><summary><b>Pygments-2.4.2-py2.py3-none-any.whl</b></p></summary> <p>Pygments is a syntax highlighting package written in Python.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/5c/73/1dfa428150e3ccb0fa3e68db406e5be48698f2a979ccbcec795f28f44048/Pygments-2.4.2-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/5c/73/1dfa428150e3ccb0fa3e68db406e5be48698f2a979ccbcec795f28f44048/Pygments-2.4.2-py2.py3-none-any.whl</a></p> <p>Path to vulnerable library: cbapi-installer/Pygments-2.4.2-py2.py3-none-any.whl</p> <p> Dependency Hierarchy: - :x: **Pygments-2.4.2-py2.py3-none-any.whl** (Vulnerable Library) </details> <details><summary><b>Pygments-2.4.0-py2.py3-none-any.whl</b></p></summary> <p>Pygments is a syntax highlighting package written in Python.</p> <p>Library home page: <a 
href="https://files.pythonhosted.org/packages/6e/00/c5cb5fc7c047da4af049005d0146b3a961b1a25d9cefbbe24bf0882a11ad/Pygments-2.4.0-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/6e/00/c5cb5fc7c047da4af049005d0146b3a961b1a25d9cefbbe24bf0882a11ad/Pygments-2.4.0-py2.py3-none-any.whl</a></p> <p>Path to dependency file: cbapi-installer/requests-2.22.0.tar/requests-2.22.0/Pipfile</p> <p>Path to vulnerable library: cbapi-installer/requests-2.22.0.tar/requests-2.22.0/Pipfile</p> <p> Dependency Hierarchy: - readme_renderer-24.0-py2.py3-none-any.whl (Root Library) - :x: **Pygments-2.4.0-py2.py3-none-any.whl** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In pygments 1.1+, fixed in 2.7.4, the lexers used to parse programming languages rely heavily on regular expressions. Some of the regular expressions have exponential or cubic worst-case complexity and are vulnerable to ReDoS. By crafting malicious input, an attacker can cause a denial of service. <p>Publish Date: 2021-03-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27291>CVE-2021-27291</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/pygments/pygments/releases/tag/2.7.4">https://github.com/pygments/pygments/releases/tag/2.7.4</a></p> <p>Release Date: 2021-03-17</p> <p>Fix Resolution: Pygments - 2.7.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries pygments tar gz pygments none any whl pygments none any whl pygments tar gz pygments is a syntax highlighting package written in python library home page a href path to vulnerable library cbapi installer pygments tar gz dependency hierarchy x pygments tar gz vulnerable library pygments none any whl pygments is a syntax highlighting package written in python library home page a href path to vulnerable library cbapi installer pygments none any whl dependency hierarchy x pygments none any whl vulnerable library pygments none any whl pygments is a syntax highlighting package written in python library home page a href path to dependency file cbapi installer requests tar requests pipfile path to vulnerable library cbapi installer requests tar requests pipfile dependency hierarchy readme renderer none any whl root library x pygments none any whl vulnerable library vulnerability details in pygments fixed in the lexers used to parse programming languages rely heavily on regular expressions some of the regular expressions have exponential or cubic worst case complexity and are vulnerable to redos by crafting malicious input an attacker can cause a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pygments step up your open source security game with whitesource
0
19,864
26,276,658,022
IssuesEvent
2023-01-06 23:00:48
googleapis/gapic-generator-java
https://api.github.com/repos/googleapis/gapic-generator-java
closed
Warning: a recent release failed
type: process priority: p3
The following release PRs may have failed: * #1067 - The release job was triggered, but has not reported back success. * #1079 - The release job was triggered, but has not reported back success. * #1081 - The release job was triggered, but has not reported back success.
1.0
Warning: a recent release failed - The following release PRs may have failed: * #1067 - The release job was triggered, but has not reported back success. * #1079 - The release job was triggered, but has not reported back success. * #1081 - The release job was triggered, but has not reported back success.
process
warning a recent release failed the following release prs may have failed the release job was triggered but has not reported back success the release job was triggered but has not reported back success the release job was triggered but has not reported back success
1
18,203
24,258,491,940
IssuesEvent
2022-09-27 20:03:55
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Release checklist 0.65
enhancement process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.65.0) - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Publish release ## Integration - [x] Deploy to VM ## Performance - [x] Deploy to Kubernetes - [x] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests - [x] Migrations tested against mainnet clone ## Previewnet - [ ] Deploy to VM ## Staging - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM - [ ] Deploy to ETL ### Alternatives _No response_
1.0
Release checklist 0.65 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.65.0) - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Publish release ## Integration - [x] Deploy to VM ## Performance - [x] Deploy to Kubernetes - [x] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests - [x] Migrations tested against mainnet clone ## Previewnet - [ ] Deploy to VM ## Staging - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM - [ ] Deploy to ETL ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm staging deploy to kubernetes eu deploy to kubernetes na testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
1
10,338
13,166,370,582
IssuesEvent
2020-08-11 08:26:11
arcus-azure/arcus.messaging
https://api.github.com/repos/arcus-azure/arcus.messaging
closed
Change key rotation test to typical rotation flow
area:message-processing area:security enhancement integration:service-bus testing
Change key rotation test to typical rotation flow which is: 1. Message pump starts up which is using primary connection string 2. Rotation process rolls the secondary key 3. Rotation process sets the secondary key in Key Vault 4. Rotation process rolls the primary key which revokes access 5. Message pump detects unauthorized exceptions 6. Message pump gracefully shuts down 7. Message pump starts up with new connection string from vault
1.0
Change key rotation test to typical rotation flow - Change key rotation test to typical rotation flow which is: 1. Message pump starts up which is using primary connection string 2. Rotation process rolls the secondary key 3. Rotation process sets the secondary key in Key Vault 4. Rotation process rolls the primary key which revokes access 5. Message pump detects unauthorized exceptions 6. Message pump gracefully shuts down 7. Message pump starts up with new connection string from vault
process
change key rotation test to typical rotation flow change key rotation test to typical rotation flow which is message pump starts up which is using primary connection string rotation process rolls the secondary key rotation process sets the secondary key in key vault rotation process rolls the primary key which revokes access message pump detects unauthorized exceptions message pump gracefully shuts down message pump starts up with new connection string from vault
1
441,918
30,806,024,741
IssuesEvent
2023-08-01 07:13:21
BacknPacker/large_scale_system_design
https://api.github.com/repos/BacknPacker/large_scale_system_design
closed
1장 사용자 수에 따른 규모 확장성 - 캐시 설계 전략
documentation
책에서 나오는 캐시 전략은 읽기 전략은 Read Through 패턴인거 같습니다. 쓰기 전략은 어떠한 방식인지는 안나오지만 캐시 설계전략에 잘 정리 해놓은 정보가 있어서 남깁니다. 출처 [인파님 블로그](https://inpa.tistory.com/entry/REDIS-📚-캐시Cache-설계-전략-지침-총정리)
1.0
1장 사용자 수에 따른 규모 확장성 - 캐시 설계 전략 - 책에서 나오는 캐시 전략은 읽기 전략은 Read Through 패턴인거 같습니다. 쓰기 전략은 어떠한 방식인지는 안나오지만 캐시 설계전략에 잘 정리 해놓은 정보가 있어서 남깁니다. 출처 [인파님 블로그](https://inpa.tistory.com/entry/REDIS-📚-캐시Cache-설계-전략-지침-총정리)
non_process
사용자 수에 따른 규모 확장성 캐시 설계 전략 책에서 나오는 캐시 전략은 읽기 전략은 read through 패턴인거 같습니다 쓰기 전략은 어떠한 방식인지는 안나오지만 캐시 설계전략에 잘 정리 해놓은 정보가 있어서 남깁니다 출처
0
7,521
10,597,143,456
IssuesEvent
2019-10-09 23:26:18
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
remove "locomotion" parent
multi-species process quick fix
GO:0052371 regulation by organism of entry into other organism involved in symbiotic interaction remove parent GO:0040012 regulation of locomotion Definition Any process that modulates the frequency, rate or extent of locomotion of a cell or organism. I have no idea why it has this parent?
1.0
remove "locomotion" parent - GO:0052371 regulation by organism of entry into other organism involved in symbiotic interaction remove parent GO:0040012 regulation of locomotion Definition Any process that modulates the frequency, rate or extent of locomotion of a cell or organism. I have no idea why it has this parent?
process
remove locomotion parent go regulation by organism of entry into other organism involved in symbiotic interaction remove parent go regulation of locomotion definition any process that modulates the frequency rate or extent of locomotion of a cell or organism i have no idea why it has this parent
1
107,433
11,543,740,326
IssuesEvent
2020-02-18 10:08:56
saros-project/saros
https://api.github.com/repos/saros-project/saros
closed
Skipping documentation builds skips more than expected
Area: Documentation Area: Infrastructure Prio: Medium State: Unconfirmed Type: Bug
We use the travis env. variable `TRAVIS_COMMIT_RANGE` in order to create a git diff. Unfortunately, the variable does not behave as [documented](https://docs.travis-ci.com/user/environment-variables/) (which [already reported](https://github.com/travis-ci/travis-ci/issues/2668)) if the PR is rebased. > TRAVIS_COMMIT_RANGE: The range of commits that were included in the push or pull request. (Note that this is empty for builds triggered by the initial commit of a new branch.) This lead to the error: > fatal: ambiguous argument '9b235d8aa6d6...855515818ecb': unknown revision or path not in the working tree. Use '--' to separate paths from revisions, like this: 'git <command> [<revision>...] -- [<file>...]' Only docs were updated, not running the CI. Therefore we are currently skipping to much builds !!
1.0
Skipping documentation builds skips more than expected - We use the travis env. variable `TRAVIS_COMMIT_RANGE` in order to create a git diff. Unfortunately, the variable does not behave as [documented](https://docs.travis-ci.com/user/environment-variables/) (which [already reported](https://github.com/travis-ci/travis-ci/issues/2668)) if the PR is rebased. > TRAVIS_COMMIT_RANGE: The range of commits that were included in the push or pull request. (Note that this is empty for builds triggered by the initial commit of a new branch.) This lead to the error: > fatal: ambiguous argument '9b235d8aa6d6...855515818ecb': unknown revision or path not in the working tree. Use '--' to separate paths from revisions, like this: 'git <command> [<revision>...] -- [<file>...]' Only docs were updated, not running the CI. Therefore we are currently skipping to much builds !!
non_process
skipping documentation builds skips more than expected we use the travis env variable travis commit range in order to create a git diff unfortunately the variable does not behave as which if the pr is rebased travis commit range the range of commits that were included in the push or pull request note that this is empty for builds triggered by the initial commit of a new branch this lead to the error fatal ambiguous argument unknown revision or path not in the working tree use to separate paths from revisions like this git only docs were updated not running the ci therefore we are currently skipping to much builds
0
790,330
27,822,983,828
IssuesEvent
2023-03-19 13:00:20
DarkGuy10/BotClient
https://api.github.com/repos/DarkGuy10/BotClient
closed
Bug: Channels don't load successfully
Bug: Confirmed Priority: High
### Issue checklist - [X] I have checked the FAQs. - [X] I have searched the existing issues to make sure this isn't a duplicate. - [x] I have discussed this issue on the support server. ### Issue description Steps to reproduce the bug: 1. Go to a server with Thread channels ### Current behavior Channels don't load and an error message appears in the console [1] Error occurred in handler for 'channels': TypeError: Cannot read properties of null (reading 'has') [1] at serializeGuildChannel (.../BotClient/public/serializers/serializeGuildChannel.js:19:4) [1] at /home/user/Downloads/BotClient/public/electron.js:153:3 [1] at Array.map (<anonymous>) [1] at /home/user/Downloads/BotClient/public/electron.js:152:61 [1] at node:electron/js2c/browser_init:189:579 [1] at EventEmitter.<anonymous> (node:electron/js2c/browser_init:161:11093) [1] at EventEmitter.emit (node:events:390:28) ### Expected behavior Non thread channels should display ### BotClient version v0.10.3-alpha ### Operating System Solus Linux ### How did you download the application? Built from source ### Priority this issue should have Medium (should be fixed soon) ### Addition information _No response_
1.0
Bug: Channels don't load successfully - ### Issue checklist - [X] I have checked the FAQs. - [X] I have searched the existing issues to make sure this isn't a duplicate. - [x] I have discussed this issue on the support server. ### Issue description Steps to reproduce the bug: 1. Go to a server with Thread channels ### Current behavior Channels don't load and an error message appears in the console [1] Error occurred in handler for 'channels': TypeError: Cannot read properties of null (reading 'has') [1] at serializeGuildChannel (.../BotClient/public/serializers/serializeGuildChannel.js:19:4) [1] at /home/user/Downloads/BotClient/public/electron.js:153:3 [1] at Array.map (<anonymous>) [1] at /home/user/Downloads/BotClient/public/electron.js:152:61 [1] at node:electron/js2c/browser_init:189:579 [1] at EventEmitter.<anonymous> (node:electron/js2c/browser_init:161:11093) [1] at EventEmitter.emit (node:events:390:28) ### Expected behavior Non thread channels should display ### BotClient version v0.10.3-alpha ### Operating System Solus Linux ### How did you download the application? Built from source ### Priority this issue should have Medium (should be fixed soon) ### Addition information _No response_
non_process
bug channels don t load successfully issue checklist i have checked the faqs i have searched the existing issues to make sure this isn t a duplicate i have discussed this issue on the support server issue description steps to reproduce the bug go to a server with thread channels current behavior channels don t load and an error message appears in the console error occurred in handler for channels typeerror cannot read properties of null reading has at serializeguildchannel botclient public serializers serializeguildchannel js at home user downloads botclient public electron js at array map at home user downloads botclient public electron js at node electron browser init at eventemitter node electron browser init at eventemitter emit node events expected behavior non thread channels should display botclient version alpha operating system solus linux how did you download the application built from source priority this issue should have medium should be fixed soon addition information no response
0
99,297
11,138,104,546
IssuesEvent
2019-12-20 21:19:34
engnogueira/webdjango
https://api.github.com/repos/engnogueira/webdjango
closed
1.5.1 - Backup do Postgresql
documentation
Nessa aula você vai conferir como agendar backups do bando de dados Postgresql no Heroku. [Backup do Postgresql](https://www.python.pro.br/modulos/django/topicos/backup-do-postgresql) Documentação do Heroku: devcenter.heroku.com Heroku PGBackups | Heroku Dev Center Heroku PGBackups allows you to capture backups of your databases using logical backup techniques. Comando para agendar backups diários: heroku pg:backups:schedule DATABASE_URL --at '02:00 America/Sao_Paulo'
1.0
1.5.1 - Backup do Postgresql - Nessa aula você vai conferir como agendar backups do bando de dados Postgresql no Heroku. [Backup do Postgresql](https://www.python.pro.br/modulos/django/topicos/backup-do-postgresql) Documentação do Heroku: devcenter.heroku.com Heroku PGBackups | Heroku Dev Center Heroku PGBackups allows you to capture backups of your databases using logical backup techniques. Comando para agendar backups diários: heroku pg:backups:schedule DATABASE_URL --at '02:00 America/Sao_Paulo'
non_process
backup do postgresql nessa aula você vai conferir como agendar backups do bando de dados postgresql no heroku documentação do heroku devcenter heroku com heroku pgbackups heroku dev center heroku pgbackups allows you to capture backups of your databases using logical backup techniques comando para agendar backups diários heroku pg backups schedule database url at america sao paulo
0
13,950
16,724,568,826
IssuesEvent
2021-06-10 11:26:22
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
`WorkerGlobalScope.importScripts` should be overwritten
AREA: client FREQUENCY: level 1 SYSTEM: URL processing SYSTEM: workers TYPE: bug health-monitor
The [WorkerGlobalScope.importScripts()](https://html.spec.whatwg.org/multipage/workers.html#importing-scripts-and-libraries) method is being used to import scripts into worker global scope. We should handle URLs passed to it, because otherwise imported resources may not be processed by HH. Example in which the imported script should be processed, but it doesn't: ```js require('http') .createServer((req, res) => { if (req.url === '/') { res.writeHead(200, { 'content-type': 'text/html' }); res.end(` <!DOCTYPE html> <head> <link rel="shortcut icon" href="#" /> </head> <body> <script> navigator.serviceWorker.register('sw.js'); </script> </body> `); } else if (req.url === '/sw.js') { res.writeHead(200, { 'content-type': 'application/javascript' }); res.end(` self.addEventListener('fetch', function(event) { console.log('fetch event in sw.js with url', event.request.url); }); importScripts('https://wentwrong.github.io/sw-fetch/one-more-service-worker.js'); `); } else res.destroy(); }) .listen(2020, () => console.log('http://localhost:2020')); ``` This bug was found during analysis of the Health-Monitor run issues on the twitter.com website.
1.0
`WorkerGlobalScope.importScripts` should be overwritten - The [WorkerGlobalScope.importScripts()](https://html.spec.whatwg.org/multipage/workers.html#importing-scripts-and-libraries) method is being used to import scripts into worker global scope. We should handle URLs passed to it, because otherwise imported resources may not be processed by HH. Example in which the imported script should be processed, but it doesn't: ```js require('http') .createServer((req, res) => { if (req.url === '/') { res.writeHead(200, { 'content-type': 'text/html' }); res.end(` <!DOCTYPE html> <head> <link rel="shortcut icon" href="#" /> </head> <body> <script> navigator.serviceWorker.register('sw.js'); </script> </body> `); } else if (req.url === '/sw.js') { res.writeHead(200, { 'content-type': 'application/javascript' }); res.end(` self.addEventListener('fetch', function(event) { console.log('fetch event in sw.js with url', event.request.url); }); importScripts('https://wentwrong.github.io/sw-fetch/one-more-service-worker.js'); `); } else res.destroy(); }) .listen(2020, () => console.log('http://localhost:2020')); ``` This bug was found during analysis of the Health-Monitor run issues on the twitter.com website.
process
workerglobalscope importscripts should be overwritten the method is being used to import scripts into worker global scope we should handle urls passed to it because otherwise imported resources may not be processed by hh example in which the imported script should be processed but it doesn t js require http createserver req res if req url res writehead content type text html res end navigator serviceworker register sw js else if req url sw js res writehead content type application javascript res end self addeventlistener fetch function event console log fetch event in sw js with url event request url importscripts else res destroy listen console log this bug was found during analysis of the health monitor run issues on the twitter com website
1
392,113
11,583,673,219
IssuesEvent
2020-02-22 12:42:03
Kaktushose/levelbot2
https://api.github.com/repos/Kaktushose/levelbot2
closed
Stufenaufstieg + Erfolgreicher Kauf Info: Designanpassung
enhancement low priority member
**Beschreibe deinen Vorschlag** Deine Belohnungs-Information ist noch im alten, etwas unübersichtlichen Design. Anpassung an den neuen Info-Command: ⬆️ Stufenaufstieg! @user Neue Stufe: 🎚 @Stufe Einmalige Belohnung: 💰 / 🛍 x Münzen/Item Nächste Stufe: 🎯 @Stufedanach (ab x XP) ---------------------------------------------------------------------- Anpassung Erfolgreicher Kauf Information: Bitte Emoji vor Tagen adden Dauer: ⏲ Tage
1.0
Stufenaufstieg + Erfolgreicher Kauf Info: Designanpassung - **Beschreibe deinen Vorschlag** Deine Belohnungs-Information ist noch im alten, etwas unübersichtlichen Design. Anpassung an den neuen Info-Command: ⬆️ Stufenaufstieg! @user Neue Stufe: 🎚 @Stufe Einmalige Belohnung: 💰 / 🛍 x Münzen/Item Nächste Stufe: 🎯 @Stufedanach (ab x XP) ---------------------------------------------------------------------- Anpassung Erfolgreicher Kauf Information: Bitte Emoji vor Tagen adden Dauer: ⏲ Tage
non_process
stufenaufstieg erfolgreicher kauf info designanpassung beschreibe deinen vorschlag deine belohnungs information ist noch im alten etwas unübersichtlichen design anpassung an den neuen info command ⬆️ stufenaufstieg user neue stufe 🎚 stufe einmalige belohnung 💰 🛍 x münzen item nächste stufe 🎯 stufedanach ab x xp anpassung erfolgreicher kauf information bitte emoji vor tagen adden dauer ⏲ tage
0
14,459
17,537,327,673
IssuesEvent
2021-08-12 08:03:10
googleapis/gapic-generator-typescript
https://api.github.com/repos/googleapis/gapic-generator-typescript
closed
Generate updated Docker image
priority: p2 type: process
The latest Docker image we've published is from 2020-09-02, almost a year ago. It would be useful to generate a new image, particularly including the DIREGAPIC changes to the generator. This is blocking smoke tests in Showcase, https://github.com/googleapis/gapic-showcase/issues/827.
1.0
Generate updated Docker image - The latest Docker image we've published is from 2020-09-02, almost a year ago. It would be useful to generate a new image, particularly including the DIREGAPIC changes to the generator. This is blocking smoke tests in Showcase, https://github.com/googleapis/gapic-showcase/issues/827.
process
generate updated docker image the latest docker image we ve published is from almost a year ago it would be useful to generate a new image particularly including the diregapic changes to the generator this is blocking smoke tests in showcase
1
8,702
11,842,086,960
IssuesEvent
2020-03-23 22:11:34
pacificclimate/climate-explorer-data-prep
https://api.github.com/repos/pacificclimate/climate-explorer-data-prep
closed
Calculate ffd data
process new data
Plan2adapt displays "frost free days". We have "frost days" data. We've just been calculating `365 - fd` whenever we needed to display a numerical value, whether in the front end or the backend, which is straightforward enough. However, we now want to display frost free days on a map, so we need an actual `ffd` dataset. At a minimum, we need the anusplin 6190 dataset, and the PCIC12 rcp85 2020, 2050, and 2080, though I expect once we have an ffd-making script, we might as well do every dataset we might need.
1.0
Calculate ffd data - Plan2adapt displays "frost free days". We have "frost days" data. We've just been calculating `365 - fd` whenever we needed to display a numerical value, whether in the front end or the backend, which is straightforward enough. However, we now want to display frost free days on a map, so we need an actual `ffd` dataset. At a minimum, we need the anusplin 6190 dataset, and the PCIC12 rcp85 2020, 2050, and 2080, though I expect once we have an ffd-making script, we might as well do every dataset we might need.
process
calculate ffd data displays frost free days we have frost days data we ve just been calculating fd whenever we needed to display a numerical value whether in the front end or the backend which is straightforward enough however we now want to display frost free days on a map so we need an actual ffd dataset at a minimum we need the anusplin dataset and the and though i expect once we have an ffd making script we might as well do every dataset we might need
1
151,842
19,665,434,385
IssuesEvent
2022-01-10 21:53:29
tyhal/crie
https://api.github.com/repos/tyhal/crie
closed
CVE-2016-7103 (Medium) detected in github.com/smartystreets/goconvey-v1.6.4
security vulnerability
## CVE-2016-7103 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/smartystreets/goconvey-v1.6.4</b></p></summary> <p>Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.</p> <p> Dependency Hierarchy: - github.com/errata-ai/vale-v1.7.1 (Root Library) - github.com/go-ini/ini-v1.62.0 - :x: **github.com/smartystreets/goconvey-v1.6.4** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tyhal/crie/commit/304e2783e903eb495b8bc99cd892d467bea7f95a">304e2783e903eb495b8bc99cd892d467bea7f95a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site scripting (XSS) vulnerability in jQuery UI before 1.12.0 might allow remote attackers to inject arbitrary web script or HTML via the closeText parameter of the dialog function. <p>Publish Date: 2017-03-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7103>CVE-2016-7103</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-7103">https://nvd.nist.gov/vuln/detail/CVE-2016-7103</a></p> <p>Release Date: 2017-03-15</p> <p>Fix Resolution: 1.12.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-7103 (Medium) detected in github.com/smartystreets/goconvey-v1.6.4 - ## CVE-2016-7103 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/smartystreets/goconvey-v1.6.4</b></p></summary> <p>Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.</p> <p> Dependency Hierarchy: - github.com/errata-ai/vale-v1.7.1 (Root Library) - github.com/go-ini/ini-v1.62.0 - :x: **github.com/smartystreets/goconvey-v1.6.4** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tyhal/crie/commit/304e2783e903eb495b8bc99cd892d467bea7f95a">304e2783e903eb495b8bc99cd892d467bea7f95a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site scripting (XSS) vulnerability in jQuery UI before 1.12.0 might allow remote attackers to inject arbitrary web script or HTML via the closeText parameter of the dialog function. <p>Publish Date: 2017-03-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7103>CVE-2016-7103</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-7103">https://nvd.nist.gov/vuln/detail/CVE-2016-7103</a></p> <p>Release Date: 2017-03-15</p> <p>Fix Resolution: 1.12.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in github com smartystreets goconvey cve medium severity vulnerability vulnerable library github com smartystreets goconvey go testing in the browser integrates with go test write behavioral tests in go dependency hierarchy github com errata ai vale root library github com go ini ini x github com smartystreets goconvey vulnerable library found in head commit a href vulnerability details cross site scripting xss vulnerability in jquery ui before might allow remote attackers to inject arbitrary web script or html via the closetext parameter of the dialog function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
17,393
23,209,825,446
IssuesEvent
2022-08-02 09:09:51
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Postprocess fail: missing crafting category with Rocket-Silo Construction
mod:pypostprocessing postprocess-fail compatibility
### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? ERROR: Missing crafting category: rsc-stage2 (ingredients: 4, fluids in: 1, fluids out:0), for rsc-construction-stage2 stack traceback: [C]: in function 'error' __pypostprocessing__/prototypes/functions/auto_tech.lua:690: in function 'parse_recipe' __pypostprocessing__/prototypes/functions/auto_tech.lua:1146: in function 'parse_data_raw' __pypostprocessing__/prototypes/functions/auto_tech.lua:2512: in main chunk [C]: in function 'require' __pypostprocessing__/data-final-fixes.lua:136: in main chunk Mods to be disabled: • pypostprocessing (0.1.0) ### Steps to reproduce 1. Enable ByAE Beta 2. Add https://mods.factorio.com/mod/Rocket-Silo-Construction 3. Prof- er, observe crash ### Additional context RSC isn't near and dear to me but maybe this affects other mods as well. ### Log file _No response_
2.0
Postprocess fail: missing crafting category with Rocket-Silo Construction - ### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? ERROR: Missing crafting category: rsc-stage2 (ingredients: 4, fluids in: 1, fluids out:0), for rsc-construction-stage2 stack traceback: [C]: in function 'error' __pypostprocessing__/prototypes/functions/auto_tech.lua:690: in function 'parse_recipe' __pypostprocessing__/prototypes/functions/auto_tech.lua:1146: in function 'parse_data_raw' __pypostprocessing__/prototypes/functions/auto_tech.lua:2512: in main chunk [C]: in function 'require' __pypostprocessing__/data-final-fixes.lua:136: in main chunk Mods to be disabled: • pypostprocessing (0.1.0) ### Steps to reproduce 1. Enable ByAE Beta 2. Add https://mods.factorio.com/mod/Rocket-Silo-Construction 3. Prof- er, observe crash ### Additional context RSC isn't near and dear to me but maybe this affects other mods as well. ### Log file _No response_
process
postprocess fail missing crafting category with rocket silo construction mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem error missing crafting category rsc ingredients fluids in fluids out for rsc construction stack traceback in function error pypostprocessing prototypes functions auto tech lua in function parse recipe pypostprocessing prototypes functions auto tech lua in function parse data raw pypostprocessing prototypes functions auto tech lua in main chunk in function require pypostprocessing data final fixes lua in main chunk mods to be disabled • pypostprocessing steps to reproduce enable byae beta add prof er observe crash additional context rsc isn t near and dear to me but maybe this affects other mods as well log file no response
1
243,136
26,277,932,771
IssuesEvent
2023-01-07 01:32:12
DavidSpek/kale
https://api.github.com/repos/DavidSpek/kale
opened
CVE-2022-40897 (Medium) detected in setuptools-44.1.1-py2.py3-none-any.whl
security vulnerability
## CVE-2022-40897 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>setuptools-44.1.1-py2.py3-none-any.whl</b></p></summary> <p>Easily download, build, install, upgrade, and uninstall Python packages</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/e1/b7/182161210a13158cd3ccc41ee19aadef54496b74f2817cc147006ec932b4/setuptools-44.1.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/e1/b7/182161210a13158cd3ccc41ee19aadef54496b74f2817cc147006ec932b4/setuptools-44.1.1-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /examples/taxi-cab-classification/requirements.txt</p> <p>Path to vulnerable library: /examples/taxi-cab-classification/requirements.txt,/examples/titanic-ml-dataset/requirements.txt,/backend</p> <p> Dependency Hierarchy: - :x: **setuptools-44.1.1-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/DavidSpek/kale/commits/b3bfd7086d7e8afe9e0e3ed49d6cf60a9adcea21">b3bfd7086d7e8afe9e0e3ed49d6cf60a9adcea21</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Python Packaging Authority (PyPA) setuptools before 65.5.1 allows remote attackers to cause a denial of service via HTML in a crafted package or custom PackageIndex page. There is a Regular Expression Denial of Service (ReDoS) in package_index.py.
<p>Publish Date: 2022-12-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40897>CVE-2022-40897</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://pyup.io/posts/pyup-discovers-redos-vulnerabilities-in-top-python-packages/">https://pyup.io/posts/pyup-discovers-redos-vulnerabilities-in-top-python-packages/</a></p> <p>Release Date: 2022-12-23</p> <p>Fix Resolution: setuptools - 65.5.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-40897 (Medium) detected in setuptools-44.1.1-py2.py3-none-any.whl - ## CVE-2022-40897 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>setuptools-44.1.1-py2.py3-none-any.whl</b></p></summary> <p>Easily download, build, install, upgrade, and uninstall Python packages</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/e1/b7/182161210a13158cd3ccc41ee19aadef54496b74f2817cc147006ec932b4/setuptools-44.1.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/e1/b7/182161210a13158cd3ccc41ee19aadef54496b74f2817cc147006ec932b4/setuptools-44.1.1-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /examples/taxi-cab-classification/requirements.txt</p> <p>Path to vulnerable library: /examples/taxi-cab-classification/requirements.txt,/examples/titanic-ml-dataset/requirements.txt,/backend</p> <p> Dependency Hierarchy: - :x: **setuptools-44.1.1-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/DavidSpek/kale/commits/b3bfd7086d7e8afe9e0e3ed49d6cf60a9adcea21">b3bfd7086d7e8afe9e0e3ed49d6cf60a9adcea21</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Python Packaging Authority (PyPA) setuptools before 65.5.1 allows remote attackers to cause a denial of service via HTML in a crafted package or custom PackageIndex page. There is a Regular Expression Denial of Service (ReDoS) in package_index.py.
<p>Publish Date: 2022-12-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40897>CVE-2022-40897</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://pyup.io/posts/pyup-discovers-redos-vulnerabilities-in-top-python-packages/">https://pyup.io/posts/pyup-discovers-redos-vulnerabilities-in-top-python-packages/</a></p> <p>Release Date: 2022-12-23</p> <p>Fix Resolution: setuptools - 65.5.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in setuptools none any whl cve medium severity vulnerability vulnerable library setuptools none any whl easily download build install upgrade and uninstall python packages library home page a href path to dependency file examples taxi cab classification requirements txt path to vulnerable library examples taxi cab classification requirements txt examples titanic ml dataset requirements txt backend dependency hierarchy x setuptools none any whl vulnerable library found in head commit a href found in base branch master vulnerability details python packaging authority pypa setuptools before allows remote attackers to cause a denial of service via html in a crafted package or custom packageindex page there is a regular expression denial of service redos in package index py publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution setuptools step up your open source security game with mend
0
5,880
8,704,373,857
IssuesEvent
2018-12-05 19:14:10
comic/grand-challenge.org
https://api.github.com/repos/comic/grand-challenge.org
closed
No response when uploading a new algorithm using the wrong file format
area/processors bug priority/p1
# Recipe 1. Go to https://grand-challenge.org/algorithms/create/ 2. Upload, for example, a `.tar.gz` file # Result Upload completes, nothing happens.
1.0
No response when uploading a new algorithm using the wrong file format - # Recipe 1. Go to https://grand-challenge.org/algorithms/create/ 2. Upload, for example, a `.tar.gz` file # Result Upload completes, nothing happens.
process
no response when uploading a new algorithm using the wrong file format recipe go to upload for example a tar gz file result upload completes nothing happens
1
16,151
20,508,862,813
IssuesEvent
2022-03-01 02:46:09
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Possibility to store settings used in Processing Toolbox - Aggregate
Feedback stale Processing Feature Request
### Feature description Came across the Aggregate algorithm in the Processing Toolbox. And wow - this can replace a lot of calculations/programming I have previously done outside QGIS. But as Aggregate is actually doing your "programming" it would be very nice to be able to save "the programming" for later re-use: ![image](https://user-images.githubusercontent.com/1887531/149651100-cf44161d-ab09-4d5c-a38a-6a665559fa14.png) ### Additional context _No response_
1.0
Possibility to store settings used in Processing Toolbox - Aggregate - ### Feature description Came across the Aggregate algorithm in the Processing Toolbox. And wow - this can replace a lot of calculations/programming I have previously done outside QGIS. But as Aggregate is actually doing your "programming" it would be very nice to be able to save "the programming" for later re-use: ![image](https://user-images.githubusercontent.com/1887531/149651100-cf44161d-ab09-4d5c-a38a-6a665559fa14.png) ### Additional context _No response_
process
possibility to store settings used in processing toolbox aggregate feature description came across the aggregate algorithm in the processing toolbox and wow this can replace a lot of calculations programming i have previously done outside qgis but as aggregate is actually doing your programming it would be very nice to be able to save the programming for later re use additional context no response
1
183,753
14,248,580,178
IssuesEvent
2020-11-19 13:08:41
Aspirants-FS-FE/SheduliZZZer
https://api.github.com/repos/Aspirants-FS-FE/SheduliZZZer
closed
Write tests for ExpertCard.js
JS tests
# **Problem description** There are no tests for the ExpertCard class # **Solution** Write tests for the ExpertCard class
1.0
Write tests for ExpertCard.js - # **Problem description** There are no tests for the ExpertCard class # **Solution** Write tests for the ExpertCard class
non_process
write tests for expertcard js problem description there are no tests for the expertcard class solution write tests for the expertcard class
0
16,872
22,152,591,024
IssuesEvent
2022-06-03 18:35:16
vmware-tanzu/sonobuoy-plugins
https://api.github.com/repos/vmware-tanzu/sonobuoy-plugins
opened
Simplify reuse of post-processor tasks
Post-Processor
We will have some examples of how to do things like remove skips and people can copy/paste them, but we should be able to make it even easier in those cases and allow users to just specify which common tweaks to do. We may be able to hardcode these ytt files and then allow them to just be referenced? That would avoid us having to add lots of special case flags and would also allow custom ordering etc.
1.0
Simplify reuse of post-processor tasks - We will have some examples of how to do things like remove skips and people can copy/paste them, but we should be able to make it even easier in those cases and allow users to just specify which common tweaks to do. We may be able to hardcode these ytt files and then allow them to just be referenced? That would avoid us having to add lots of special case flags and would also allow custom ordering etc.
process
simplify reuse of post processor tasks we will have some examples of how to do things like remove skips and people can copy paste them but we should be able to make it even easier in those cases and allow users to just specify which common tweaks to do we may be able to hardcode these ytt files and then allow them to just be referenced that would avoid us having to add lots of special case flags and would also allow custom ordering etc
1
17,169
22,744,302,781
IssuesEvent
2022-07-07 07:48:18
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
opened
Obsoletion of molecular functions represented as biological processes
obsoletion multi-species process
Dear all, The proposal has been made to obsolete: GO:0044481 envenomation resulting in proteolysis in another organism 7 EXP - represents a MF -> removed, all annotations were redundant with other annotations to 'envenomation resulting in negative regulation of platelet aggregation in another organism' and/or 'envenomation resulting in fibrinogenolysis in another organism' GO:0099127 envenomation resulting in positive regulation of argininosuccinate synthase activity in another organism 0 annotations GO:0044543 envenomation resulting in zymogen activation in another organism 0 annotations The reason for obsoletion is that these represent molecular functions. Annotations have been removed. There are no mappings to these terms; these terms are not present in any subsets. You can comment on the ticket: Thanks, Pascale
1.0
Obsoletion of molecular functions represented as biological processes - Dear all, The proposal has been made to obsolete: GO:0044481 envenomation resulting in proteolysis in another organism 7 EXP - represents a MF -> removed, all annotations were redundant with other annotations to 'envenomation resulting in negative regulation of platelet aggregation in another organism' and/or 'envenomation resulting in fibrinogenolysis in another organism' GO:0099127 envenomation resulting in positive regulation of argininosuccinate synthase activity in another organism 0 annotations GO:0044543 envenomation resulting in zymogen activation in another organism 0 annotations The reason for obsoletion is that these represent molecular functions. Annotations have been removed. There are no mappings to these terms; these terms are not present in any subsets. You can comment on the ticket: Thanks, Pascale
process
obsoletion of molecular functions represented as biological processes dear all the proposal has been made to obsolete go envenomation resulting in proteolysis in another organism exp represents a mf removed all annotations were redundant with other annotations to envenomation resulting in negative regulation of platelet aggregation in another organism and or envenomation resulting in fibrinogenolysis in another organism go envenomation resulting in positive regulation of argininosuccinate synthase activity in another organism annotations go envenomation resulting in zymogen activation in another organism annotations the reason for obsoletion is that these represent molecular functions annotations have been removed there are no mappings to these terms these terms are not present in any subsets you can comment on the ticket thanks pascale
1
13,654
16,361,417,471
IssuesEvent
2021-05-14 10:03:18
w3c/rdf-star
https://api.github.com/repos/w3c/rdf-star
closed
Issues running EARL report generator
process
Capturing points discovered while preparing a EARL PR: 1. The instructions say `/releases` but other reports are in `/reports`. But `/reports` is also the directory for the HTML production. 1. They are not linked to on the home page. 1. There are lot of stray red "X" on the reports. (these go away when there is more than one system in a section) 1. `doap:developer` may be a blank node.
1.0
Issues running EARL report generator - Capturing points discovered while preparing a EARL PR: 1. The instructions say `/releases` but other reports are in `/reports`. But `/reports` is also the directory for the HTML production. 1. They are not linked to on the home page. 1. There are lot of stray red "X" on the reports. (these go away when there is more than one system in a section) 1. `doap:developer` may be a blank node.
process
issues running earl report generator capturing points discovered while preparing a earl pr the instructions say releases but other reports are in reports but reports is also the directory for the html production they are not linked to on the home page there are lot of stray red x on the reports these go away when there is more than one system in a section doap developer may be a blank node
1
9,177
12,226,633,044
IssuesEvent
2020-05-03 11:51:50
labnote-ant/labnote
https://api.github.com/repos/labnote-ant/labnote
closed
The target of process view is not updated
bug process-view
Radio buttons are not working and default checked functions need to be added. - Stirring, Heat, Water bath, Cooling, Filtering
1.0
The target of process view is not updated - Radio buttons are not working and default checked functions need to be added. - Stirring, Heat, Water bath, Cooling, Filtering
process
the target of process view is not updated radio buttons are not working and default checked functions need to be added stirring heat water bath cooling filtering
1
84,323
7,916,843,182
IssuesEvent
2018-07-04 07:55:25
researchstudio-sat/webofneeds
https://api.github.com/repos/researchstudio-sat/webofneeds
closed
Interface representation of Use Cases
UX User Story refactoring opportunity testing
Depends on #1708, #2022 Related to #1955, #1975, #2024 Use case selection needs a new interface component to select use cases. The list of use cases to display should be read from the config file. This requires refactoring of `create-post` and `create-isseeks`.
1.0
Interface representation of Use Cases - Depends on #1708, #2022 Related to #1955, #1975, #2024 Use case selection needs a new interface component to select use cases. The list of use cases to display should be read from the config file. This requires refactoring of `create-post` and `create-isseeks`.
non_process
interface representation of use cases depends on related to use case selection needs a new interface component to select use cases the list of use cases to display should be read from the config file this requires refactoring of create post and create isseeks
0
12,893
15,283,492,097
IssuesEvent
2021-02-23 10:55:56
Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung
https://api.github.com/repos/Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung
opened
The website should have a registration button.
backburner backend frontburner frontend register process website
**As** a user, **I want** to be able to register via a button **so that** I can log in
1.0
The website should have a registration button. - **As** a user, **I want** to be able to register via a button **so that** I can log in
process
the website should have a registration button as a user i want to be able to register via a button so that i can log in
1
6,618
9,699,028,002
IssuesEvent
2019-05-26 10:35:59
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
batch mode python error on QGIS master
Bug Feedback Priority: high Processing Regression
Author Name: **Giovanni Manghi** (@gioman) Original Redmine Issue: [22048](https://issues.qgis.org/issues/22048) Affected QGIS version: 3.7(master) Redmine category:processing/core Assignee: Alexander Bruy --- Traceback (most recent call last): File "/usr/share/qgis/python/plugins/processing/gui/wrappers.py", line 212, in widgetValue return self.value() File "/usr/share/qgis/python/plugins/processing/gui/wrappers.py", line 1348, in value return self.widget.value() AttributeError: 'BatchInputSelectionPanel' object has no attribute 'value' after that the GUI opens but is not possible to select inputs already loaded in the project, it only allows to select them from file system, but then for each layer the same error is thrown.
1.0
batch mode python error on QGIS master - Author Name: **Giovanni Manghi** (@gioman) Original Redmine Issue: [22048](https://issues.qgis.org/issues/22048) Affected QGIS version: 3.7(master) Redmine category:processing/core Assignee: Alexander Bruy --- Traceback (most recent call last): File "/usr/share/qgis/python/plugins/processing/gui/wrappers.py", line 212, in widgetValue return self.value() File "/usr/share/qgis/python/plugins/processing/gui/wrappers.py", line 1348, in value return self.widget.value() AttributeError: 'BatchInputSelectionPanel' object has no attribute 'value' after that the GUI opens but is not possible to select inputs already loaded in the project, it only allows to select them from file system, but then for each layer the same error is thrown.
process
batch mode python error on qgis master author name giovanni manghi gioman original redmine issue affected qgis version master redmine category processing core assignee alexander bruy traceback most recent call last file usr share qgis python plugins processing gui wrappers py line in widgetvalue return self value file usr share qgis python plugins processing gui wrappers py line in value return self widget value attributeerror batchinputselectionpanel object has no attribute value after that the gui opens but is not possible to select inputs already loaded in the project it only allows to select them from file system but then for each layer the same error is thrown
1
274,363
23,834,103,560
IssuesEvent
2022-09-06 02:51:12
Kong/kubernetes-ingress-controller
https://api.github.com/repos/Kong/kubernetes-ingress-controller
opened
Use memory FS for kustomization in e2e tests
area/testing
### Is there an existing issue for this? - [X] I have searched the existing issues ### Problem Statement Now we run kustomization for e2e tests in temporary directories. This brings temporary files in local filesystem. ### Proposed Solution https://pkg.go.dev/sigs.k8s.io/kustomize/kyaml@v0.13.9/filesys#MakeFsInMemory provides in-memory file systems to store kustomization files, we could use this to avoid local file system usage. ### Additional information _No response_ ### Acceptance Criteria _No response_
1.0
Use memory FS for kustomization in e2e tests - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Problem Statement Now we run kustomization for e2e tests in temporary directories. This brings temporary files in local filesystem. ### Proposed Solution https://pkg.go.dev/sigs.k8s.io/kustomize/kyaml@v0.13.9/filesys#MakeFsInMemory provides in-memory file systems to store kustomization files, we could use this to avoid local file system usage. ### Additional information _No response_ ### Acceptance Criteria _No response_
non_process
use memory fs for kustomization in tests is there an existing issue for this i have searched the existing issues problem statement now we run kustomization for tests in temporary directories this brings temporary files in local filesystem proposed solution provides in memory file systems to store kustomization files we could use this to avoid local file system usage additional information no response acceptance criteria no response
0
303,225
22,959,920,437
IssuesEvent
2022-07-19 14:36:24
WordPress/Documentation-Issue-Tracker
https://api.github.com/repos/WordPress/Documentation-Issue-Tracker
closed
[HelpHub] Quote Block
user documentation good first issue 5.9 block editor
Article: https://wordpress.org/support/article/quote-block/ ## General - [ ] make sure all screenshots/videos are relevant for current version (5.9) - [ ] update changelog at the end of the article Issue migrated from Trello: https://trello.com/c/FA20cGRv/4-quote-block
1.0
[HelpHub] Quote Block - Article: https://wordpress.org/support/article/quote-block/ ## General - [ ] make sure all screenshots/videos are relevant for current version (5.9) - [ ] update changelog at the end of the article Issue migrated from Trello: https://trello.com/c/FA20cGRv/4-quote-block
non_process
quote block article general make sure all screenshots videos are relevant for current version update changelog at the end of the article issue migrated from trello
0
44,747
7,126,761,347
IssuesEvent
2018-01-20 14:18:56
sinonjs/sinon
https://api.github.com/repos/sinonjs/sinon
closed
Document how to use native promises along with faked timers
Documentation Help wanted stale
As #738 and its heaps of comments show there exists quite a lot of confusion on how timers and promises mix when faking time. We should create an article in the upcoming documentation site that shows how to do this. [This comment](https://github.com/sinonjs/sinon/issues/738#issuecomment-238224281) could serve as a basis.
1.0
Document how to use native promises along with faked timers - As #738 and its heaps of comments show there exists quite a lot of confusion on how timers and promises mix when faking time. We should create an article in the upcoming documentation site that shows how to do this. [This comment](https://github.com/sinonjs/sinon/issues/738#issuecomment-238224281) could serve as a basis.
non_process
document how to use native promises along with faked timers as and its heaps of comments show there exists quite a lot of confusion on how timers and promises mix when faking time we should create an article in the upcoming documentation site that shows how to do this could serve as a basis
0
8,942
7,498,519,522
IssuesEvent
2018-04-09 05:13:56
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
System.Security.Cryptography.Pkcs '1.2.840.113549.1.1.5' is not a known hash algorithm.
area-System.Security question
i'm trying to compute a cms SHA1RSA using the pre release version(4.5.0-preview1-26216-02) of System.Security.Cryptography.Pkcs. Oid.FromFriendlyName("SHA1RSA",OidGroup.SignatureAlgorithm) returns the corect digest algorithem, but upon calling encode, i get the exception that i,m not using any known hash algorithm. Is there any short term plan to incorporate it? Encoding snippet:---------- protected byte[] GenerateSignature(byte[] fileContent) { CmsSigner signer = new CmsSigner(SubjectIdentifierType.IssuerAndSerialNumber, ClientCert); SignedCms signedCms = new SignedCms(new ContentInfo(fileContent), false); signer = new CmsSigner(SubjectIdentifierType.IssuerAndSerialNumber, ClientCert); signer.DigestAlgorithm = Oid.FromFriendlyName("SHA1RSA",OidGroup.SignatureAlgorithm); signedCms.ComputeSignature(signer, false); var signature = signedCms.Encode(); return signature; }
True
System.Security.Cryptography.Pkcs '1.2.840.113549.1.1.5' is not a known hash algorithm. - i'm trying to compute a cms SHA1RSA using the pre release version(4.5.0-preview1-26216-02) of System.Security.Cryptography.Pkcs. Oid.FromFriendlyName("SHA1RSA",OidGroup.SignatureAlgorithm) returns the corect digest algorithem, but upon calling encode, i get the exception that i,m not using any known hash algorithm. Is there any short term plan to incorporate it? Encoding snippet:---------- protected byte[] GenerateSignature(byte[] fileContent) { CmsSigner signer = new CmsSigner(SubjectIdentifierType.IssuerAndSerialNumber, ClientCert); SignedCms signedCms = new SignedCms(new ContentInfo(fileContent), false); signer = new CmsSigner(SubjectIdentifierType.IssuerAndSerialNumber, ClientCert); signer.DigestAlgorithm = Oid.FromFriendlyName("SHA1RSA",OidGroup.SignatureAlgorithm); signedCms.ComputeSignature(signer, false); var signature = signedCms.Encode(); return signature; }
non_process
system security cryptography pkcs is not a known hash algorithm i m trying to compute a cms using the pre release version of system security cryptography pkcs oid fromfriendlyname oidgroup signaturealgorithm returns the corect digest algorithem but upon calling encode i get the exception that i m not using any known hash algorithm is there any short term plan to incorporate it encoding snippet protected byte generatesignature byte filecontent cmssigner signer new cmssigner subjectidentifiertype issuerandserialnumber clientcert signedcms signedcms new signedcms new contentinfo filecontent false signer new cmssigner subjectidentifiertype issuerandserialnumber clientcert signer digestalgorithm oid fromfriendlyname oidgroup signaturealgorithm signedcms computesignature signer false var signature signedcms encode return signature
0
5,361
8,188,420,276
IssuesEvent
2018-08-30 01:48:27
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Fish shell buffering issue
log-processing question
`tail -f -n 2000 /var/log/nginx.access.log | grep -v --line-buffered "nginx_report.html" | goaccess -o nginx_report.html --real-time-html --addr=127.0.0.1 --log-format='%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^' --time-format='%T' --date-format='%d/%b/%Y'` while running this command on fish shell nothing happens and its stuck if i press `ctrl + c` then i got the message `WebSocket server ready to accept new client connections` If i run this command without grep its working fine on fish shell example `tail -f -n 2000 /var/log/nginx.access.log | goaccess -o nginx_report.html --real-time-html --addr=127.0.0.1 --log-format='%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^' --time-format='%T' --date-format='%d/%b/%Y'` and show instantly `WebSocket server ready to accept new client connections`
1.0
Fish shell buffering issue - `tail -f -n 2000 /var/log/nginx.access.log | grep -v --line-buffered "nginx_report.html" | goaccess -o nginx_report.html --real-time-html --addr=127.0.0.1 --log-format='%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^' --time-format='%T' --date-format='%d/%b/%Y'` while running this command on fish shell nothing happens and its stuck if i press `ctrl + c` then i got the message `WebSocket server ready to accept new client connections` If i run this command without grep its working fine on fish shell example `tail -f -n 2000 /var/log/nginx.access.log | goaccess -o nginx_report.html --real-time-html --addr=127.0.0.1 --log-format='%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^' --time-format='%T' --date-format='%d/%b/%Y'` and show instantly `WebSocket server ready to accept new client connections`
process
fish shell buffering issue tail f n var log nginx access log grep v line buffered nginx report html goaccess o nginx report html real time html addr log format h r s b r u t time format t date format d b y while running this command on fish shell nothing happens and its stuck if i press ctrl c then i got the message websocket server ready to accept new client connections if i run this command without grep its working fine on fish shell example tail f n var log nginx access log goaccess o nginx report html real time html addr log format h r s b r u t time format t date format d b y and show instantly websocket server ready to accept new client connections
1
22,268
30,821,707,142
IssuesEvent
2023-08-01 16:49:18
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
pyprland 1.4.0 has 1 GuardDog issues
guarddog silent-process-execution
https://pypi.org/project/pyprland https://inspector.pypi.io/project/pyprland ```{ "dependency": "pyprland", "version": "1.4.0", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "pyprland-1.4.0/pyprland/plugins/scratchpads.py:187", "code": " proc = subprocess.Popen(\n scratch.conf[\"command\"],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n shell=True,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpjsuqivwq/pyprland" } }```
1.0
pyprland 1.4.0 has 1 GuardDog issues - https://pypi.org/project/pyprland https://inspector.pypi.io/project/pyprland ```{ "dependency": "pyprland", "version": "1.4.0", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "pyprland-1.4.0/pyprland/plugins/scratchpads.py:187", "code": " proc = subprocess.Popen(\n scratch.conf[\"command\"],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n shell=True,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpjsuqivwq/pyprland" } }```
process
pyprland has guarddog issues dependency pyprland version result issues errors results silent process execution location pyprland pyprland plugins scratchpads py code proc subprocess popen n scratch conf n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n shell true n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpjsuqivwq pyprland
1
31,149
8,663,547,785
IssuesEvent
2018-11-28 17:37:22
JuliaLang/julia
https://api.github.com/repos/JuliaLang/julia
closed
Compiling on RPi3B+, Part 2
32-bit arm build
Here is the output when attempting to compile with _make_: ``` Summary: Total ─────── 735.898312 seconds Base: ─────── 225.480952 seconds 30.6402% Stdlibs: ──── 510.399024 seconds 69.3573% JULIA usr/lib/julia/sys-o.a Generating precompile statements... 1086 generated in 734.456520 seconds (overhead 443.537380 seconds) Killed *** This error is usually fixed by running `make clean`. If the error persists, try `make cleanall`. *** mv: cannot stat '/home/pi/Development/julia/usr/lib/julia/sys-o.a.tmp': No such file or directory Makefile:216: recipe for target '/home/pi/Development/julia/usr/lib/julia/sys-o.a' failed make[1]: *** [/home/pi/Development/julia/usr/lib/julia/sys-o.a] Error 1 Makefile:78: recipe for target 'julia-sysimg-release' failed make: *** [julia-sysimg-release] Error 2 ``` Executing _make clean_ or _make cleanall_ ( or _git clean -xfd_ ) has no effect, this message comes each time. The *Generating precompile statements* is presented on the screen for 10 to 15 minutes before the rest of the sentence finishes. I attempted to run this with JULIA_PRECOMPILE=0 ( both in Make.user and via export ) and the error was still present. Please note, that i compiled this without the *-j* option, which generates other errors.
1.0
Compiling on RPi3B+, Part 2 - Here is the output when attempting to compile with _make_: ``` Summary: Total ─────── 735.898312 seconds Base: ─────── 225.480952 seconds 30.6402% Stdlibs: ──── 510.399024 seconds 69.3573% JULIA usr/lib/julia/sys-o.a Generating precompile statements... 1086 generated in 734.456520 seconds (overhead 443.537380 seconds) Killed *** This error is usually fixed by running `make clean`. If the error persists, try `make cleanall`. *** mv: cannot stat '/home/pi/Development/julia/usr/lib/julia/sys-o.a.tmp': No such file or directory Makefile:216: recipe for target '/home/pi/Development/julia/usr/lib/julia/sys-o.a' failed make[1]: *** [/home/pi/Development/julia/usr/lib/julia/sys-o.a] Error 1 Makefile:78: recipe for target 'julia-sysimg-release' failed make: *** [julia-sysimg-release] Error 2 ``` Executing _make clean_ or _make cleanall_ ( or _git clean -xfd_ ) has no effect, this message comes each time. The *Generating precompile statements* is presented on the screen for 10 to 15 minutes before the rest of the sentence finishes. I attempted to run this with JULIA_PRECOMPILE=0 ( both in Make.user and via export ) and the error was still present. Please note, that i compiled this without the *-j* option, which generates other errors.
non_process
compiling on part here is the output when attempting to compile with make summary total ─────── seconds base ─────── seconds stdlibs ──── seconds julia usr lib julia sys o a generating precompile statements generated in seconds overhead seconds killed this error is usually fixed by running make clean if the error persists try make cleanall mv cannot stat home pi development julia usr lib julia sys o a tmp no such file or directory makefile recipe for target home pi development julia usr lib julia sys o a failed make error makefile recipe for target julia sysimg release failed make error executing make clean or make cleanall or git clean xfd has no effect this message comes each time the generating precompile statements is presented on the screen for to minutes before the rest of the sentence finishes i attempted to run this with julia precompile both in make user and via export and the error was still present please note that i compiled this without the j option which generates other errors
0
5,565
8,406,269,462
IssuesEvent
2018-10-11 17:27:44
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
closed
Add units field for Parametric:SetValueForRun
ParametricPreprocessor WontFix suggestion
Add a field to the Parametric:SetValueForRun that can contain a list of types of dimensions so that the \unitsBasedOnField specifier can be used and the IDF Editor will convert the field values used to set the parameters appropriately when using IP units. This is based on Helpdesk ticket #10038
1.0
Add units field for Parametric:SetValueForRun - Add a field to the Parametric:SetValueForRun that can contain a list of types of dimensions so that the \unitsBasedOnField specifier can be used and the IDF Editor will convert the field values used to set the parameters appropriately when using IP units. This is based on Helpdesk ticket #10038
process
add units field for parametric setvalueforrun add a field to the parametric setvalueforrun that can contain a list of types of dimensions so that the unitsbasedonfield specifier can be used and the idf editor will convert the field values used to set the parameters appropriately when using ip units this is based on helpdesk ticket
1
331,209
24,296,868,967
IssuesEvent
2022-09-29 10:47:51
Gamify-IT/issues
https://api.github.com/repos/Gamify-IT/issues
closed
Documentation: Add install manual for overworld
documentation storypoint/1
# Description There is no install manual for the overworld. Create a manual here: <https://github.com/Gamify-IT/docs/tree/main/install-manuals> # DoD - [ ] manual describes the backend options (similar to [chickenshock](https://github.com/Gamify-IT/docs/blob/main/install-manuals/minigames/chickenshock.md#backend)) - [ ] manual describes the frontend options (similar to [chickenshock](https://github.com/Gamify-IT/docs/blob/main/install-manuals/minigames/chickenshock.md#frontend))
1.0
Documentation: Add install manual for overworld - # Description There is no install manual for the overworld. Create a manual here: <https://github.com/Gamify-IT/docs/tree/main/install-manuals> # DoD - [ ] manual describes the backend options (similar to [chickenshock](https://github.com/Gamify-IT/docs/blob/main/install-manuals/minigames/chickenshock.md#backend)) - [ ] manual describes the frontend options (similar to [chickenshock](https://github.com/Gamify-IT/docs/blob/main/install-manuals/minigames/chickenshock.md#frontend))
non_process
documentation add install manual for overworld description there is no install manual for the overworld create a manual here dod manual describes the backend options similar to manual describes the frontend options similar to
0
10,591
13,400,934,240
IssuesEvent
2020-09-03 16:31:25
jgraley/inferno-cpp2v
https://api.github.com/repos/jgraley/inferno-cpp2v
opened
CSP comparative mode
Constraint Processing Developability
Enable both solver and conjecture to be enabled at once by `#define` in `AndRuleEngine`. When both enabled, try comparing the outputs (which should just be assignments/couplings if #119 has been done which I would say should have been).
1.0
CSP comparative mode - Enable both solver and conjecture to be enabled at once by `#define` in `AndRuleEngine`. When both enabled, try comparing the outputs (which should just be assignments/couplings if #119 has been done which I would say should have been).
process
csp comparative mode enable both solver and conjecture to be enabled at once by define in andruleengine when both enabled try comparing the outputs which should just be assignments couplings if has been done which i would say should have been
1
10,950
13,756,692,995
IssuesEvent
2020-10-06 20:21:10
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
PostCSS 8
:baby: Good First Issue :raising_hand_woman: Feature CSS Preprocessing ✨ Parcel 2
<!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🙋 feature request Update PostCSS from 7 to 8 in `v2` branch. ## 🔦 Context [PostCSS 8 changelog](https://github.com/postcss/postcss/releases/tag/8.0.0) PostCSS 8 came with a new plugin API. Right now Parcel users can use new PostCSS plugins because Parcel uses PostCSS 7. PostCSS 8 has no breaking changes for Parcel 2 since Parcel 2 already dropped Node.js 6 and 8 support.
1.0
PostCSS 8 - <!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🙋 feature request Update PostCSS from 7 to 8 in `v2` branch. ## 🔦 Context [PostCSS 8 changelog](https://github.com/postcss/postcss/releases/tag/8.0.0) PostCSS 8 came with a new plugin API. Right now Parcel users can use new PostCSS plugins because Parcel uses PostCSS 7. PostCSS 8 has no breaking changes for Parcel 2 since Parcel 2 already dropped Node.js 6 and 8 support.
process
postcss thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🙋 feature request update postcss from to in branch 🔦 context postcss came with a new plugin api right now parcel users can use new postcss plugins because parcel uses postcss postcss has no breaking changes for parcel since parcel already dropped node js and support
1
19,544
10,368,398,258
IssuesEvent
2019-09-07 16:35:53
AOSC-Dev/aosc-os-abbs
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
opened
libreoffice: security update to 6.2.7
security to-stable upgrade
<!-- Please remove items do not apply. --> **CVE IDs:** CVE-2019-9854 **Other security advisory IDs:** N/A **Descriptions:** https://www.libreoffice.org/about-us/security/advisories/cve-2019-9854/ LibreOffice has a feature where documents can specify that pre-installed macros can be executed on various script events such as mouse-over, document-open etc. Access is intended to be restricted to scripts under the share/Scripts/python, user/Scripts/python sub-directories of the LibreOffice install. Protection was added, to address CVE-2019-9852, to avoid a directory traversal attack where scripts in arbitrary locations on the file system could be executed by employing a URL encoding attack to defeat the path verification step. However this protection could be bypassed by taking advantage of a flaw in how LibreOffice assembled the final script URL location directly from components of the passed in path as opposed to solely from the sanitized output of the path verification step. In the fixed versions, the parsed url describing the script location is assembled from the output of the verification step. **Architectural progress:** <!-- Please remove any architecture to which the security vulnerabilities do not apply. --> - [ ] AMD64 `amd64` - [ ] 32-bit Optional Environment `optenv32` - [ ] AArch64 `arm64` - [ ] ARMv7 `armel` - [ ] PowerPC 64-bit BE `ppc64` - [ ] PowerPC 32-bit BE `powerpc` <!-- If the specified package is `noarch`, please use the stub below. --> <!-- - [ ] Architecture-independent `noarch` -->
True
libreoffice: security update to 6.2.7 - <!-- Please remove items do not apply. --> **CVE IDs:** CVE-2019-9854 **Other security advisory IDs:** N/A **Descriptions:** https://www.libreoffice.org/about-us/security/advisories/cve-2019-9854/ LibreOffice has a feature where documents can specify that pre-installed macros can be executed on various script events such as mouse-over, document-open etc. Access is intended to be restricted to scripts under the share/Scripts/python, user/Scripts/python sub-directories of the LibreOffice install. Protection was added, to address CVE-2019-9852, to avoid a directory traversal attack where scripts in arbitrary locations on the file system could be executed by employing a URL encoding attack to defeat the path verification step. However this protection could be bypassed by taking advantage of a flaw in how LibreOffice assembled the final script URL location directly from components of the passed in path as opposed to solely from the sanitized output of the path verification step. In the fixed versions, the parsed url describing the script location is assembled from the output of the verification step. **Architectural progress:** <!-- Please remove any architecture to which the security vulnerabilities do not apply. --> - [ ] AMD64 `amd64` - [ ] 32-bit Optional Environment `optenv32` - [ ] AArch64 `arm64` - [ ] ARMv7 `armel` - [ ] PowerPC 64-bit BE `ppc64` - [ ] PowerPC 32-bit BE `powerpc` <!-- If the specified package is `noarch`, please use the stub below. --> <!-- - [ ] Architecture-independent `noarch` -->
non_process
libreoffice security update to cve ids cve other security advisory ids n a descriptions libreoffice has a feature where documents can specify that pre installed macros can be executed on various script events such as mouse over document open etc access is intended to be restricted to scripts under the share scripts python user scripts python sub directories of the libreoffice install protection was added to address cve to avoid a directory traversal attack where scripts in arbitrary locations on the file system could be executed by employing a url encoding attack to defeat the path verification step however this protection could be bypassed by taking advantage of a flaw in how libreoffice assembled the final script url location directly from components of the passed in path as opposed to solely from the sanitized output of the path verification step in the fixed versions the parsed url describing the script location is assembled from the output of the verification step architectural progress bit optional environment armel powerpc bit be powerpc bit be powerpc
0
15,022
18,735,843,722
IssuesEvent
2021-11-04 07:23:29
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[processing][needs-docs] Adjust status of controls in algorithm dialog
Text Showcase/Screenshots User Manual Automatic new feature Easy fix Processing 3.14
Original commit: https://github.com/qgis/QGIS/commit/237f1f7e2dc7372071254ebf2ac267e140e2e6da by gacarrillor + Run button is not shown anymore in the Log tab, therefore, you can only run algorithms from the Parameters tab. + While running an algorithm, the Parameters tab is now blocked. + When an algorithm execution finishes (either successfully or not), a new button Change Parameters is shown in the Log tab. + The Batch Algorithm Dialog is now consistent with the described behavior (before, it blocked the Parameters panel, but not the tab; and it was the only dialog blocking parameters widgets). These changes were applied to the Algorithm Dialog and Batch Algorithm Dialog, and work on Edit in place dialogs as well.
1.0
[processing][needs-docs] Adjust status of controls in algorithm dialog - Original commit: https://github.com/qgis/QGIS/commit/237f1f7e2dc7372071254ebf2ac267e140e2e6da by gacarrillor + Run button is not shown anymore in the Log tab, therefore, you can only run algorithms from the Parameters tab. + While running an algorithm, the Parameters tab is now blocked. + When an algorithm execution finishes (either successfully or not), a new button Change Parameters is shown in the Log tab. + The Batch Algorithm Dialog is now consistent with the described behavior (before, it blocked the Parameters panel, but not the tab; and it was the only dialog blocking parameters widgets). These changes were applied to the Algorithm Dialog and Batch Algorithm Dialog, and work on Edit in place dialogs as well.
process
adjust status of controls in algorithm dialog original commit by gacarrillor run button is not shown anymore in the log tab therefore you can only run algorithms from the parameters tab while running an algorithm the parameters tab is now blocked when an algorithm execution finishes either successfully or not a new button change parameters is shown in the log tab the batch algorithm dialog is now consistent with the described behavior before it blocked the parameters panel but not the tab and it was the only dialog blocking parameters widgets these changes were applied to the algorithm dialog and batch algorithm dialog and work on edit in place dialogs as well
1
7,885
11,052,659,511
IssuesEvent
2019-12-10 09:49:58
bisq-network/bisq
https://api.github.com/repos/bisq-network/bisq
closed
Payment methods should have more validations
good first issue in:gui in:trade-process
### Description Following payments should have more validations popmoney revolut Transfer with same bank Transfer with specific banks uphold Zelle #### Version 1.2.0 ### Steps to reproduce Select popmoney in account creation and enter space key in required fields Select revolut in account creation and enter space key in required fields ### Expected behaviour should have some more validation ### Actual behaviour able to add acct with space key ### Screenshots ![Screenshot from 2019-10-24 22-23-04](https://user-images.githubusercontent.com/55691179/67507665-07136f00-f6ad-11e9-9f40-9f19b45fd8ba.png) ![Screenshot from 2019-10-24 22-32-19](https://user-images.githubusercontent.com/55691179/67508212-2d85da00-f6ae-11e9-938d-3270c3c95a0f.png) ![Screenshot from 2019-10-24 22-49-54](https://user-images.githubusercontent.com/55691179/67509431-9cfcc900-f6b0-11e9-87da-2b05d30d2265.png) ![Screenshot from 2019-10-25 00-46-07](https://user-images.githubusercontent.com/55691179/67517740-dccbac80-f6c0-11e9-9246-c136016555c4.png) ![Screenshot from 2019-10-25 00-53-02](https://user-images.githubusercontent.com/55691179/67518237-fa4d4600-f6c1-11e9-8632-9b83814b38d3.png) ![Screenshot from 2019-10-25 01-03-53](https://user-images.githubusercontent.com/55691179/67518910-5b294e00-f6c3-11e9-8634-ebb8cbc2ea26.png) #### Device or machine Ubuntu #### Additional info <!-- Additional information useful for debugging (e.g. logs) -->
1.0
Payment methods should have more validations - ### Description Following payments should have more validations popmoney revolut Transfer with same bank Transfer with specific banks uphold Zelle #### Version 1.2.0 ### Steps to reproduce Select popmoney in account creation and enter space key in required fields Select revolut in account creation and enter space key in required fields ### Expected behaviour should have some more validation ### Actual behaviour able to add acct with space key ### Screenshots ![Screenshot from 2019-10-24 22-23-04](https://user-images.githubusercontent.com/55691179/67507665-07136f00-f6ad-11e9-9f40-9f19b45fd8ba.png) ![Screenshot from 2019-10-24 22-32-19](https://user-images.githubusercontent.com/55691179/67508212-2d85da00-f6ae-11e9-938d-3270c3c95a0f.png) ![Screenshot from 2019-10-24 22-49-54](https://user-images.githubusercontent.com/55691179/67509431-9cfcc900-f6b0-11e9-87da-2b05d30d2265.png) ![Screenshot from 2019-10-25 00-46-07](https://user-images.githubusercontent.com/55691179/67517740-dccbac80-f6c0-11e9-9246-c136016555c4.png) ![Screenshot from 2019-10-25 00-53-02](https://user-images.githubusercontent.com/55691179/67518237-fa4d4600-f6c1-11e9-8632-9b83814b38d3.png) ![Screenshot from 2019-10-25 01-03-53](https://user-images.githubusercontent.com/55691179/67518910-5b294e00-f6c3-11e9-8634-ebb8cbc2ea26.png) #### Device or machine Ubuntu #### Additional info <!-- Additional information useful for debugging (e.g. logs) -->
process
payment methods should have more validations description following payments should have more validations popmoney revolut transfer with same bank transfer with specific banks uphold zelle version steps to reproduce select popmoney in account creation and enter space key in required fields select revolut in account creation and enter space key in required fields expected behaviour should have some more validation actual behaviour able to add acct with space key screenshots device or machine ubuntu additional info
1
18,466
24,549,782,978
IssuesEvent
2022-10-12 11:41:30
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Angular Upgrade] Search bar > Search icon is not getting displayed
Bug P2 Participant manager Process: Fixed Process: Tested dev
**AR:** Search bar > Search icon is not getting displayed in the participant manager **ER:** Search bar > Search icon should get displayed in the participant manager **Note:** Issue needs to be fixed throughout the applications wherever search bar is available ![Searchicon](https://user-images.githubusercontent.com/86007179/177715751-8d290e35-1ec5-4c17-87fa-77878de7144a.png)
2.0
[PM] [Angular Upgrade] Search bar > Search icon is not getting displayed - **AR:** Search bar > Search icon is not getting displayed in the participant manager **ER:** Search bar > Search icon should get displayed in the participant manager **Note:** Issue needs to be fixed throughout the applications wherever search bar is available ![Searchicon](https://user-images.githubusercontent.com/86007179/177715751-8d290e35-1ec5-4c17-87fa-77878de7144a.png)
process
search bar search icon is not getting displayed ar search bar search icon is not getting displayed in the participant manager er search bar search icon should get displayed in the participant manager note issue needs to be fixed throughout the applications wherever search bar is available
1
11,221
14,000,609,390
IssuesEvent
2020-10-28 12:34:41
GetTerminus/terminus-oss
https://api.github.com/repos/GetTerminus/terminus-oss
closed
Switch our unpkg for jsdelivr
Focus: consumer Goal: Process Improvement Type: chore
Attempting to speed up our storybook load. Currently Unpkg is fairly slow. Example: https://www.jsdelivr.com/package/npm/@terminus/fe-utilities?tab=collection ![test-unpkg.png](https://images.zenhubusercontent.com/59fa074d8a75884b9085aa07/e30876e1-13aa-4d60-85e3-bdb75e6dea06) ![test-jsdelivr.png](https://images.zenhubusercontent.com/59fa074d8a75884b9085aa07/82f0a2f4-a7d7-4e60-a81c-297e6734e7ef)
1.0
Switch our unpkg for jsdelivr - Attempting to speed up our storybook load. Currently Unpkg is fairly slow. Example: https://www.jsdelivr.com/package/npm/@terminus/fe-utilities?tab=collection ![test-unpkg.png](https://images.zenhubusercontent.com/59fa074d8a75884b9085aa07/e30876e1-13aa-4d60-85e3-bdb75e6dea06) ![test-jsdelivr.png](https://images.zenhubusercontent.com/59fa074d8a75884b9085aa07/82f0a2f4-a7d7-4e60-a81c-297e6734e7ef)
process
switch our unpkg for jsdelivr attempting to speed up our storybook load currently unpkg is fairly slow example
1
769,661
27,015,712,142
IssuesEvent
2023-02-10 19:09:46
opendatahub-io/odh-dashboard
https://api.github.com/repos/opendatahub-io/odh-dashboard
closed
[Model Serving]: Add navigation for metric page
kind/enhancement priority/high feature/model-serving
### Feature description Enable navigation for model serving metrics ([Mocks Server](https://www.sketch.com/s/113593f8-5970-49d6-a352-709b07639127/a/agOKG1D) [Mock Inference](https://www.sketch.com/s/113593f8-5970-49d6-a352-709b07639127/a/AxlyJ2R)): * Display metrics in DSG project * Display metrics in Global Model Serving view * Model links to navigate to the metric page for the inference/model
1.0
[Model Serving]: Add navigation for metric page - ### Feature description Enable navigation for model serving metrics ([Mocks Server](https://www.sketch.com/s/113593f8-5970-49d6-a352-709b07639127/a/agOKG1D) [Mock Inference](https://www.sketch.com/s/113593f8-5970-49d6-a352-709b07639127/a/AxlyJ2R)): * Display metrics in DSG project * Display metrics in Global Model Serving view * Model links to navigate to the metric page for the inference/model
non_process
add navigation for metric page feature description enable navigation for model serving metrics display metrics in dsg project display metrics in global model serving view model links to navigate to the metric page for the inference model
0
3,470
6,551,295,222
IssuesEvent
2017-09-05 14:19:27
threefoldfoundation/app_backend
https://api.github.com/repos/threefoldfoundation/app_backend
closed
Current token value
process_duplicate type_feature
Show the current token value in the wallet or in some global stats menu item.
1.0
Current token value - Show the current token value in the wallet or in some global stats menu item.
process
current token value show the current token value in the wallet or in some global stats menu item
1
164
2,584,152,670
IssuesEvent
2015-02-16 13:29:18
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Filtering doesn't support default for rev flagging
bug P2 preprocess
For DITAVAL ```xml <val> <revprop action="flag" color="red"/> </val> ``` no flagging markup is generated. The spec states > If the val attribute is absent, then the revprop element declares a default behavior for any value in the rev attribute. Same applies to `prop` element, e.g. ```xml <val> <prop action="flag" color="red" /> </val> ```
1.0
Filtering doesn't support default for rev flagging - For DITAVAL ```xml <val> <revprop action="flag" color="red"/> </val> ``` no flagging markup is generated. The spec states > If the val attribute is absent, then the revprop element declares a default behavior for any value in the rev attribute. Same applies to `prop` element, e.g. ```xml <val> <prop action="flag" color="red" /> </val> ```
process
filtering doesn t support default for rev flagging for ditaval xml no flagging markup is generated the spec states if the val attribute is absent then the revprop element declares a default behavior for any value in the rev attribute same applies to prop element e g xml
1
3,381
6,505,552,975
IssuesEvent
2017-08-24 03:50:21
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Command line options that require additional values should not need colon separator
apps-all libs-utillib status-inprocess tools-all type-bug
For command options that require an additional param allow for space separating not ':' - [x] blockScrape: --check:block - [x] ethprice: --at:ts, --period:t, --when:h - [x] ethslurp: --archive:name, --block:start:end, --dates:startDate:endDate, --max:n, --name:name, --fmt:fmt, --sleep:secs, --func:functionName, --errfilt:errorOnly, --acct_id:id - [x] makeClass: --filter:name, --namespace:ns - [x] miniBlocks: --check:blockNum - [x] dataUpgrade: --testNum:n - [x] printFloat: --testNum:n - [x] cacheMan: --extract:id, --truncate:n, --renumber:old-new, --remove:addr - [x] getBlock: --source:src, --fields:where - [x] ~~getBloom: --source:src, --fields:where~~ - [x] For all: **--file:cmd_file** which is in options_base.cpp - [x] For all: **--verbose:level** which is in options_base.cpp
1.0
Command line options that require additional values should not need colon separator - For command options that require an additional param allow for space separating not ':' - [x] blockScrape: --check:block - [x] ethprice: --at:ts, --period:t, --when:h - [x] ethslurp: --archive:name, --block:start:end, --dates:startDate:endDate, --max:n, --name:name, --fmt:fmt, --sleep:secs, --func:functionName, --errfilt:errorOnly, --acct_id:id - [x] makeClass: --filter:name, --namespace:ns - [x] miniBlocks: --check:blockNum - [x] dataUpgrade: --testNum:n - [x] printFloat: --testNum:n - [x] cacheMan: --extract:id, --truncate:n, --renumber:old-new, --remove:addr - [x] getBlock: --source:src, --fields:where - [x] ~~getBloom: --source:src, --fields:where~~ - [x] For all: **--file:cmd_file** which is in options_base.cpp - [x] For all: **--verbose:level** which is in options_base.cpp
process
command line options that require additional values should not need colon separator for command options that require an additional param allow for space separating not blockscrape check block ethprice at ts period t when h ethslurp archive name block start end dates startdate enddate max n name name fmt fmt sleep secs func functionname errfilt erroronly acct id id makeclass filter name namespace ns miniblocks check blocknum dataupgrade testnum n printfloat testnum n cacheman extract id truncate n renumber old new remove addr getblock source src fields where getbloom source src fields where for all file cmd file which is in options base cpp for all verbose level which is in options base cpp
1
8,560
11,732,700,763
IssuesEvent
2020-03-11 04:39:35
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Correct Coprocessor PerfContext
challenge-program-2 component/coprocessor difficulty/easy status/help-wanted
## Description Now Coprocessor enables unified thread pool by default, which will break tasks into multiple pieces. However the structure `PerfStatisticsInstant` used in Coprocessor is Thread Local, and it cannot be safely sent between threads. Currently `PerfContext` is not marked as `!Send` and it causes misleading log. Goals: - Output correct PerfContext log in Coprocessor slow log. ## Score * 300 ## Mentor(s) * @breeswish * @sticnarf ## Recommended Skills * Rust programming
1.0
UCP: Correct Coprocessor PerfContext - ## Description Now Coprocessor enables unified thread pool by default, which will break tasks into multiple pieces. However the structure `PerfStatisticsInstant` used in Coprocessor is Thread Local, and it cannot be safely sent between threads. Currently `PerfContext` is not marked as `!Send` and it causes misleading log. Goals: - Output correct PerfContext log in Coprocessor slow log. ## Score * 300 ## Mentor(s) * @breeswish * @sticnarf ## Recommended Skills * Rust programming
process
ucp correct coprocessor perfcontext description now coprocessor enables unified thread pool by default which will break tasks into multiple pieces however the structure perfstatisticsinstant used in coprocessor is thread local and it cannot be safely sent between threads currently perfcontext is not marked as send and it causes misleading log goals output correct perfcontext log in coprocessor slow log score mentor s breeswish sticnarf recommended skills rust programming
1
18,661
24,581,493,948
IssuesEvent
2022-10-13 15:56:52
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[FHIR] Firestore > Activity responses should not get stored in the Firestore when FHIR is enabled
Bug P0 Response datastore Process: Fixed Process: Tested dev
AR: Firestore > Activity responses are getting stored in the Firestore when FHIR is enabled ER: Firestore > Activity responses should not get stored in the Firestore when FHIR is enabled
2.0
[FHIR] Firestore > Activity responses should not get stored in the Firestore when FHIR is enabled - AR: Firestore > Activity responses are getting stored in the Firestore when FHIR is enabled ER: Firestore > Activity responses should not get stored in the Firestore when FHIR is enabled
process
firestore activity responses should not get stored in the firestore when fhir is enabled ar firestore activity responses are getting stored in the firestore when fhir is enabled er firestore activity responses should not get stored in the firestore when fhir is enabled
1
21,578
29,935,358,498
IssuesEvent
2023-06-22 12:24:56
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Preprocess datasets in fixed order
enhancement preprocessor
It would be nice if the preprocessor would run over the datasets in a single task in a fixed order. This will make it easier to see how far you got and if you have now fixed a particular data issue when running a recipe with multiple datasets with multiple data issues.
1.0
Preprocess datasets in fixed order - It would be nice if the preprocessor would run over the datasets in a single task in a fixed order. This will make it easier to see how far you got and if you have now fixed a particular data issue when running a recipe with multiple datasets with multiple data issues.
process
preprocess datasets in fixed order it would be nice if the preprocessor would run over the datasets in a single task in a fixed order this will make it easier to see how far you got and if you have now fixed a particular data issue when running a recipe with multiple datasets with multiple data issues
1