Column schema (name · dtype · range / length / cardinality):

Unnamed: 0     int64    0 – 832k
id             float64  2.49B – 32.1B
type           string   1 distinct value
created_at     string   length 19
repo           string   length 7–112
repo_url       string   length 36–141
action         string   3 distinct values
title          string   length 1–744
labels         string   length 4–574
body           string   length 9–211k
index          string   10 distinct values
text_combine   string   length 96–211k
label          string   2 distinct values
text           string   length 96–188k
binary_label   int64    0 – 1
Sample records:

Unnamed: 0: 96,616
id: 3,971,236,210
type: IssuesEvent
created_at: 2016-05-04 10:56:50
repo: OCHA-DAP/hdx-ckan
repo_url: https://api.github.com/repos/OCHA-DAP/hdx-ckan
action: closed
title: After error on creating a new organization, the URL field is not saved
labels: Priority-Low
body:
Create a new organization. Add name and url BUT NOT description. Click submit. The page is shown with a red error message. The title is still shown but not the URL.
index: 1.0
label: non_process
text (normalized):
after error on creating a new organization the url field is not saved create a new organization add name and url but not description click submit the page is shown with a red error message the title is still shown but not the url
binary_label: 0
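The `text` field looks like a lowercased, punctuation-stripped version of `text_combine` (which is itself `title - body`). Below is a rough reconstruction of that normalization, sufficient to reproduce this record's `text` exactly; the real pipeline evidently also drops URLs, numbers, @mentions, and code fragments, which this sketch does not handle:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, replace punctuation with spaces, collapse whitespace.
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

# Title and body of the first sample record.
title = "After error on creating a new organization, the URL field is not saved"
body = ("Create a new organization. Add name and url BUT NOT description. "
        "Click submit. The page is shown with a red error message. "
        "The title is still shown but not the URL.")
print(normalize(f"{title} - {body}"))
```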
Unnamed: 0: 319,552
id: 9,746,217,614
type: IssuesEvent
created_at: 2019-06-03 11:40:35
repo: conan-io/conan-docker-tools
repo_url: https://api.github.com/repos/conan-io/conan-docker-tools
action: closed
title: Use Docker multi stage to reduce the recipe number
labels: complex: low component: docker priority: low stage: triaging type: feature
body:
### Description of Problem, Request, or Question Months ago @SSE4 commented that we could use [Docker multistage build](https://docs.docker.com/develop/develop-images/multistage-build/) to reduce the number of recipes, but we had some limitation related to Docker Compose version supported by Travis. Since we have migrated to Azure we could re-visit this idea and try to optimize our Docker recipes. All extra arch images, like ARM, could be merged into the default Docker recipe. Related issue #89
index: 1.0
label: non_process
text (normalized):
use docker multi stage to reduce the recipe number description of problem request or question months ago commented that we could use to reduce the number of recipes but we had some limitation related to docker compose version supported by travis since we have migrated to azure we could re visit this idea and try to optimize our docker recipes all extra arch images like arm could be merged into the default docker recipe related issue
binary_label: 0

Unnamed: 0: 21,363
id: 29,194,079,884
type: IssuesEvent
created_at: 2023-05-20 00:31:43
repo: devssa/onde-codar-em-salvador
repo_url: https://api.github.com/repos/devssa/onde-codar-em-salvador
action: closed
title: [Remoto] Broadcom Tools Consultant (CA UIM, ASM y APM) na Coodesh
labels: SALVADOR PYTHON TOMCAT REQUISITOS REMOTO Telecomunicações PROCESSOS GITHUB SHELL JBOSS APACHE SEGURANÇA UMA ESPANHOL QUALIDADE SCRIPT APM NEGÓCIOS CLUSTER MONITORAMENTO SUPORTE Stale
body:
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/consultor-de-ferramentas-broadcom-ca-uim-asm-y-apm-180128446?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Grupo Telefônica</strong> está em busca de <strong><ins>Broadcom Tools Consultant (CA UIM, ASM y APM)</ins></strong> para compor seu time!<br></p> ## Telefônica : <p>Somos uma empresa do Grupo Telefônica, líder em telecomunicações no Brasil. Trabalhamos com o propósito de Digitalizar para Aproximar pessoas, negócios e toda sociedade, construindo uma nação mais conectada e transformando a vida dos brasileiros.&nbsp;</p> <p>Buscamos ampliar a autonomia, a personalização e as escolhas em tempo real dos nossos clientes, colocando-os no comando da sua vida digital, com segurança e confiabilidade – tudo isso com a qualidade que só a Vivo tem.</p><a href='https://coodesh.com/empresas/telefonica'>Veja mais no site</a> ## Habilidades: - Python - Powershell - JBoss - Apache - Monitoramento de aplicações ## Local: 100% Remoto ## Requisitos: - Suporte e gestão nas ferramentas Broadcom CA UIM, APM y ASM; - Configuração e resolução de problemas nas ferramentas de monitoração; - Conhecimentos avançados em upgrade, migração e performance tunning; - Sólido conhecimento em shell script, python; - Experiência significativa com desenvolvimento de scripts de automação e monitoramento; - Conhecimento geral da camada middleware no nível de instalação e configuração tanto autônomo quanto cluster (JBOSS, APACHE, TOMCAT); - Conhecimento de ferramentas de monitoramento e automação; - Experiência com processos Ágeis. ## Diferenciais: - Certificação RedHat; - Falar espanhol. 
## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Broadcom Tools Consultant (CA UIM, ASM y APM) na Telefônica ](https://coodesh.com/vagas/consultor-de-ferramentas-broadcom-ca-uim-asm-y-apm-180128446?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Regime CLT #### Categoria Testes/Q.A
index: 1.0
label: process
text (normalized):
broadcom tools consultant ca uim asm y apm na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a grupo telefônica está em busca de broadcom tools consultant ca uim asm y apm para compor seu time telefônica somos uma empresa do grupo telefônica líder em telecomunicações no brasil trabalhamos com o propósito de digitalizar para aproximar pessoas negócios e toda sociedade construindo uma nação mais conectada e transformando a vida dos brasileiros nbsp buscamos ampliar a autonomia a personalização e as escolhas em tempo real dos nossos clientes colocando os no comando da sua vida digital com segurança e confiabilidade – tudo isso com a qualidade que só a vivo tem habilidades python powershell jboss apache monitoramento de aplicações local remoto requisitos suporte e gestão nas ferramentas broadcom ca uim apm y asm configuração e resolução de problemas nas ferramentas de monitoração conhecimentos avançados em upgrade migração e performance tunning sólido conhecimento em shell script python experiência significativa com desenvolvimento de scripts de automação e monitoramento conhecimento geral da camada middleware no nível de instalação e configuração tanto autônomo quanto cluster jboss apache tomcat conhecimento de ferramentas de monitoramento e automação experiência com processos ágeis diferenciais certificação redhat falar espanhol como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação 
remoto regime clt categoria testes q a
binary_label: 1

Unnamed: 0: 745,884
id: 26,005,472,720
type: IssuesEvent
created_at: 2022-12-20 18:55:45
repo: TalaoDAO/AltMe
repo_url: https://api.github.com/repos/TalaoDAO/AltMe
action: closed
title: New card for Chainborn game
labels: a V3 Priority
body:
issuer is same principle as Tezotopia https://issuer.talao.co/chainborn/membershipcard/xxxxx with xxx is a random number generated by wallet see with hugo for Card design on Figma
index: 1.0
label: non_process
text (normalized):
new card for chainborn game issuer is same principle as tezotopia with xxx is a random number generated by wallet see with hugo for card design on figma
binary_label: 0

Unnamed: 0: 21,773
id: 30,288,350,110
type: IssuesEvent
created_at: 2023-07-09 00:44:56
repo: mikf/gallery-dl
repo_url: https://api.github.com/repos/mikf/gallery-dl
action: closed
title: metadata json file downloaded twice with different name
labels: config postprocessor
body:
Hi when i am downloading file from gfycat its downloading json metadata file twice Here is my config ``` { "extractor": { "base-directory": "F://dled-gallery-dl/", "directory": ["{category}"], "filename": "{category}_{id}{num:?_//}.{extension}", "postprocessors":[ { "name": "metadata", "event": "post", "filename": "{category}_{id}.json", "skip": true } ], "redgifs": { "format": ["sd", "gif"] }, "gfycat": { "filename": "redgifs_{gfyId}.{extension}", "directory": ["{category}"], "format": ["mobile"], "postprocessors":[ { "name": "metadata", "event": "post", "filename": "redgifs_{gfyId}.json", "skip": true } ] }, } } ``` ``` D:\>gallery-dl -v https://gfycat.com/bowedcourageouscommabutterfly-rhardcorenature [gallery-dl][debug] Version 1.25.6 [gallery-dl][debug] Python 3.11.4 - Windows-10-10.0.19045-SP0 [gallery-dl][debug] requests 2.31.0 - urllib3 2.0.3 [gallery-dl][debug] Configuration Files ['%APPDATA%\\gallery-dl\\config.json'] [gallery-dl][debug] Starting DownloadJob for 'https://gfycat.com/bowedcourageouscommabutterfly-rhardcorenature' [gfycat][debug] Using GfycatImageExtractor for 'https://gfycat.com/bowedcourageouscommabutterfly-rhardcorenature' [urllib3.connectionpool][debug] Starting new HTTPS connection (1): api.gfycat.com:443 [urllib3.connectionpool][debug] https://api.gfycat.com:443 "GET /v1/gfycats/bowedcourageouscommabutterfly HTTP/1.1" 200 806 [gfycat][debug] Active postprocessor modules: [MetadataPP, MetadataPP] # F:\\dled-gallery-dl\gfycat\redgifs_bowedcourageouscommabutterfly.mp4 ``` ``` Files downloaded: redgifs_bowedcourageouscommabutterfly.mp4 redgifs_bowedcourageouscommabutterfly.json gfycat_None.json ``` Unfortunately gfycat uses `gfyId` instead of `id` so i have to define postprocessor inside the extractor again. `gfycat_None.json` should not have downloaded.
index: 1.0
label: process
text (normalized):
metadata json file downloaded twice with different name hi when i am downloading file from gfycat its downloading json metadata file twice here is my config extractor base directory f dled gallery dl directory filename category id num extension postprocessors name metadata event post filename category id json skip true redgifs format gfycat filename redgifs gfyid extension directory format postprocessors name metadata event post filename redgifs gfyid json skip true d gallery dl v version python windows requests configuration files starting downloadjob for using gfycatimageextractor for starting new https connection api gfycat com get gfycats bowedcourageouscommabutterfly http active postprocessor modules f dled gallery dl gfycat redgifs bowedcourageouscommabutterfly files downloaded redgifs bowedcourageouscommabutterfly redgifs bowedcourageouscommabutterfly json gfycat none json unfortunately gfycat uses gfyid instead of id so i have to define postprocessor inside the extractor again gfycat none json should not have downloaded
binary_label: 1

Unnamed: 0: 7,817
id: 10,980,533,233
type: IssuesEvent
created_at: 2019-11-30 15:05:22
repo: lyh543/lyh543.github.io
repo_url: https://api.github.com/repos/lyh543/lyh543.github.io
action: opened
title: 数值分析中的数据处理方法 (Data-processing methods in numerical analysis)
labels: /MATLAB/data-process-in-data-analysis/ Gitalk
body:
https://www.lyh543.xyz/MATLAB/data-process-in-data-analysis/ 插值变量之中存在的函数关系,有时不能确定,而是通过获得的数据来找出两个变量间可能存在的连续。 这东西和拟合有点像。 已知 $f(x)$ 的很多点 $x_i, y_i)$,要找一个函数 $P(x) \approx f(x)$。 这里使用的是多段分段函数进行近似。 常用的方法有:线性插值 linear、三次样条插值 spline、三次插值 cubic。推荐使用三次样条插值。 MATLAB 函数:y_n
index: 1.0
label: process
text (normalized):
数值分析中的数据处理方法 插值变量之中存在的函数关系,有时不能确定,而是通过获得的数据来找出两个变量间可能存在的连续。 这东西和拟合有点像。 已知 f x 的很多点 x i y i ,要找一个函数 p x approx f x 。 这里使用的是多段分段函数进行近似。 常用的方法有:线性插值 linear、三次样条插值 spline、三次插值 cubic。推荐使用三次样条插值。 matlab 函数:y n
binary_label: 1

Unnamed: 0: 19,323
id: 25,472,083,158
type: IssuesEvent
created_at: 2022-11-25 11:03:46
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [IDP] [PM] Not able to set up new admin account in the PM
labels: Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
body:
**Pre-condition:** mfa should be disabled in the PM **Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Click on 'Add new admin' button 4. Add admin in the application 5. Click on account activation link 6. Try to set up account without 'Phone number' and Verify **AR:** Not able to set up new admin account in the PM **ER:** Admin should be able to set up their account in the PM ,without entering phone number field.
index: 3.0
label: process
text (normalized):
not able to set up new admin account in the pm pre condition mfa should be disabled in the pm steps login to pm click on admins tab click on add new admin button add admin in the application click on account activation link try to set up account without phone number and verify ar not able to set up new admin account in the pm er admin should be able to set up their account in the pm without entering phone number field
binary_label: 1

Unnamed: 0: 94,435
id: 11,872,434,000
type: IssuesEvent
created_at: 2020-03-26 15:48:42
repo: chapel-lang/chapel
repo_url: https://api.github.com/repos/chapel-lang/chapel
action: closed
title: Submodules in different files
labels: area: Language type: Design
body:
This is a proposal for supporting submodules in different files that was pulled out of https://github.com/chapel-lang/chapel/issues/12923#issuecomment-494433624 It is an alternative to #10946 and #10909 and is expected to resolve #8470. The basic idea is that next to a Chapel file such as `M.chpl` (which contains `module M`) one can place a directory `M/` which contains modules that will be compiled as submodules within `M`. ### Example and Details Directory Layout: ``` main/ main-module.chpl # Uses M M.chpl M/ L.chpl L/ K.chpl ``` Compilation of Main Module: ``` chpl main/main-module.chpl M.chpl ``` * The compiler would implicitly make the modules in `M/` available to code in `M.chpl` (just as they would be with submodules within `M`). As a result, M.chpl could have a call like `L.foo()` which would be allowed in `M.chpl` even without a use statement. M.chpl would need to include a use statement if it wanted to write the call to `L.foo()` as `foo()`. * However these submodules would not be visible to code that `use`s `M` unless `M.chpl` also included `public use L` or similar (note that today `use L` is the same as `public use L` but that may change). * `main-module.chpl` would not be able to use `L` or to refer to it unless M.chpl includes `public use L`, just as it cannot refer to a private submodule. * Lastly, the compiler would consider `L` to be a submodule of `M` for privacy / scoping purposes. In particular that means that code in `L` can refer to private things in `M`. #### Example contents of the files: main/main-module.chpl ``` chapel use M; mFunction(); ``` M.chpl ``` chapel module M { proc mFunction() { L.lFunction(); // L is implicitly visible in M (but see #13536) } } ``` M/L.chpl ``` chapel module L { proc lFunction { K.kFunction(); // K is implicitly visible in M (but see #13536) } } ``` M/L/K.chpl ``` chapel module K { proc kFunction { } } ```
index: 1.0
label: non_process
text (normalized):
submodules in different files this is a proposal for supporting submodules in different files that was pulled out of it is an alternative to and and is expected to resolve the basic idea is that next to a chapel file such as m chpl which contains module m one can place a directory m which contains modules that will be compiled as submodules within m example and details directory layout main main module chpl uses m m chpl m l chpl l k chpl compilation of main module chpl main main module chpl m chpl the compiler would implicitly make the modules in m available to code in m chpl just as they would be with submodules within m as a result m chpl could have a call like l foo which would be allowed in m chpl even without a use statement m chpl would need to include a use statement if it wanted to write the call to l foo as foo however these submodules would not be visible to code that use s m unless m chpl also included public use l or similar note that today use l is the same as public use l but that may change main module chpl would not be able to use l or to refer to it unless m chpl includes public use l just as it cannot refer to a private submodule lastly the compiler would consider l to be a submodule of m for privacy scoping purposes in particular that means that code in l can refer to private things in m example contents of the files main main module chpl chapel use m mfunction m chpl chapel module m proc mfunction l lfunction l is implicitly visible in m but see m l chpl chapel module l proc lfunction k kfunction k is implicitly visible in m but see m l k chpl chapel module k proc kfunction
binary_label: 0

Unnamed: 0: 8,860
id: 11,956,772,800
type: IssuesEvent
created_at: 2020-04-04 12:01:16
repo: knative/serving
repo_url: https://api.github.com/repos/knative/serving
action: closed
title: Upgrade from v0.9.0 to v0.10.0 fails on manifest monitoring-metrics-prometheus.yaml
labels: area/monitoring kind/bug kind/process lifecycle/rotten
body:
## What area Upgrade, monitoring /area monitoring /kind process ## What version of Knative? 0.10.0 ## Expected Behavior `kubectl apply -f monitoring-metrics-prometheus.yaml` is applied without errors ## Actual Behavior The command fails on the following error: ``` The Service "kube-state-metrics" is invalid: spec.clusterIP: Invalid value: "": field is immutable ``` ## Steps to Reproduce the Problem Just apply the upgrade: ```kubectl apply -f https://github.com/knative/serving/releases/download/v0.10.0/monitoring-metrics-prometheus.yaml``` The problem is in the following patch snippet: ```apiVersion: v1 kind: Service metadata: + annotations: + prometheus.io/scrape: "true" labels: app: kube-state-metrics name: kube-state-metrics namespace: knative-monitoring spec: - clusterIP: None ports: - - name: https-main - port: 8443 + - name: http-metrics + port: 8080 protocol: TCP - targetPort: https-main - - name: https-self - port: 9443 + targetPort: http-metrics + - name: telemetry + port: 8081 protocol: TCP - targetPort: https-self + targetPort: telemetry selector: app: kube-state-metrics @@ -6643,7 +6619,7 @@ data: kind: ConfigMap metadata: labels: - serving.knative.dev/release: "v0.9.0" + serving.knative.dev/release: "v0.10.0" name: grafana-custom-config namespace: knative-monitoring ``` The `- clusterIP: None` is removed, and breaks because it cannot be updated with the default value that is `""`
index: 1.0
label: process
text (normalized):
upgrade from to fails on manifest monitoring metrics prometheus yaml what area upgrade monitoring area monitoring kind process what version of knative expected behavior kubectl apply f monitoring metrics prometheus yaml is applied without errors actual behavior the command fails on the following error the service kube state metrics is invalid spec clusterip invalid value field is immutable steps to reproduce the problem just apply the upgrade kubectl apply f the problem is in the following patch snippet apiversion kind service metadata annotations prometheus io scrape true labels app kube state metrics name kube state metrics namespace knative monitoring spec clusterip none ports name https main port name http metrics port protocol tcp targetport https main name https self port targetport http metrics name telemetry port protocol tcp targetport https self targetport telemetry selector app kube state metrics data kind configmap metadata labels serving knative dev release serving knative dev release name grafana custom config namespace knative monitoring the clusterip none is removed and breaks because it cannot be updated with the default value that is
binary_label: 1

Unnamed: 0: 54,912
id: 11,349,980,063
type: IssuesEvent
created_at: 2020-01-24 07:18:40
repo: junha-ahn/memo-server
repo_url: https://api.github.com/repos/junha-ahn/memo-server
action: closed
title: middlewares 구현 (implement middlewares)
labels: On-code work feature major
body:
- [x] 컨테이너 - [x] 응답 통일 - [x] async try catch - [x] 가드 - [x] 값 체크 - [x] 토큰 체크
index: 1.0
label: non_process
text (normalized):
middlewares 구현 컨테이너 응답 통일 async try catch 가드 값 체크 토큰 체크
binary_label: 0

Unnamed: 0: 532,527
id: 15,558,858,036
type: IssuesEvent
created_at: 2021-03-16 10:46:49
repo: kymckay/f21as-project
repo_url: https://api.github.com/repos/kymckay/f21as-project
action: closed
title: Introduce Customer class
labels: category/simulation priority/low type/discussion
body:
Something we've discussed before, but I think this is more relevant now that we are actually "simulating" transactions and will probably want the customer's names for the UI too. This would involve: - [ ] Moving the customer ID into the customer class (generated from their name, accounting for multiple customers with the same initials) - [ ] Linking a Customer instance to each Order instead of just their ID
index: 1.0
label: non_process
text (normalized):
introduce customer class something we ve discussed before but i think this is more relevant now that we are actually simulating transactions and will probably want the customer s names for the ui too this would involve moving the customer id into the customer class generated from their name accounting for multiple customers with the same initials linking a customer instance to each order instead of just their id
binary_label: 0

Unnamed: 0: 22,359
id: 31,074,998,978
type: IssuesEvent
created_at: 2023-08-12 11:22:30
repo: bitfocus/companion-module-requests
repo_url: https://api.github.com/repos/bitfocus/companion-module-requests
action: opened
title: TVL
labels: NOT YET PROCESSED
body:
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: What you would like to be able to make it do from Companion: Direct links or attachments to the ethernet control protocol or API:
index: 1.0
label: process
text (normalized):
tvl i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control what you would like to be able to make it do from companion direct links or attachments to the ethernet control protocol or api
1
12,008
14,738,364,442
IssuesEvent
2021-01-07 04:32:48
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Towne - question - No Usage
anc-external anc-ops anc-process anp-1 ant-support
In GitLab by @kdjstudios on May 14, 2018, 12:08 **Submitted by:** Deb Crown <dcrown@towneanswering.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-14-87128/conversation **Server:** External **Client/Site:** Towne **Account:** NA **Issue:** When I opened SA today I saw the post talking about the ‘no usage’ button. Charlie and I are not really sure when that means. Would this apply to the entire billing cycle or just one customer? And when would you use it?
1.0
Towne - question - No Usage - In GitLab by @kdjstudios on May 14, 2018, 12:08 **Submitted by:** Deb Crown <dcrown@towneanswering.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-14-87128/conversation **Server:** External **Client/Site:** Towne **Account:** NA **Issue:** When I opened SA today I saw the post talking about the ‘no usage’ button. Charlie and I are not really sure when that means. Would this apply to the entire billing cycle or just one customer? And when would you use it?
process
towne question no usage in gitlab by kdjstudios on may submitted by deb crown helpdesk server external client site towne account na issue when i opened sa today i saw the post talking about the ‘no usage’ button charlie and i are not really sure when that means would this apply to the entire billing cycle or just one customer and when would you use it
1
339,704
24,626,383,632
IssuesEvent
2022-10-16 15:19:43
bounswe/bounswe2022group2
https://api.github.com/repos/bounswe/bounswe2022group2
closed
Revising the Requirements : Glossary
priority-medium type-documentation status-waitingresponse requirements
### Issue Description After a very detailed discussion about our requirements in our weekly meeting, we have finalized how to update it. Now, we need to re-write and organize requirements section based on job-share. In that context, I have the responsibility to revise Glossary section. This issue is related to #325. ### Step Details Steps that will be performed: - [x] Add / remove / change the descriptions that does not belong / missing in our project in Glossary section. ### Final Actions After the revision is complete, I will get it reviewed so we can secure it's fully compatible with our project goals. ### Deadline of the Issue 14.10.2022 - 23:59 ### Reviewer Koray Tekin - @Koraytkn ### Deadline for the Review 15.10.2022 - 23:59
1.0
Revising the Requirements : Glossary - ### Issue Description After a very detailed discussion about our requirements in our weekly meeting, we have finalized how to update it. Now, we need to re-write and organize requirements section based on job-share. In that context, I have the responsibility to revise Glossary section. This issue is related to #325. ### Step Details Steps that will be performed: - [x] Add / remove / change the descriptions that does not belong / missing in our project in Glossary section. ### Final Actions After the revision is complete, I will get it reviewed so we can secure it's fully compatible with our project goals. ### Deadline of the Issue 14.10.2022 - 23:59 ### Reviewer Koray Tekin - @Koraytkn ### Deadline for the Review 15.10.2022 - 23:59
non_process
revising the requirements glossary issue description after a very detailed discussion about our requirements in our weekly meeting we have finalized how to update it now we need to re write and organize requirements section based on job share in that context i have the responsibility to revise glossary section this issue is related to step details steps that will be performed add remove change the descriptions that does not belong missing in our project in glossary section final actions after the revision is complete i will get it reviewed so we can secure it s fully compatible with our project goals deadline of the issue reviewer koray tekin koraytkn deadline for the review
0
737,454
25,517,395,873
IssuesEvent
2022-11-28 17:23:53
googleapis/gax-nodejs
https://api.github.com/repos/googleapis/gax-nodejs
opened
Can I set my own custom header for each request?
type: question priority: p3
The question may be a bit silly because I do not have a deep understanding of gax-nodejs and aip. According to the generated code, it seems that the header is fixed with `x-goog-api-client` internally and it is created as Metadata via `metadataBuilder`. Is it impossible to add other headers?
1.0
Can I set my own custom header for each request? - The question may be a bit silly because I do not have a deep understanding of gax-nodejs and aip. According to the generated code, it seems that the header is fixed with `x-goog-api-client` internally and it is created as Metadata via `metadataBuilder`. Is it impossible to add other headers?
non_process
can i set my own custom header for each request the question may be a bit silly because i do not have a deep understanding of gax nodejs and aip according to the generated code it seems that the header is fixed with x goog api client internally and it is created as metadata via metadatabuilder is it impossible to add other headers
0
3,742
6,733,148,464
IssuesEvent
2017-10-18 13:59:41
york-region-tpss/stp
https://api.github.com/repos/york-region-tpss/stp
closed
Tree Planting Detail Form - Detail Items
enhancement process workflow report ui ux
Create region plugin for enhanced media list which allows user to reorder the list by drag and drop. ![image](https://user-images.githubusercontent.com/3499016/30334083-97264e8a-97ac-11e7-9eb7-41a5ef48320b.png)
1.0
Tree Planting Detail Form - Detail Items - Create region plugin for enhanced media list which allows user to reorder the list by drag and drop. ![image](https://user-images.githubusercontent.com/3499016/30334083-97264e8a-97ac-11e7-9eb7-41a5ef48320b.png)
process
tree planting detail form detail items create region plugin for enhanced media list which allows user to reorder the list by drag and drop
1
317,330
27,228,138,917
IssuesEvent
2023-02-21 11:10:32
awslabs/aws-lambda-powertools-typescript
https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript
closed
Maintenance: integration tests for `AppConfigProvider`
area/parameters status/confirmed type/tests
### Summary As part of #1039, we need to implement integration tests for the `AppConfigProvider` which is part of the upcoming Parameters utility. This issue is used to breakdown the larger epic and track progress in a more granular way. ### Why is this needed? To increase confidence around the utility behavior by testing it in a real AWS Lambda execution environment. And also to provide a baseline against potential future regressions. ### Which area does this relate to? Tests, Parameters ### Solution _No response_ ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)
1.0
Maintenance: integration tests for `AppConfigProvider` - ### Summary As part of #1039, we need to implement integration tests for the `AppConfigProvider` which is part of the upcoming Parameters utility. This issue is used to breakdown the larger epic and track progress in a more granular way. ### Why is this needed? To increase confidence around the utility behavior by testing it in a real AWS Lambda execution environment. And also to provide a baseline against potential future regressions. ### Which area does this relate to? Tests, Parameters ### Solution _No response_ ### Acknowledgment - [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets) - [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)
non_process
maintenance integration tests for appconfigprovider summary as part of we need to implement integration tests for the appconfigprovider which is part of the upcoming parameters utility this issue is used to breakdown the larger epic and track progress in a more granular way why is this needed to increase confidence around the utility behavior by testing it in a real aws lambda execution environment and also to provide a baseline against potential future regressions which area does this relate to tests parameters solution no response acknowledgment this request meets should this be considered in other lambda powertools languages i e
0
16,100
20,272,347,620
IssuesEvent
2022-02-15 17:16:13
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
CONTRIBUTING: document Prisma Client workflow of using local link
kind/docs process/candidate topic: prisma-client team/client
Currently the CONTRIBUTING says https://github.com/prisma/prisma/blob/main/CONTRIBUTING.md#prisma-client ``` cd packages/client ts-node fixtures/generate.ts ./fixtures/blog/ --skip-transpile cd fixtures/blog npx prisma db push --skip-generate will create the database structure ts-node main ``` @millsp knows what this is about since is only using another workflow where he has a project locally linked instead. > It's especially useful is you want to work with other libs. > It works super well with running pnpm run watch. This will be really valuable for any (internal/external) contributors to the codebase. It should be easy to document, and really helpful to me.
1.0
CONTRIBUTING: document Prisma Client workflow of using local link - Currently the CONTRIBUTING says https://github.com/prisma/prisma/blob/main/CONTRIBUTING.md#prisma-client ``` cd packages/client ts-node fixtures/generate.ts ./fixtures/blog/ --skip-transpile cd fixtures/blog npx prisma db push --skip-generate will create the database structure ts-node main ``` @millsp knows what this is about since is only using another workflow where he has a project locally linked instead. > It's especially useful is you want to work with other libs. > It works super well with running pnpm run watch. This will be really valuable for any (internal/external) contributors to the codebase. It should be easy to document, and really helpful to me.
process
contributing document prisma client workflow of using local link currently the contributing says cd packages client ts node fixtures generate ts fixtures blog skip transpile cd fixtures blog npx prisma db push skip generate will create the database structure ts node main millsp knows what this is about since is only using another workflow where he has a project locally linked instead it s especially useful is you want to work with other libs it works super well with running pnpm run watch this will be really valuable for any internal external contributors to the codebase it should be easy to document and really helpful to me
1
33,013
2,761,380,153
IssuesEvent
2015-04-28 16:56:15
dobidoberman1/Mystic-5.4.8-Bug-Tracker
https://api.github.com/repos/dobidoberman1/Mystic-5.4.8-Bug-Tracker
closed
Timeless Isle - Court area giving Breathe bar
Medium Priority
Bug name: Timeless Isle - Celestial Court area giving Breathe bar Bug Priority: MEDIUM PRIORITY Bug Type: Area effects Bug description: On Timeless Isle if a player goes into the Celestial Court area and touches the ground gets an underwater breathing bar which eventually is going to kill him. How is it supposed to work?: No breathing bar required and no area effect in general.
1.0
Timeless Isle - Court area giving Breathe bar - Bug name: Timeless Isle - Celestial Court area giving Breathe bar Bug Priority: MEDIUM PRIORITY Bug Type: Area effects Bug description: On Timeless Isle if a player goes into the Celestial Court area and touches the ground gets an underwater breathing bar which eventually is going to kill him. How is it supposed to work?: No breathing bar required and no area effect in general.
non_process
timeless isle court area giving breathe bar bug name timeless isle celestial court area giving breathe bar bug priority medium priority bug type area effects bug description on timeless isle if a player goes into the celestial court area and touches the ground gets an underwater breathing bar which eventually is going to kill him how is it supposed to work no breathing bar required and no area effect in general
0
729,481
25,129,513,838
IssuesEvent
2022-11-09 14:13:30
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
closed
Self service policy: default configuration
Type: Bug Priority: Low Priority: High
**Describe the bug** On a setup without any configuration, default connection profile doesn't have a self service policy configured. On 12.1 (currently `devel`), if you try to reach: * /status: you have an error message * /device-registration: you have a message which tell you that module is not loaded I expect to see same messages in both cases (module not loaded) I can be wrong but I'm almost sure, /status was working previously in such configuration.
2.0
Self service policy: default configuration - **Describe the bug** On a setup without any configuration, default connection profile doesn't have a self service policy configured. On 12.1 (currently `devel`), if you try to reach: * /status: you have an error message * /device-registration: you have a message which tell you that module is not loaded I expect to see same messages in both cases (module not loaded) I can be wrong but I'm almost sure, /status was working previously in such configuration.
non_process
self service policy default configuration describe the bug on a setup without any configuration default connection profile doesn t have a self service policy configured on currently devel if you try to reach status you have an error message device registration you have a message which tell you that module is not loaded i expect to see same messages in both cases module not loaded i can be wrong but i m almost sure status was working previously in such configuration
0
602,954
18,518,397,045
IssuesEvent
2021-10-20 12:46:23
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
ToAsset Balance is Displaying a 0 Balance
priority/P3 QA/No release-notes/exclude feature/wallet OS/Desktop
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description The `toAsset` balance is displaying a `0.00` balance even though the wallet has a balances for the selected `toAsset` https://user-images.githubusercontent.com/40611140/137988849-dce61d31-ed71-4114-a4b6-d0b71f75d90e.mov
1.0
ToAsset Balance is Displaying a 0 Balance - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description The `toAsset` balance is displaying a `0.00` balance even though the wallet has a balances for the selected `toAsset` https://user-images.githubusercontent.com/40611140/137988849-dce61d31-ed71-4114-a4b6-d0b71f75d90e.mov
non_process
toasset balance is displaying a balance have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description the toasset balance is displaying a balance even though the wallet has a balances for the selected toasset
0
112,283
17,087,322,186
IssuesEvent
2021-07-08 13:26:26
jgeraigery/experian-java
https://api.github.com/repos/jgeraigery/experian-java
opened
CVE-2017-15095 (High) detected in jackson-databind-2.9.2.jar
security vulnerability
## CVE-2017-15095 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/9ade2a959068cca30ecfdbb254939af6f67affb1">9ade2a959068cca30ecfdbb254939af6f67affb1</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A deserialization flaw was discovered in the jackson-databind in versions before 2.8.10 and 2.9.1, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper. This issue extends the previous flaw CVE-2017-7525 by blacklisting more classes that could be used maliciously. 
<p>Publish Date: 2018-02-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15095>CVE-2017-15095</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15095">https://nvd.nist.gov/vuln/detail/CVE-2017-15095</a></p> <p>Release Date: 2018-02-06</p> <p>Fix Resolution: 2.8.10,2.9.1</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","packageFilePaths":["/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.10,2.9.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-15095","vulnerabilityDetails":"A deserialization flaw was discovered in the jackson-databind in versions before 2.8.10 and 2.9.1, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method 
of the ObjectMapper. This issue extends the previous flaw CVE-2017-7525 by blacklisting more classes that could be used maliciously.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15095","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2017-15095 (High) detected in jackson-databind-2.9.2.jar - ## CVE-2017-15095 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/9ade2a959068cca30ecfdbb254939af6f67affb1">9ade2a959068cca30ecfdbb254939af6f67affb1</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A deserialization flaw was discovered in the jackson-databind in versions before 2.8.10 and 2.9.1, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper. This issue extends the previous flaw CVE-2017-7525 by blacklisting more classes that could be used maliciously. 
<p>Publish Date: 2018-02-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15095>CVE-2017-15095</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15095">https://nvd.nist.gov/vuln/detail/CVE-2017-15095</a></p> <p>Release Date: 2018-02-06</p> <p>Fix Resolution: 2.8.10,2.9.1</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","packageFilePaths":["/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.10,2.9.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-15095","vulnerabilityDetails":"A deserialization flaw was discovered in the jackson-databind in versions before 2.8.10 and 2.9.1, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method 
of the ObjectMapper. This issue extends the previous flaw CVE-2017-7525 by blacklisting more classes that could be used maliciously.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15095","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file experian java mavenworkspace bis services lib bis services base pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a deserialization flaw was discovered in the jackson databind in versions before and which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readvalue method of the objectmapper this issue extends the previous flaw cve by blacklisting more classes that could be used maliciously publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a deserialization flaw was discovered in the jackson databind in versions before and which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readvalue method of the objectmapper this issue extends the previous flaw cve by blacklisting more classes that could be used maliciously 
vulnerabilityurl
0
27,570
4,321,898,617
IssuesEvent
2016-07-25 12:11:42
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
DurableLongRunningTaskTest.test
Team: Core Type: Test-Failure
``` java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at com.hazelcast.durableexecutor.DurableLongRunningTaskTest$1.run(DurableLongRunningTaskTest.java:46) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:901) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:915) at com.hazelcast.durableexecutor.DurableLongRunningTaskTest.test(DurableLongRunningTaskTest.java:43) ``` https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.7/com.hazelcast$hazelcast/980/testReport/junit/com.hazelcast.durableexecutor/DurableLongRunningTaskTest/test/
1.0
DurableLongRunningTaskTest.test - ``` java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at com.hazelcast.durableexecutor.DurableLongRunningTaskTest$1.run(DurableLongRunningTaskTest.java:46) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:901) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:915) at com.hazelcast.durableexecutor.DurableLongRunningTaskTest.test(DurableLongRunningTaskTest.java:43) ``` https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.7/com.hazelcast$hazelcast/980/testReport/junit/com.hazelcast.durableexecutor/DurableLongRunningTaskTest/test/
non_process
durablelongrunningtasktest test java lang assertionerror null at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert asserttrue assert java at com hazelcast durableexecutor durablelongrunningtasktest run durablelongrunningtasktest java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast durableexecutor durablelongrunningtasktest test durablelongrunningtasktest java
0
125,378
10,341,143,166
IssuesEvent
2019-09-04 00:54:27
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
externally added metadata should get updated
[zube]: To Test kind/bug-qa team/ca
Issue: - Add a new template or service option in metadata and update `data.json` - Check it gets loaded under `v3/rkeAddons` or `v3/rkek8sserviceoptions` - Update it in metadata and push new `data.json` - The objects don't get updated. Note: - the objects would get updated only if they're not builtin (not vendored in)
1.0
externally added metadata should get updated - Issue: - Add a new template or service option in metadata and update `data.json` - Check it gets loaded under `v3/rkeAddons` or `v3/rkek8sserviceoptions` - Update it in metadata and push new `data.json` - The objects don't get updated. Note: - the objects would get updated only if they're not builtin (not vendored in)
non_process
externally added metadata should get updated issue add a new template or service option in metadata and update data json check it gets loaded under rkeaddons or update it in metadata and push new data json the objects don t get updated note the objects would get updated only if they re not builtin not vendored in
0
18,715
6,628,930,960
IssuesEvent
2017-09-24 01:17:53
grpc/grpc
https://api.github.com/repos/grpc/grpc
closed
Flaky-test: "simple_request" end2end test is flaky
area/core infra/BUILDPONY kind/bug/flaky test lang/c
``` sreek@sreek-dev:~/workspace/grpc2 (master) $ tools/run_tests/run_tests.py -lc -r "h2_census_nosec_test simple_request" -ninf -S PASSED: make [time=1.1sec; retries=0:0]2017-04-13 18:18:36,709 detected port server running version 9 2017-04-13 18:18:36,731 my port server is version 9 D0413 18:19:11.328122345 114333 test_config.c:393] test slowdown factor: sanitizer=1, fixture=1, poller=1, total=1 I0413 18:19:11.328234393 114333 ev_epoll_linux.c:95] epoll engine will be using signal: 40 D0413 18:19:11.328243209 114333 ev_posix.c:107] Using polling engine: epoll D0413 18:19:11.328255607 114333 dns_resolver.c:316] Using native dns resolver I0413 18:19:11.328271720 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:12.329793395 114333 simple_request.c:129] client_peer_before_call=localhost:1218 D0413 18:19:12.330483392 114333 simple_request.c:172] server_peer=ipv6:[::1]:43564 D0413 18:19:12.330495670 114333 simple_request.c:176] client_peer=ipv6:[::1]:1218 I0413 18:19:12.330748922 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:12.332535720 114333 simple_request.c:129] client_peer_before_call=localhost:1217 D0413 18:19:13.333030920 114333 simple_request.c:172] server_peer=ipv6:[::1]:45374 D0413 18:19:13.333044547 114333 simple_request.c:176] client_peer=ipv6:[::1]:1217 I0413 18:19:13.333271217 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:13.335598718 114333 simple_request.c:129] client_peer_before_call=localhost:1054 D0413 18:19:13.338018970 114333 simple_request.c:172] server_peer=ipv6:[::1]:43524 D0413 18:19:13.338028661 114333 simple_request.c:176] client_peer=ipv6:[::1]:1054 I0413 18:19:13.338200681 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:13.339511543 114333 simple_request.c:129] client_peer_before_call=localhost:1580 D0413 
18:19:13.347497522 114333 simple_request.c:172] server_peer=ipv6:[::1]:46303 D0413 18:19:13.347510348 114333 simple_request.c:176] client_peer=ipv6:[::1]:1580 I0413 18:19:13.347720327 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:13.349856429 114333 simple_request.c:129] client_peer_before_call=localhost:1050 D0413 18:19:14.347272877 114333 simple_request.c:172] server_peer=ipv6:[::1]:52851 D0413 18:19:14.347289000 114333 simple_request.c:176] client_peer=ipv6:[::1]:1050 I0413 18:19:14.347521018 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:14.349878651 114333 simple_request.c:129] client_peer_before_call=localhost:1052 D0413 18:19:14.350307547 114333 simple_request.c:172] server_peer=ipv6:[::1]:56384 D0413 18:19:14.350319588 114333 simple_request.c:176] client_peer=ipv6:[::1]:1052 I0413 18:19:14.350546230 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:14.351821702 114333 simple_request.c:129] client_peer_before_call=localhost:1189 D0413 18:19:14.352271102 114333 simple_request.c:172] server_peer=ipv6:[::1]:41659 D0413 18:19:14.352283436 114333 simple_request.c:176] client_peer=ipv6:[::1]:1189 I0413 18:19:14.352512416 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:14.355738638 114333 simple_request.c:129] client_peer_before_call=localhost:1197 E0413 18:19:19.357262376 114333 cq_verifier.c:273] cq returned unexpected event: OP_COMPLETE: tag:0x1 OK E0413 18:19:19.357291477 114333 cq_verifier.c:280] expected tags: 0x65 GRPC_OP_COMPLETE result=1 test/core/end2end/tests/simple_request.c:167 ******************************* Caught signal SIGABRT bins/opt/h2_census_nosec_test[0x488bee] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f05d6966330] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f05d63b3c37] 
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f05d63b7028] bins/opt/h2_census_nosec_test[0x429c67] bins/opt/h2_census_nosec_test[0x42569b] bins/opt/h2_census_nosec_test[0x425e4b] bins/opt/h2_census_nosec_test[0x40528f] bins/opt/h2_census_nosec_test[0x4030eb] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f05d639ef45] bins/opt/h2_census_nosec_test[0x403122] FAILED: bins/opt/h2_census_nosec_test simple_request GRPC_POLL_STRATEGY=epoll [ret=-6, pid=114333] FLAKE: bins/opt/h2_census_nosec_test simple_request GRPC_POLL_STRATEGY=epoll [1/637 runs flaked] FAILED: Some tests failed ```
1.0
Flaky-test: "simple_request" end2end test is flaky - ``` sreek@sreek-dev:~/workspace/grpc2 (master) $ tools/run_tests/run_tests.py -lc -r "h2_census_nosec_test simple_request" -ninf -S PASSED: make [time=1.1sec; retries=0:0]2017-04-13 18:18:36,709 detected port server running version 9 2017-04-13 18:18:36,731 my port server is version 9 D0413 18:19:11.328122345 114333 test_config.c:393] test slowdown factor: sanitizer=1, fixture=1, poller=1, total=1 I0413 18:19:11.328234393 114333 ev_epoll_linux.c:95] epoll engine will be using signal: 40 D0413 18:19:11.328243209 114333 ev_posix.c:107] Using polling engine: epoll D0413 18:19:11.328255607 114333 dns_resolver.c:316] Using native dns resolver I0413 18:19:11.328271720 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:12.329793395 114333 simple_request.c:129] client_peer_before_call=localhost:1218 D0413 18:19:12.330483392 114333 simple_request.c:172] server_peer=ipv6:[::1]:43564 D0413 18:19:12.330495670 114333 simple_request.c:176] client_peer=ipv6:[::1]:1218 I0413 18:19:12.330748922 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:12.332535720 114333 simple_request.c:129] client_peer_before_call=localhost:1217 D0413 18:19:13.333030920 114333 simple_request.c:172] server_peer=ipv6:[::1]:45374 D0413 18:19:13.333044547 114333 simple_request.c:176] client_peer=ipv6:[::1]:1217 I0413 18:19:13.333271217 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:13.335598718 114333 simple_request.c:129] client_peer_before_call=localhost:1054 D0413 18:19:13.338018970 114333 simple_request.c:172] server_peer=ipv6:[::1]:43524 D0413 18:19:13.338028661 114333 simple_request.c:176] client_peer=ipv6:[::1]:1054 I0413 18:19:13.338200681 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:13.339511543 114333 simple_request.c:129] 
client_peer_before_call=localhost:1580 D0413 18:19:13.347497522 114333 simple_request.c:172] server_peer=ipv6:[::1]:46303 D0413 18:19:13.347510348 114333 simple_request.c:176] client_peer=ipv6:[::1]:1580 I0413 18:19:13.347720327 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:13.349856429 114333 simple_request.c:129] client_peer_before_call=localhost:1050 D0413 18:19:14.347272877 114333 simple_request.c:172] server_peer=ipv6:[::1]:52851 D0413 18:19:14.347289000 114333 simple_request.c:176] client_peer=ipv6:[::1]:1050 I0413 18:19:14.347521018 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:14.349878651 114333 simple_request.c:129] client_peer_before_call=localhost:1052 D0413 18:19:14.350307547 114333 simple_request.c:172] server_peer=ipv6:[::1]:56384 D0413 18:19:14.350319588 114333 simple_request.c:176] client_peer=ipv6:[::1]:1052 I0413 18:19:14.350546230 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:14.351821702 114333 simple_request.c:129] client_peer_before_call=localhost:1189 D0413 18:19:14.352271102 114333 simple_request.c:172] server_peer=ipv6:[::1]:41659 D0413 18:19:14.352283436 114333 simple_request.c:176] client_peer=ipv6:[::1]:1189 I0413 18:19:14.352512416 114333 simple_request.c:55] Running test: test_invoke_simple_request/chttp2/fullstack+census D0413 18:19:14.355738638 114333 simple_request.c:129] client_peer_before_call=localhost:1197 E0413 18:19:19.357262376 114333 cq_verifier.c:273] cq returned unexpected event: OP_COMPLETE: tag:0x1 OK E0413 18:19:19.357291477 114333 cq_verifier.c:280] expected tags: 0x65 GRPC_OP_COMPLETE result=1 test/core/end2end/tests/simple_request.c:167 ******************************* Caught signal SIGABRT bins/opt/h2_census_nosec_test[0x488bee] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f05d6966330] 
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f05d63b3c37] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f05d63b7028] bins/opt/h2_census_nosec_test[0x429c67] bins/opt/h2_census_nosec_test[0x42569b] bins/opt/h2_census_nosec_test[0x425e4b] bins/opt/h2_census_nosec_test[0x40528f] bins/opt/h2_census_nosec_test[0x4030eb] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f05d639ef45] bins/opt/h2_census_nosec_test[0x403122] FAILED: bins/opt/h2_census_nosec_test simple_request GRPC_POLL_STRATEGY=epoll [ret=-6, pid=114333] FLAKE: bins/opt/h2_census_nosec_test simple_request GRPC_POLL_STRATEGY=epoll [1/637 runs flaked] FAILED: Some tests failed ```
non_process
flaky test simple request test is flaky sreek sreek dev workspace master tools run tests run tests py lc r census nosec test simple request ninf s passed make detected port server running version my port server is version test config c test slowdown factor sanitizer fixture poller total ev epoll linux c epoll engine will be using signal ev posix c using polling engine epoll dns resolver c using native dns resolver simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost simple request c server peer simple request c client peer simple request c running test test invoke simple request fullstack census simple request c client peer before call localhost cq verifier c cq returned unexpected event op complete tag ok cq verifier c expected tags grpc op complete result test core tests simple request c caught signal sigabrt bins 
opt census nosec test lib linux gnu libpthread so lib linux gnu libc so gsignal lib linux gnu libc so abort bins opt census nosec test bins opt census nosec test bins opt census nosec test bins opt census nosec test bins opt census nosec test lib linux gnu libc so libc start main bins opt census nosec test failed bins opt census nosec test simple request grpc poll strategy epoll flake bins opt census nosec test simple request grpc poll strategy epoll failed some tests failed
0
102,681
12,819,668,835
IssuesEvent
2020-07-06 03:02:09
abrahamjuliot/creepjs
https://api.github.com/repos/abrahamjuliot/creepjs
closed
Animate
design
- visitor load (flash then step fade in) - initial load (fade in) - trash, lies, errors (wobble or bounce) - no lies, trash, errors (shadow effect) - hover (light green)
1.0
Animate - - visitor load (flash then step fade in) - initial load (fade in) - trash, lies, errors (wobble or bounce) - no lies, trash, errors (shadow effect) - hover (light green)
non_process
animate visitor load flash then step fade in initial load fade in trash lies errors wobble or bounce no lies trash errors shadow effect hover light green
0
6,772
2,610,278,104
IssuesEvent
2015-02-26 19:28:58
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
closed
Images from other websites copy/paste or drag and drop
auto-migrated Priority-Medium Type-Defect
``` What's the problem? When dragging an image from a website, it keeps the url of that website (thus stealing their bandwidth). It should be uploaded to the new site. What browser are you using? Chrome What version of ScribeFire are you running? 1.6 ``` ----- Original issue reported on code.google.com by `Krystyn....@gmail.com` on 19 May 2011 at 1:42
1.0
Images from other websites copy/paste or drag and drop - ``` What's the problem? When dragging an image from a website, it keeps the url of that website (thus stealing their bandwidth). It should be uploaded to the new site. What browser are you using? Chrome What version of ScribeFire are you running? 1.6 ``` ----- Original issue reported on code.google.com by `Krystyn....@gmail.com` on 19 May 2011 at 1:42
non_process
images from other websites copy paste or drag and drop what s the problem when dragging an image from a website it keeps the url of that website thus stealing their bandwidth it should be uploaded to the new site what browser are you using chrome what version of scribefire are you running original issue reported on code google com by krystyn gmail com on may at
0
1,067
3,536,074,291
IssuesEvent
2016-01-17 00:27:05
MaretEngineering/MROV
https://api.github.com/repos/MaretEngineering/MROV
closed
Change the communications to only use one vertical motor value
Arduino enhancement Processing
The 5th and 6th motors will always have the same number because both vertical motors will always be strung together. In the current setup both are included but one should be removed. BE CAREFUL: needs to be changed for BOTH Arduino and processing. Look at the 5th/6th values in the output string. ![screen shot 2015-11-28 at 8 44 23 pm](https://cloud.githubusercontent.com/assets/6006029/11454949/ea2c2e3c-9610-11e5-9ab1-0a23beb9da4b.png) ![screen shot 2015-11-28 at 8 44 34 pm](https://cloud.githubusercontent.com/assets/6006029/11454950/ea2e2426-9610-11e5-8ac0-77bbf3bd7396.png) ![screen shot 2015-11-28 at 8 44 30 pm](https://cloud.githubusercontent.com/assets/6006029/11454948/ea2a24e8-9610-11e5-87e0-0c8e2af28a1b.png)
1.0
Change the communications to only use one vertical motor value - The 5th and 6th motors will always have the same number because both vertical motors will always be strung together. In the current setup both are included but one should be removed. BE CAREFUL: needs to be changed for BOTH Arduino and processing. Look at the 5th/6th values in the output string. ![screen shot 2015-11-28 at 8 44 23 pm](https://cloud.githubusercontent.com/assets/6006029/11454949/ea2c2e3c-9610-11e5-9ab1-0a23beb9da4b.png) ![screen shot 2015-11-28 at 8 44 34 pm](https://cloud.githubusercontent.com/assets/6006029/11454950/ea2e2426-9610-11e5-8ac0-77bbf3bd7396.png) ![screen shot 2015-11-28 at 8 44 30 pm](https://cloud.githubusercontent.com/assets/6006029/11454948/ea2a24e8-9610-11e5-87e0-0c8e2af28a1b.png)
process
change the communications to only use one vertical motor value the and motors will always have the same number because both vertical motors will always be strung together in the current setup both are included but one should be removed be careful needs to be changed for both arduino and processing look at the values in the output string
1
9,834
12,828,983,085
IssuesEvent
2020-07-06 21:43:12
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
reopened
Support for stdout and process arguments in already running processes
area-System.Diagnostics.Process
The API around the `Process` class seems counter intuitive when handling _already running processes_. For the most part, you can access a variety of properties however if you attempt to access something like `StandardOuput` or `StartInfo`, then you hit an issue. via `StandardOuput`: ```csharp var process = Process.GetProcessesByName("existingProcess")[0]; var outputStream = process.StandardOuput; // Throws InvalidOperationException: "StandardOut has not been redirected or the process hasn't started yet." ``` via `StartInfo.Arguments`: ```csharp var process = Process.GetProcessesByName("existingProcess")[0]; var arguments = process.StartInfo.Arguments; // Throws InvalidOperationException: "Process was not started by this object, so requested information cannot be determined." ``` I don't think this is a bug per-se, I think this was an intentional design choice though I think it would be great to revisit and see whether the decisions that led to it behaving like this are still applicable. Is it due to a permission/security issue between processes or perhaps a limitation in Windows or just not a very common usecase?
1.0
Support for stdout and process arguments in already running processes - The API around the `Process` class seems counter intuitive when handling _already running processes_. For the most part, you can access a variety of properties however if you attempt to access something like `StandardOuput` or `StartInfo`, then you hit an issue. via `StandardOuput`: ```csharp var process = Process.GetProcessesByName("existingProcess")[0]; var outputStream = process.StandardOuput; // Throws InvalidOperationException: "StandardOut has not been redirected or the process hasn't started yet." ``` via `StartInfo.Arguments`: ```csharp var process = Process.GetProcessesByName("existingProcess")[0]; var arguments = process.StartInfo.Arguments; // Throws InvalidOperationException: "Process was not started by this object, so requested information cannot be determined." ``` I don't think this is a bug per-se, I think this was an intentional design choice though I think it would be great to revisit and see whether the decisions that led to it behaving like this are still applicable. Is it due to a permission/security issue between processes or perhaps a limitation in Windows or just not a very common usecase?
process
support for stdout and process arguments in already running processes the api around the process class seems counter intuitive when handling already running processes for the most part you can access a variety of properties however if you attempt to access something like standardouput or startinfo then you hit an issue via standardouput csharp var process process getprocessesbyname existingprocess var outputstream process standardouput throws invalidoperationexception standardout has not been redirected or the process hasn t started yet via startinfo arguments csharp var process process getprocessesbyname existingprocess var arguments process startinfo arguments throws invalidoperationexception process was not started by this object so requested information cannot be determined i don t think this is a bug per se i think this was an intentional design choice though i think it would be great to revisit and see whether the decisions that led to it behaving like this are still applicable is it due to a permission security issue between processes or perhaps a limitation in windows or just not a very common usecase
1
228
2,653,295,043
IssuesEvent
2015-03-16 22:21:07
arduino/Arduino
https://api.github.com/repos/arduino/Arduino
closed
Compile error when declaring a pointer-function
Component: Preprocessor
I get a weird compile-error with self-written classes when i declare a function that returns a pointer to that very class. Can't figure out why. UECIDE 0.8.7z36 compiles the example-code (ardubug.ino) without any errors though. Using: Windows7-64bit, Arduino-IDE 1.6.1, Arduino Nano, ATmega328 ( Same problem for all other platines / prozessors, too ) --- Arduino-IDE - Error-Output: ardubug.ino:2:1: error: 'MyClass' does not name a type Fehler beim Kompilieren. --- Code: class MyClass{ public: MyClass(); int value; }; MyClass::MyClass(){ this->value = 101; } MyClass* someProcedure(){ return NULL; } void setup() {} void loop() {}
1.0
Compile error when declaring a pointer-function - I get a weird compile-error with self-written classes when i declare a function that returns a pointer to that very class. Can't figure out why. UECIDE 0.8.7z36 compiles the example-code (ardubug.ino) without any errors though. Using: Windows7-64bit, Arduino-IDE 1.6.1, Arduino Nano, ATmega328 ( Same problem for all other platines / prozessors, too ) --- Arduino-IDE - Error-Output: ardubug.ino:2:1: error: 'MyClass' does not name a type Fehler beim Kompilieren. --- Code: class MyClass{ public: MyClass(); int value; }; MyClass::MyClass(){ this->value = 101; } MyClass* someProcedure(){ return NULL; } void setup() {} void loop() {}
process
compile error when declaring a pointer function i get a weird compile error with self written classes when i declare a function that returns a pointer to that very class can t figure out why uecide compiles the example code ardubug ino without any errors though using arduino ide arduino nano same problem for all other platines prozessors too arduino ide error output ardubug ino error myclass does not name a type fehler beim kompilieren code class myclass public myclass int value myclass myclass this value myclass someprocedure return null void setup void loop
1
19,032
6,664,492,524
IssuesEvent
2017-10-02 20:21:14
dart-lang/build
https://api.github.com/repos/dart-lang/build
closed
Checking for existing outputs fails if an intermediate output is deleted
package:build_runner
Situation: - There is a source file `source.dart` and two phases of builders `.dart` -> `.phase1` and `.phase1` -> `.phase2` - The `.dart_tool` directory does not exist so there is no serialized asset graph - `source.phase1` does *not* exist on disk, `source.phase2` *does* exist on disk. Checking for existing outputs will not see a conflict with `source.phase2` because we only feed inputs that exist on disk [here](https://github.com/dart-lang/build/blob/465060fa2b9ab0f576c90c8385b49922e61c5fa2/build_runner/lib/src/generate/build_impl.dart#L318) - not outputs that we expect will exist. If we run a build we will write `source.phase1` and then get an exception "Cannot overwrite inputs." when trying to write `sources.phase2`.
1.0
Checking for existing outputs fails if an intermediate output is deleted - Situation: - There is a source file `source.dart` and two phases of builders `.dart` -> `.phase1` and `.phase1` -> `.phase2` - The `.dart_tool` directory does not exist so there is no serialized asset graph - `source.phase1` does *not* exist on disk, `source.phase2` *does* exist on disk. Checking for existing outputs will not see a conflict with `source.phase2` because we only feed inputs that exist on disk [here](https://github.com/dart-lang/build/blob/465060fa2b9ab0f576c90c8385b49922e61c5fa2/build_runner/lib/src/generate/build_impl.dart#L318) - not outputs that we expect will exist. If we run a build we will write `source.phase1` and then get an exception "Cannot overwrite inputs." when trying to write `sources.phase2`.
non_process
checking for existing outputs fails if an intermediate output is deleted situation there is a source file source dart and two phases of builders dart and the dart tool directory does not exist so there is no serialized asset graph source does not exist on disk source does exist on disk checking for existing outputs will not see a conflict with source because we only feed inputs that exist on disk not outputs that we expect will exist if we run a build we will write source and then get an exception cannot overwrite inputs when trying to write sources
0
5,981
8,799,283,160
IssuesEvent
2018-12-24 13:12:47
linnovate/root
https://api.github.com/repos/linnovate/root
opened
Can't reset profile picture
2.0.6 Process bug bug
open the profile page. click on replace. pick some picture. click on reset. the picture still stay as the one you replace.
1.0
Can't reset profile picture - open the profile page. click on replace. pick some picture. click on reset. the picture still stay as the one you replace.
process
can t reset profile picture open the profile page click on replace pick some picture click on reset the picture still stay as the one you replace
1
286,562
8,790,124,415
IssuesEvent
2018-12-21 07:45:54
tilezen/vector-datasource
https://api.github.com/repos/tilezen/vector-datasource
closed
Set sort_order for quay, wharf, other new landuse kinds
bug priority medium send to staging
* **What did you see?** Quay has sort_rank of 11. * **What did you expect to see?** Should be higher value, that's just a default which sorts it almost at the bottom of the stack (in this case under industrial). * **What map location are you having problems with?** https://www.openstreetmap.org/way/602259367#map=19/51.52862/-0.24830 QA followup for https://github.com/tilezen/vector-datasource/issues/1423.
1.0
Set sort_order for quay, wharf, other new landuse kinds - * **What did you see?** Quay has sort_rank of 11. * **What did you expect to see?** Should be higher value, that's just a default which sorts it almost at the bottom of the stack (in this case under industrial). * **What map location are you having problems with?** https://www.openstreetmap.org/way/602259367#map=19/51.52862/-0.24830 QA followup for https://github.com/tilezen/vector-datasource/issues/1423.
non_process
set sort order for quay wharf other new landuse kinds what did you see quay has sort rank of what did you expect to see should be higher value that s just a default which sorts it almost at the bottom of the stack in this case under industrial what map location are you having problems with qa followup for
0
343,892
24,789,549,293
IssuesEvent
2022-10-24 12:45:35
hyter99/AI_22-23_L1
https://api.github.com/repos/hyter99/AI_22-23_L1
closed
[REPO] Edit README.md
documentation
Contents: - technologies (NestJS, Prisma, React, Vite) - project description (why its made) - link to wiki - ewentually logo
1.0
[REPO] Edit README.md - Contents: - technologies (NestJS, Prisma, React, Vite) - project description (why its made) - link to wiki - ewentually logo
non_process
edit readme md contents technologies nestjs prisma react vite project description why its made link to wiki ewentually logo
0
17,829
23,769,280,235
IssuesEvent
2022-09-01 15:02:41
googleapis/python-crc32c
https://api.github.com/repos/googleapis/python-crc32c
closed
Warning: a recent release failed
type: process
The following release PRs may have failed: * #147 - The release job was triggered, but has not reported back success. * #124 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
1.0
Warning: a recent release failed - The following release PRs may have failed: * #147 - The release job was triggered, but has not reported back success. * #124 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
process
warning a recent release failed the following release prs may have failed the release job was triggered but has not reported back success the release job is autorelease tagged but expected autorelease published
1
40,886
6,876,933,867
IssuesEvent
2017-11-20 04:39:32
zulip/zulip
https://api.github.com/repos/zulip/zulip
opened
user docs: Add doc for "Video calls".
area: documentation (user) good first issue help wanted
We should add a user doc in the "Sending messages" section called "Video calls", that explains how to use the jitsi link integration. The way you use it is by clicking the video camera icon here, in the compose box: ![image](https://user-images.githubusercontent.com/890911/33002532-a6019e30-cd69-11e7-80cc-69dcac5fcdb6.png)
1.0
user docs: Add doc for "Video calls". - We should add a user doc in the "Sending messages" section called "Video calls", that explains how to use the jitsi link integration. The way you use it is by clicking the video camera icon here, in the compose box: ![image](https://user-images.githubusercontent.com/890911/33002532-a6019e30-cd69-11e7-80cc-69dcac5fcdb6.png)
non_process
user docs add doc for video calls we should add a user doc in the sending messages section called video calls that explains how to use the jitsi link integration the way you use it is by clicking the video camera icon here in the compose box
0
5,207
7,978,207,593
IssuesEvent
2018-07-17 17:35:51
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
[List] Write a design doc for the base cell
[List] type:Process
Definition of done: - There is a design doc for the List component. - The design doc has been reviewed by the team.
1.0
[List] Write a design doc for the base cell - Definition of done: - There is a design doc for the List component. - The design doc has been reviewed by the team.
process
write a design doc for the base cell definition of done there is a design doc for the list component the design doc has been reviewed by the team
1
21,345
29,145,912,003
IssuesEvent
2023-05-18 02:52:22
rosepearson/GeoFabrics
https://api.github.com/repos/rosepearson/GeoFabrics
closed
Revisit asserts
process
Revisit asserts are: 1. might be removed during optimization if compiled 2. raising an error may be more apporpiate. @luke noted that the following warning from SonarLint applies to all assert statements: "In production environments, the python -O optimization flag is often used, which bypasses assert statements." So, ensure asserts are still covered by a check if moving to compiled code with optimisation @jennan noted I have been using asserts where raising an error may be more appropriate! https://towardsdatascience.com/practical-python-try-except-and-assert-7117355ccaab#:~:text=The%20try%20and%20except%20blocks,an%20example%20of%20defensive%20programming.
1.0
Revisit asserts - Revisit asserts are: 1. might be removed during optimization if compiled 2. raising an error may be more apporpiate. @luke noted that the following warning from SonarLint applies to all assert statements: "In production environments, the python -O optimization flag is often used, which bypasses assert statements." So, ensure asserts are still covered by a check if moving to compiled code with optimisation @jennan noted I have been using asserts where raising an error may be more appropriate! https://towardsdatascience.com/practical-python-try-except-and-assert-7117355ccaab#:~:text=The%20try%20and%20except%20blocks,an%20example%20of%20defensive%20programming.
process
revisit asserts revisit asserts are might be removed during optimization if compiled raising an error may be more apporpiate luke noted that the following warning from sonarlint applies to all assert statements in production environments the python o optimization flag is often used which bypasses assert statements so ensure asserts are still covered by a check if moving to compiled code with optimisation jennan noted i have been using asserts where raising an error may be more appropriate
1
14,440
17,497,854,580
IssuesEvent
2021-08-10 04:45:28
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException failing very often
area-System.ServiceProcess
This failed 2600 times since 2021/6/15. That's 40-50 times a day. As far as I can tell, it will fail 100% of the time, although only in OuterLoop. It was written backwards. @VincentBu could you please have a look to see why we don't already have an issue for this? Is it in a run you don't normally look at?
1.0
Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException failing very often - This failed 2600 times since 2021/6/15. That's 40-50 times a day. As far as I can tell, it will fail 100% of the time, although only in OuterLoop. It was written backwards. @VincentBu could you please have a look to see why we don't already have an issue for this? Is it in a run you don't normally look at?
process
stop falsearg withdependentservices throwsinvalidoperationexception failing very often this failed times since that s times a day as far as i can tell it will fail of the time although only in outerloop it was written backwards vincentbu could you please have a look to see why we don t already have an issue for this is it in a run you don t normally look at
1
16,232
20,767,330,103
IssuesEvent
2022-03-15 22:15:07
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Release checklist 0.53
enhancement P1 process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.53.0) for milestone - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [ ] Upload release artifacts - [ ] Publish release ## Integration - [x] Deploy to VM ## Performance - [ ] Deploy to Kubernetes - [ ] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests - [ ] Migrations tested against mainnet clone ## Previewnet - [ ] Deploy to VM ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM - [ ] Rosetta tests ### Alternatives _No response_
1.0
Release checklist 0.53 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.53.0) for milestone - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [ ] Upload release artifacts - [ ] Publish release ## Integration - [x] Deploy to VM ## Performance - [ ] Deploy to Kubernetes - [ ] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests - [ ] Migrations tested against mainnet clone ## Previewnet - [ ] Deploy to VM ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM - [ ] Rosetta tests ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on nothing for milestone github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm rosetta tests alternatives no response
1
15,113
18,848,891,958
IssuesEvent
2021-11-11 18:05:37
2i2c-org/team-compass
https://api.github.com/repos/2i2c-org/team-compass
closed
[Proposal] Deliverables backlog cleanup and simplification
:label: team-process type: discussion
### Description As a follow-up to our meeting today, I'd like to try an experiment on the Deliverables Backlog. I fear that we have way too many things on that backlog at once, and this makes it cumbersome to navigate and inspect. Are folks OK with (and do you have any feedback for) the following plan: - Remove deliverables from the backlog to reduce the total number, and only keep the deliverables that we want to work on within the next 3 sprints. - Assume that in a given sprint, each team member finishes an average of 2 deliverables. - In the future, limit the number of deliverables on this board to `n_team_members * 2 (deliverables per sprint) * 3 (sprints)` and round up to the nearest 10 (so currently this would be `40`) - In the future, if we wish to add a new deliverable to the backlog, but we are already at this limit, we must remove one from the backlog first. - Stop grouping the backlog by category by default, and just keep it as a flat list. Does anybody object to this? Or have thoughts on how to improve it? **If I don't hear any objections, I'll plan to do this in the next day or so**. ### Value / benefit My hope is that this will keep that backlog more manageable, and encourage us to be realistic in our planning. We can update the limit up or down depending on our experiences of how much we complete on average, but the important thing is that we have a limit in general. ### Implementation details _No response_ ### Tasks to complete _No response_ ### Updates _No response_
1.0
[Proposal] Deliverables backlog cleanup and simplification - ### Description As a follow-up to our meeting today, I'd like to try an experiment on the Deliverables Backlog. I fear that we have way too many things on that backlog at once, and this makes it cumbersome to navigate and inspect. Are folks OK with (and do you have any feedback for) the following plan: - Remove deliverables from the backlog to reduce the total number, and only keep the deliverables that we want to work on within the next 3 sprints. - Assume that in a given sprint, each team member finishes an average of 2 deliverables. - In the future, limit the number of deliverables on this board to `n_team_members * 2 (deliverables per sprint) * 3 (sprints)` and round up to the nearest 10 (so currently this would be `40`) - In the future, if we wish to add a new deliverable to the backlog, but we are already at this limit, we must remove one from the backlog first. - Stop grouping the backlog by category by default, and just keep it as a flat list. Does anybody object to this? Or have thoughts on how to improve it? **If I don't hear any objections, I'll plan to do this in the next day or so**. ### Value / benefit My hope is that this will keep that backlog more manageable, and encourage us to be realistic in our planning. We can update the limit up or down depending on our experiences of how much we complete on average, but the important thing is that we have a limit in general. ### Implementation details _No response_ ### Tasks to complete _No response_ ### Updates _No response_
process
deliverables backlog cleanup and simplification description as a follow up to our meeting today i d like to try an experiment on the deliverables backlog i fear that we have way too many things on that backlog at once and this makes it cumbersome to navigate and inspect are folks ok with and do you have any feedback for the following plan remove deliverables from the backlog to reduce the total number and only keep the deliverables that we want to work on within the next sprints assume that in a given sprint each team member finishes an average of deliverables in the future limit the number of deliverables on this board to n team members deliverables per sprint sprints and round up to the nearest so currently this would be in the future if we wish to add a new deliverable to the backlog but we are already at this limit we must remove one from the backlog first stop grouping the backlog by category by default and just keep it as a flat list does anybody object to this or have thoughts on how to improve it if i don t hear any objections i ll plan to do this in the next day or so value benefit my hope is that this will keep that backlog more manageable and encourage us to be realistic in our planning we can update the limit up or down depending on our experiences of how much we complete on average but the important thing is that we have a limit in general implementation details no response tasks to complete no response updates no response
1
12,869
15,255,077,109
IssuesEvent
2021-02-20 14:41:09
TeamPotry/tutorial_text
https://api.github.com/repos/TeamPotry/tutorial_text
closed
#format support
enhancement in_process
I intend to port Sourcemod's `translations` logic as-is. https://wiki.alliedmods.net/Translations_(SourceMod_Scripting) https://github.com/alliedmodders/sourcemod/blob/master/core/logic/Translator.cpp#L269 ``` "now_set_done" { "#format" "{1:s}" "en" "TEXT VIEW SETTING is now: {1}" } ``` -----
1.0
#format support - I intend to port Sourcemod's `translations` logic as-is. https://wiki.alliedmods.net/Translations_(SourceMod_Scripting) https://github.com/alliedmodders/sourcemod/blob/master/core/logic/Translator.cpp#L269 ``` "now_set_done" { "#format" "{1:s}" "en" "TEXT VIEW SETTING is now: {1}" } ``` -----
process
format support i intend to port sourcemod s translations logic as is now set done format s en text view setting is now
1
14,954
18,435,083,632
IssuesEvent
2021-10-14 12:11:04
opensafely-core/job-server
https://api.github.com/repos/opensafely-core/job-server
opened
Display an audit log to staff for application form actions
application-process
As an admin user, I want to see an audit log of changes to comments on applications, so that I can view any and all changes made.
1.0
Display an audit log to staff for application form actions - As an admin user, I want to see an audit log of changes to comments on applications, so that I can view any and all changes made.
process
display an audit log to staff for application form actions as an admin user i want to see an audit log of changes to comments on applications so that i can view any and all changes made
1
105,602
13,198,452,084
IssuesEvent
2020-08-14 02:31:15
MozillaFoundation/Design
https://api.github.com/repos/MozillaFoundation/Design
closed
Misinfo Monday Week 4 (August 17,2020)
design
### Intro Placeholder ticket for week 4 misinfo monday post. ### Links Tbd
1.0
Misinfo Monday Week 4 (August 17,2020) - ### Intro Placeholder ticket for week 4 misinfo monday post. ### Links Tbd
non_process
misinfo monday week august intro placeholder ticket for week misinfo monday post links tbd
0
219,521
17,097,573,345
IssuesEvent
2021-07-09 06:28:45
eiffel-community/etos
https://api.github.com/repos/eiffel-community/etos
opened
Test runner should send out an environment defined with the URL to the execution space if any
TestRunner
<!-- Filling out the template is required. Any issue that does not include enough information may be closed at the maintainers' discretion. --> ### Description <!-- Describe the proposed or requested change, and explain the type of change. Is it a change to the protocol, to the documentation, or something else? --> If our users are using an external service as an execution space (such as Jenkins, buildbot or teamcity etc) we should provide an EnvironmentDefined that describes the URL to this service. By adding an environment variable that the test runner reads, called "EXECUTION_SPACE_URL" which should hold the URL of the execution space, the test runner can send an environment defined for this. If the environment variable is not set, the test runner should not send any environment defined for this. ### Motivation <!-- Why would you like to see this change? --> Traceability can be hard when using external services together with ETOS. ### Exemplification <!-- Can you think of a concrete example illustrating the impact and value of the change? --> If something goes wrong in the test runner that does not communicate well, for instance a test case that breaks before initializing. Then the users would need to debug it at the source. Without a URL to go with, this can become very difficult. ### Benefits <!-- What would the benefits of introducing this change be? --> Traceability. ### Possible Drawbacks <!-- What are the possible side-effects or negative impacts of the change, and why are they outweighed by the benefits? --> None
1.0
Test runner should send out an environment defined with the URL to the execution space if any - <!-- Filling out the template is required. Any issue that does not include enough information may be closed at the maintainers' discretion. --> ### Description <!-- Describe the proposed or requested change, and explain the type of change. Is it a change to the protocol, to the documentation, or something else? --> If our users are using an external service as an execution space (such as Jenkins, buildbot or teamcity etc) we should provide an EnvironmentDefined that describes the URL to this service. By adding an environment variable that the test runner reads, called "EXECUTION_SPACE_URL" which should hold the URL of the execution space, the test runner can send an environment defined for this. If the environment variable is not set, the test runner should not send any environment defined for this. ### Motivation <!-- Why would you like to see this change? --> Traceability can be hard when using external services together with ETOS. ### Exemplification <!-- Can you think of a concrete example illustrating the impact and value of the change? --> If something goes wrong in the test runner that does not communicate well, for instance a test case that breaks before initializing. Then the users would need to debug it at the source. Without a URL to go with, this can become very difficult. ### Benefits <!-- What would the benefits of introducing this change be? --> Traceability. ### Possible Drawbacks <!-- What are the possible side-effects or negative impacts of the change, and why are they outweighed by the benefits? --> None
non_process
test runner should send out an environment defined with the url to the execution space if any filling out the template is required any issue that does not include enough information may be closed at the maintainers discretion description if our users are using an external service as an execution space such as jenkins buildbot or teamcity etc we should provide an environmentdefined that describes the url to this service by adding an environment variable that the test runner reads called execution space url which should hold the url of the execution space the test runner can send an environment defined for this if the environment variable is not set the test runner should not send any environment defined for this motivation traceability can be hard when using external services together with etos exemplification if something goes wrong in the test runner that does not communicate well for instance a test case that breaks before initializing then the users would need to debug it at the source without a url to go with this can become very difficult benefits traceability possible drawbacks none
0
138,270
20,380,725,486
IssuesEvent
2022-02-21 21:20:47
skni-kod/SKNIFrontEnd
https://api.github.com/repos/skni-kod/SKNIFrontEnd
closed
After logging in, I have no option to edit a project
bug question new-design
When I log in, I don't see an edit button either in the project list or in the project view. Has this been moved to the admin panel, or is it a bug? ![obraz](https://user-images.githubusercontent.com/26147393/139741147-4261827a-0a73-4c53-91c0-638e81a58b1b.png) ![obraz](https://user-images.githubusercontent.com/26147393/139741179-6d875985-58a9-4437-a0ab-3187a67b1b5b.png)
1.0
After logging in, I have no option to edit a project - When I log in, I don't see an edit button either in the project list or in the project view. Has this been moved to the admin panel, or is it a bug? ![obraz](https://user-images.githubusercontent.com/26147393/139741147-4261827a-0a73-4c53-91c0-638e81a58b1b.png) ![obraz](https://user-images.githubusercontent.com/26147393/139741179-6d875985-58a9-4437-a0ab-3187a67b1b5b.png)
non_process
after logging in i have no option to edit a project when i log in i don t see an edit button either in the project list or in the project view has this been moved to the admin panel or is it a bug
0
1,532
2,776,197,137
IssuesEvent
2015-05-04 20:22:37
rethinkdb/rethinkdb
https://api.github.com/repos/rethinkdb/rethinkdb
closed
ENOENT, open 'tls' while resolving "tls"
cp:build JavaScript / Coffee
On `next` as of 04c4af6073eec2dac1512ba0f5f488edab07256f while compiling on arclight: ``` [1/7] BROWSERIFY build/drivers/javascript/rethinkdb.js Error: ENOENT, open 'tls' while resolving "tls" from file /home/daniel/rethinkdb/build/packages/js/net.js make[1]: *** [build/drivers/javascript/rethinkdb.js] Error 1 ``` I had to do `npm install tls` manually to get it working. Is this a missing dependency we need to declare something? Or should we modify the JS driver to work in the absence of the `tls` module? @deontologician @AtnNn any ideas?
1.0
ENOENT, open 'tls' while resolving "tls" - On `next` as of 04c4af6073eec2dac1512ba0f5f488edab07256f while compiling on arclight: ``` [1/7] BROWSERIFY build/drivers/javascript/rethinkdb.js Error: ENOENT, open 'tls' while resolving "tls" from file /home/daniel/rethinkdb/build/packages/js/net.js make[1]: *** [build/drivers/javascript/rethinkdb.js] Error 1 ``` I had to do `npm install tls` manually to get it working. Is this a missing dependency we need to declare something? Or should we modify the JS driver to work in the absence of the `tls` module? @deontologician @AtnNn any ideas?
non_process
enoent open tls while resolving tls on next as of while compiling on arclight browserify build drivers javascript rethinkdb js error enoent open tls while resolving tls from file home daniel rethinkdb build packages js net js make error i had to do npm install tls manually to get it working is this a missing dependency we need to declare something or should we modify the js driver to work in the absence of the tls module deontologician atnnn any ideas
0
123,958
10,291,677,947
IssuesEvent
2019-08-27 13:00:57
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed test: _too_many_cols_direct=false
C-test-failure O-robot
The following tests appear to have failed on master (testrace): _too_many_cols_direct=false You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+_too_many_cols_direct=false). [#1451891](https://teamcity.cockroachdb.com/viewLog.html?buildId=1451891): ``` _too_many_cols_direct=false --- FAIL: testrace/TestImportData/PGDUMP:_too_many_cols_direct=false (0.000s) Test ended in panic. ------- Stdout: ------- I190823 20:59:47.869504 824 sql/event_log.go:130 [n1,client=127.0.0.1:60538,user=root] Event: "create_database", target: 126, info: {DatabaseName:d38 Statement:CREATE DATABASE d38 User:root} W190823 20:59:48.091921 65 sql/schema_changer.go:949 [n1,scExec] waiting to update leases: error with attached stack trace: github.com/cockroachdb/cockroach/pkg/sql.LeaseStore.WaitForOneVersion /go/src/github.com/cockroachdb/cockroach/pkg/sql/lease.go:314 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).waitToUpdateLeases /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1201 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:948 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:964 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1961 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:2226 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1 /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1337 - error with embedded safe details: ID %d is not a table -- arg 1: <sqlbase.ID> - ID 125 is not a table I190823 20:59:48.242579 19441 storage/replica_command.go:284 
[n1,s1,r95/1:/{Table/125-Max}] initiating a split of this range at key /Table/127/1 [r96] (manual) I190823 20:59:48.265602 19440 ccl/importccl/read_import_proc.go:83 [n1,import-distsql-ingest] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I190823 20:59:48.481067 19496 storage/replica_command.go:598 [n1,merge,s1,r75/1:/Table/102{-/1}] initiating a merge of r74:/Table/10{2/1-4} [(n1,s1):1, next=2, gen=32] into this range (lhs+rhs has (size=0 B+28 B qps=0.00+0.00 --> 0.00qps) below threshold (size=28 B, qps=0.00)) I190823 20:59:48.520773 19627 storage/replica_command.go:284 [n1,split,s1,r95/1:/{Table/125-Max}] initiating a split of this range at key /Table/127 [r97] (zone config) I190823 20:59:48.555225 147 storage/queue.go:518 [n1,s1,r9/1:/Table/1{3-4}] rate limited in MaybeAdd (raftlog): context canceled I190823 20:59:48.592190 824 sql/event_log.go:130 [n1,client=127.0.0.1:60538,user=root] Event: "drop_database", target: 126, info: {DatabaseName:d38 Statement:DROP DATABASE d38 User:root DroppedSchemaObjects:[]} ``` Please assign, take a look and update the issue accordingly.
1.0
teamcity: failed test: _too_many_cols_direct=false - The following tests appear to have failed on master (testrace): _too_many_cols_direct=false You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+_too_many_cols_direct=false). [#1451891](https://teamcity.cockroachdb.com/viewLog.html?buildId=1451891): ``` _too_many_cols_direct=false --- FAIL: testrace/TestImportData/PGDUMP:_too_many_cols_direct=false (0.000s) Test ended in panic. ------- Stdout: ------- I190823 20:59:47.869504 824 sql/event_log.go:130 [n1,client=127.0.0.1:60538,user=root] Event: "create_database", target: 126, info: {DatabaseName:d38 Statement:CREATE DATABASE d38 User:root} W190823 20:59:48.091921 65 sql/schema_changer.go:949 [n1,scExec] waiting to update leases: error with attached stack trace: github.com/cockroachdb/cockroach/pkg/sql.LeaseStore.WaitForOneVersion /go/src/github.com/cockroachdb/cockroach/pkg/sql/lease.go:314 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).waitToUpdateLeases /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1201 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:948 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:964 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1961 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:2226 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1 /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1337 - error with embedded safe details: ID %d is not a table -- arg 1: <sqlbase.ID> - ID 125 is not a table I190823 
20:59:48.242579 19441 storage/replica_command.go:284 [n1,s1,r95/1:/{Table/125-Max}] initiating a split of this range at key /Table/127/1 [r96] (manual) I190823 20:59:48.265602 19440 ccl/importccl/read_import_proc.go:83 [n1,import-distsql-ingest] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I190823 20:59:48.481067 19496 storage/replica_command.go:598 [n1,merge,s1,r75/1:/Table/102{-/1}] initiating a merge of r74:/Table/10{2/1-4} [(n1,s1):1, next=2, gen=32] into this range (lhs+rhs has (size=0 B+28 B qps=0.00+0.00 --> 0.00qps) below threshold (size=28 B, qps=0.00)) I190823 20:59:48.520773 19627 storage/replica_command.go:284 [n1,split,s1,r95/1:/{Table/125-Max}] initiating a split of this range at key /Table/127 [r97] (zone config) I190823 20:59:48.555225 147 storage/queue.go:518 [n1,s1,r9/1:/Table/1{3-4}] rate limited in MaybeAdd (raftlog): context canceled I190823 20:59:48.592190 824 sql/event_log.go:130 [n1,client=127.0.0.1:60538,user=root] Event: "drop_database", target: 126, info: {DatabaseName:d38 Statement:DROP DATABASE d38 User:root DroppedSchemaObjects:[]} ``` Please assign, take a look and update the issue accordingly.
non_process
teamcity failed test too many cols direct false the following tests appear to have failed on master testrace too many cols direct false you may want to check too many cols direct false fail testrace testimportdata pgdump too many cols direct false test ended in panic stdout sql event log go event create database target info databasename statement create database user root sql schema changer go waiting to update leases error with attached stack trace github com cockroachdb cockroach pkg sql leasestore waitforoneversion go src github com cockroachdb cockroach pkg sql lease go github com cockroachdb cockroach pkg sql schemachanger waittoupdateleases go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachanger exec go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachanger exec go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachangemanager start go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachangemanager start go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go runtime goexit usr local go src runtime asm s error with embedded safe details id d is not a table arg id is not a table storage replica command go initiating a split of this range at key table manual ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength storage replica command go initiating a merge of table into this range lhs rhs has size b b qps below threshold size b qps storage replica command go initiating a split of this range at key table zone config storage queue go rate limited in maybeadd raftlog context canceled sql event log go event drop database 
target info databasename statement drop database user root droppedschemaobjects please assign take a look and update the issue accordingly
0
5,392
19,433,677,610
IssuesEvent
2021-12-21 14:46:54
elastic/e2e-testing
https://api.github.com/repos/elastic/e2e-testing
closed
Investigate 4 test errors on 8.0
root-cause-analysis Team:Automation
After https://github.com/elastic/e2e-testing/pull/1893, there are 4 errors related to integrations. Let's investigate why they are failing, as it could come from kibana changes, OR an afterTestSuite condition that fails exiting the tests. cc/ @juliaElastic
1.0
Investigate 4 test errors on 8.0 - After https://github.com/elastic/e2e-testing/pull/1893, there are 4 errors related to integrations. Let's investigate why they are failing, as it could come from kibana changes, OR an afterTestSuite condition that fails exiting the tests. cc/ @juliaElastic
non_process
investigate test errors on after there are errors related to integrations let s investigate why they are failing as it could come from kibana changes or an aftertestsuite condition that fails exiting the tests cc juliaelastic
0
13,830
16,592,427,330
IssuesEvent
2021-06-01 09:20:06
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
[vagrant-cloud post-processor] Add option to overwrite existing version
enhancement post-processor/vagrant-cloud remote-plugin/vagrant
#### Feature Description When using vagrant-cloud post-processor to upload a freshly generated box into vagrant cloud, we are getting an exception if the version already exists. ``` ==> vagrant: Running post-processor: vagrant-cloud ==> vagrant (vagrant-cloud): Verifying box is accessible: hlesey/k8s-base vagrant (vagrant-cloud): Box accessible and matches tag ==> vagrant (vagrant-cloud): Creating version: 1.18.2.2 vagrant (vagrant-cloud): Version exists, skipping creation ==> vagrant (vagrant-cloud): Creating provider: virtualbox ==> vagrant (vagrant-cloud): Cleaning up provider vagrant (vagrant-cloud): Provider was not created, not deleting Build 'vagrant' errored: 1 error(s) occurred: * Post-processor failed: Error creating provider: Metadata provider must be unique for version ``` With debug enabled I've got ``` 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API POST: https://vagrantcloud.com/api/v1/box/hlesey/k8s-base/version/1.18.2.1/providers. 
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Body: {"provider":{"name":"virtualbox"}} 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API Response: 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: &{Status:422 Unprocessable Entity StatusCode:422 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache] Connection:[keep-alive] Content-Type:[application/json; charset=utf-8] Date:[Sat, 27 Jun 2020 10:00:21 GMT] Referrer-Policy:[strict-origin-when-cross-origin] Server:[Cowboy] Set-Cookie:[_atlas_session_data=aFRxUkhQcnYzV2tvem5YbnJtQ0NjZ0VxanEwcHBIODNlYURYcDBkQ0pWMzZrcjlOeUdVNE0yVlB2N2l2S0VBUTlJOGQ1YThCZWhFUmpsblpSUlc0TlE9PS0tcEgwSFlFb3FYQVhXQ2Y2cVpjN0ZvUT09--0ad671fa052e76fbb10ceb6866cab4e379ad7143; path=/; expires=Mon, 27 Jul 2020 10:00:22 GMT; secure; HttpOnly] Strict-Transport-Security:[max-age=31536000; includeSubDomains; preload] Via:[1.1 vegur] X-Content-Type-Options:[nosniff] X-Download-Options:[noopen] X-Frame-Options:[SAMEORIGIN] X-Permitted-Cross-Domain-Policies:[none] X-Request-Id:[1f0a23a5-7c4c-4a2a-bdec-1902e591f57c] X-Runtime:[0.132013] X-Vagrantcloud-Rate-Limit:[98/100] X-Xss-Protection:[1; mode=block]] Body:0xc00038e4c0 ContentLength:-1 TransferEncoding:[chunked] Close:false Uncompressed:false Trailer:map[] Request:0xc0000c6600 TLS:0xc0004e3550} ``` One way to mitigate it is to manually remove the provider for this version from vagrantcloud UI and retry the `packer build`. Another way to avoid such exceptions is to increase the version when uploading the new box. In this case we have to announce everyone that consumes this box that a new version is available and they have to adjust their config. It would be nice to have a new parameter for vagrant-cloud post-processor, something like `replace_if_exists` defaulting to false. 
Sample config ``` { "variables": { "cloud_token": "{{ env `VAGRANT_CLOUD_TOKEN` }}", "output_path": "./output" }, "builders": [ { "type": "vagrant", "source_path": "ubuntu/bionic64", "communicator": "ssh", "add_force": true, "provider": "virtualbox", "output_dir": "{{user `output_path`}}" } ], "post-processors": [ [ { "type": "vagrant-cloud", "box_tag": "hlesey/k8s-base", "version": "{{user `version`}}", "access_token": "{{user `cloud_token`}}", "replace_if_exists": true } ] ] } ``` [vagrant cli](https://www.vagrantup.com/docs/cli/cloud.html#cloud-provider-upload) already support replacing existing version, `vagrant cloud provider upload ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION BOX-FILE` #### Use Case(s) As a packer user, I want to be able to replace an existing box version when using vagrant-cloud post-processor.
1.0
[vagrant-cloud post-processor] Add option to overwrite existing version - #### Feature Description When using vagrant-cloud post-processor to upload a freshly generated box into vagrant cloud, we are getting an exception if the version already exists. ``` ==> vagrant: Running post-processor: vagrant-cloud ==> vagrant (vagrant-cloud): Verifying box is accessible: hlesey/k8s-base vagrant (vagrant-cloud): Box accessible and matches tag ==> vagrant (vagrant-cloud): Creating version: 1.18.2.2 vagrant (vagrant-cloud): Version exists, skipping creation ==> vagrant (vagrant-cloud): Creating provider: virtualbox ==> vagrant (vagrant-cloud): Cleaning up provider vagrant (vagrant-cloud): Provider was not created, not deleting Build 'vagrant' errored: 1 error(s) occurred: * Post-processor failed: Error creating provider: Metadata provider must be unique for version ``` With debug enabled I've got ``` 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API POST: https://vagrantcloud.com/api/v1/box/hlesey/k8s-base/version/1.18.2.1/providers. 
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Body: {"provider":{"name":"virtualbox"}} 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API Response: 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: 2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: &{Status:422 Unprocessable Entity StatusCode:422 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache] Connection:[keep-alive] Content-Type:[application/json; charset=utf-8] Date:[Sat, 27 Jun 2020 10:00:21 GMT] Referrer-Policy:[strict-origin-when-cross-origin] Server:[Cowboy] Set-Cookie:[_atlas_session_data=aFRxUkhQcnYzV2tvem5YbnJtQ0NjZ0VxanEwcHBIODNlYURYcDBkQ0pWMzZrcjlOeUdVNE0yVlB2N2l2S0VBUTlJOGQ1YThCZWhFUmpsblpSUlc0TlE9PS0tcEgwSFlFb3FYQVhXQ2Y2cVpjN0ZvUT09--0ad671fa052e76fbb10ceb6866cab4e379ad7143; path=/; expires=Mon, 27 Jul 2020 10:00:22 GMT; secure; HttpOnly] Strict-Transport-Security:[max-age=31536000; includeSubDomains; preload] Via:[1.1 vegur] X-Content-Type-Options:[nosniff] X-Download-Options:[noopen] X-Frame-Options:[SAMEORIGIN] X-Permitted-Cross-Domain-Policies:[none] X-Request-Id:[1f0a23a5-7c4c-4a2a-bdec-1902e591f57c] X-Runtime:[0.132013] X-Vagrantcloud-Rate-Limit:[98/100] X-Xss-Protection:[1; mode=block]] Body:0xc00038e4c0 ContentLength:-1 TransferEncoding:[chunked] Close:false Uncompressed:false Trailer:map[] Request:0xc0000c6600 TLS:0xc0004e3550} ``` One way to mitigate it is to manually remove the provider for this version from vagrantcloud UI and retry the `packer build`. Another way to avoid such exceptions is to increase the version when uploading the new box. In this case we have to announce everyone that consumes this box that a new version is available and they have to adjust their config. It would be nice to have a new parameter for vagrant-cloud post-processor, something like `replace_if_exists` defaulting to false. 
Sample config ``` { "variables": { "cloud_token": "{{ env `VAGRANT_CLOUD_TOKEN` }}", "output_path": "./output" }, "builders": [ { "type": "vagrant", "source_path": "ubuntu/bionic64", "communicator": "ssh", "add_force": true, "provider": "virtualbox", "output_dir": "{{user `output_path`}}" } ], "post-processors": [ [ { "type": "vagrant-cloud", "box_tag": "hlesey/k8s-base", "version": "{{user `version`}}", "access_token": "{{user `cloud_token`}}", "replace_if_exists": true } ] ] } ``` [vagrant cli](https://www.vagrantup.com/docs/cli/cloud.html#cloud-provider-upload) already support replacing existing version, `vagrant cloud provider upload ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION BOX-FILE` #### Use Case(s) As a packer user, I want to be able to replace an existing box version when using vagrant-cloud post-processor.
process
add option to overwrite existing version feature description when using vagrant cloud post processor to upload a freshly generated box into vagrant cloud we are getting an exception if the version already exists vagrant running post processor vagrant cloud vagrant vagrant cloud verifying box is accessible hlesey base vagrant vagrant cloud box accessible and matches tag vagrant vagrant cloud creating version vagrant vagrant cloud version exists skipping creation vagrant vagrant cloud creating provider virtualbox vagrant vagrant cloud cleaning up provider vagrant vagrant cloud provider was not created not deleting build vagrant errored error s occurred post processor failed error creating provider metadata provider must be unique for version with debug enabled i ve got packer post processor vagrant cloud plugin post processor vagrant cloud api post packer post processor vagrant cloud plugin packer post processor vagrant cloud plugin body provider name virtualbox packer post processor vagrant cloud plugin post processor vagrant cloud api response packer post processor vagrant cloud plugin packer post processor vagrant cloud plugin status unprocessable entity statuscode proto http protomajor protominor header map connection content type date referrer policy server set cookie strict transport security via x content type options x download options x frame options x permitted cross domain policies x request id x runtime x vagrantcloud rate limit x xss protection body contentlength transferencoding close false uncompressed false trailer map request tls one way to mitigate it is to manually remove the provider for this version from vagrantcloud ui and retry the packer build another way to avoid such exceptions is to increase the version when uploading the new box in this case we have to announce everyone that consumes this box that a new version is available and they have to adjust their config it would be nice to have a new parameter for vagrant cloud post processor 
something like replace if exists defaulting to false sample config variables cloud token env vagrant cloud token output path output builders type vagrant source path ubuntu communicator ssh add force true provider virtualbox output dir user output path post processors type vagrant cloud box tag hlesey base version user version access token user cloud token replace if exists true already support replacing existing version vagrant cloud provider upload organization box name provider name version box file use case s as a packer user i want to be able to replace an existing box version when using vagrant cloud post processor
1
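The mitigation the report describes — delete the existing provider for the version, then retry the upload — can be sketched as a small decision function. This is a hypothetical illustration of the requested `replace_if_exists` behavior, with made-up function and step names; it is not the real post-processor code:

```python
# Hypothetical sketch of the flow the report asks for: when the provider
# already exists for a version and replace_if_exists is set, delete it
# before re-creating, instead of failing with HTTP 422
# "Provider must be unique for version".

def plan_upload(existing_providers, provider, replace_if_exists):
    steps = []
    if provider in existing_providers:
        if not replace_if_exists:
            # current behavior: the Vagrant Cloud API rejects the duplicate
            raise ValueError("Provider must be unique for version")
        # mirrors the manual workaround of deleting the provider in the UI
        steps.append(f"DELETE provider {provider}")
    steps.append(f"CREATE provider {provider}")
    steps.append(f"UPLOAD box for {provider}")
    return steps
```

With `replace_if_exists` unset the sketch fails exactly the way the API does today; with it set, the duplicate is removed first and the upload proceeds.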
536
3,000,239,988
IssuesEvent
2015-07-23 23:42:05
nodejs/io.js
https://api.github.com/repos/nodejs/io.js
closed
On receiving SIGWINCH, only stdout has its rows/columns updated or resize event fired
confirmed-bug process
I would expect that if both stdout and stderr are bound to `/dev/tty` then resizes would trigger on both `tty` objects, but this is not the case. This can be demonstrated with a simple script: ```javascript function setup (name, tty) { console.error(name, 'started with rows:', tty.rows, 'columns:', tty.columns) tty.on('resize', function () { console.error('\n',name, 'resized to rows:', tty.rows, 'columns:', tty.columns) }) } setup('stdout', process.stdout) setup('stderr', process.stderr) console.error('sleeping for 10 seconds') setTimeout(function () {}, 10000) ``` Run and try resizing the window before the timeout. You'll see update events from `stdout` but none from `stderr`.
1.0
On receiving SIGWINCH, only stdout has its rows/columns updated or resize event fired - I would expect that if both stdout and stderr are bound to `/dev/tty` then resizes would trigger on both `tty` objects, but this is not the case. This can be demonstrated with a simple script: ```javascript function setup (name, tty) { console.error(name, 'started with rows:', tty.rows, 'columns:', tty.columns) tty.on('resize', function () { console.error('\n',name, 'resized to rows:', tty.rows, 'columns:', tty.columns) }) } setup('stdout', process.stdout) setup('stderr', process.stderr) console.error('sleeping for 10 seconds') setTimeout(function () {}, 10000) ``` Run and try resizing the window before the timeout. You'll see update events from `stdout` but none from `stderr`.
process
on receiving sigwinch only stdout has its rows columns updated or resize event fired i would expect that if both stdout and stderr are bound to dev tty then resizes would trigger on both tty objects but this is not the case this can be demonstrated with a simple script javascript function setup name tty console error name started with rows tty rows columns tty columns tty on resize function console error n name resized to rows tty rows columns tty columns setup stdout process stdout setup stderr process stderr console error sleeping for seconds settimeout function run and try resizing the window before the timeout you ll see update events from stdout but none from stderr
1
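The behavior the report expects — a single SIGWINCH refreshing every attached tty, not just stdout — can be sketched with a small dispatcher. This is a Python illustration of the idea rather than Node's tty internals; `FakeTTY` and the handler wiring are hypothetical:

```python
import shutil
import signal  # used only in the commented wiring at the bottom

class FakeTTY:
    """Stand-in for a tty stream; only rows/columns matter here."""
    def __init__(self):
        self.rows, self.columns = 24, 80

def make_resize_handler(ttys, get_size=shutil.get_terminal_size):
    """Return a handler that refreshes *every* registered tty on resize."""
    def handler(signum=None, frame=None):
        size = get_size()
        for tty in ttys:
            tty.columns, tty.rows = size.columns, size.lines
    return handler

# Wiring on a real terminal would look like:
# signal.signal(signal.SIGWINCH, make_resize_handler([sys.stdout, sys.stderr]))
```

One handler call updates all registered streams, which is the symmetry the script above shows is missing between `stdout` and `stderr`.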
203,661
23,164,313,734
IssuesEvent
2022-07-29 21:55:56
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Possible memory leak in ECDiffieHellmanOpenSsl
area-System.Security in-pr
# Problem Creating `ECDiffieHellmanOpenSsl` objects using `ECParameters` causes a memory leak with each new object created. Same is true when calling `ECDiffieHellmanOpenSsl.DeriveKeyFromHmac` with a custom implementation of `ECDiffieHellmanPublicKey`. (see test application below) # Description First found when memory consumption kept rising in linux docker container using the base image `mcr.microsoft.com/dotnet/runtime:5.0.4-buster-slim` while creating new `ECDiffieHellmanOpenSsl` using named `ECParameters` with set ECPoint (like in `BuildECParameters()` below). Replicated on Ubuntu 20.04 (WSL) using the console application below with SDK Version: 5.0.400. Here to monitor the memory consumption the `dotnet-counters` tool was used. Note that the GC Heap Size does not grow, only the Working Set. Since the `TestCreatingECDHWithECParams()` method does not leak (which uses a named curve to define the `ECDiffieHellman` object) but both `TestCreatingECDHWithECParams()` and `TestCustomPublicKeyImplementation()` does, and one big common difference between the two scenarios is what unmanaged code is called to create the `SafeEcKeyHandle`. I suspect the problem is related to the `Interop` calls referenced in the method descriptions below. But I might be wrong. Or have I missed something with the scenarios showcased below? 
# Memory leak console test class ```csharp using System; using System.Globalization; using System.Security.Cryptography; namespace MemLeakTest { class Program { static void Main(string[] args) { Console.WriteLine("Starting test..."); if (args[0] == "namedCurve") TestCreatingECDHWithNamedCurve(); else if (args[0] == "ecParameters") TestCreatingECDHWithECParams(); else if (args[0] == "publicKey") TestCustomPublicKeyImplementation(); else Console.WriteLine("Invalid params: " + args[0]); Console.WriteLine("Exiting test."); } /// <summary> /// NO MEMORY LEAK /// Looks like it ends up calling this: /// https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.EcKey.cs#L31 /// </summary> private static void TestCreatingECDHWithNamedCurve() { Console.WriteLine("Testing creating ECDH with NamedCurves."); for (int i = 0; i < 500000; i++) { using var serverEcdh = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256); } Console.WriteLine("Done!"); Console.ReadLine(); } /// <summary> /// GROWING WORKING SET WHILE EXECUTING. /// Looks like it ends up calling this: /// https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.EcDsa.ImportExport.cs#L22 /// </summary> private static void TestCreatingECDHWithECParams() { Console.WriteLine("Testing creating ECDH with ECParameters."); for (int i = 0; i < 500000; i++) { // Creating the ECDiffieHellman object causes a memory leak using var serverEcdh = ECDiffieHellman.Create(CustomECDiffieHellmanPublicKey.BuildECParameters()); } Console.WriteLine("Done!"); Console.ReadLine(); } /// <summary> /// GROWING WORKING SET WHILE EXECUTING. 
/// Looks like it ends up calling this: /// https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.EcDsa.ImportExport.cs#L22 /// </summary> private static void TestCustomPublicKeyImplementation() { Console.WriteLine("Testing DeriveKeyFromHmac with custom ECDiffieHellmanPublicKey."); using (var userAgentPublicKey = new CustomECDiffieHellmanPublicKey()) using (var serverEcdh = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256)) { for (int i = 0; i < 500000; i++) { // Calling DeriveKeyFromHmac causes a memory leak serverEcdh.DeriveKeyFromHmac(userAgentPublicKey, HashAlgorithmName.SHA256, null); } Console.WriteLine("Done!"); Console.ReadLine(); } } } public sealed class CustomECDiffieHellmanPublicKey : ECDiffieHellmanPublicKey { public override ECParameters ExportExplicitParameters() => BuildECParameters(); public override ECParameters ExportParameters() => BuildECParameters(); protected override void Dispose(bool disposing) { base.Dispose(disposing); } internal static ECParameters BuildECParameters() { var parameters = new ECParameters { Curve = ECCurve.NamedCurves.nistP256, Q = new ECPoint { X = HexToByteArray("fa719e6b556b83d413969196afdf2b07ce1ad14829f48b4c290fe276925148c7"), Y = HexToByteArray("984a4a8d4686f162feefee023bc77184ea705e32bc304f0dbd166a2fe2ed204f") } }; return parameters; } private static byte[] HexToByteArray(string hexString) { byte[] bytes = new byte[hexString.Length / 2]; for (int i = 0; i < hexString.Length; i += 2) { string s = hexString.Substring(i, 2); bytes[i / 2] = byte.Parse(s, NumberStyles.HexNumber, null); } return bytes; } } } ```
True
Possible memory leak in ECDiffieHellmanOpenSsl - # Problem Creating `ECDiffieHellmanOpenSsl` objects using `ECParameters` causes a memory leak with each new object created. Same is true when calling `ECDiffieHellmanOpenSsl.DeriveKeyFromHmac` with a custom implementation of `ECDiffieHellmanPublicKey`. (see test application below) # Description First found when memory consumption kept rising in linux docker container using the base image `mcr.microsoft.com/dotnet/runtime:5.0.4-buster-slim` while creating new `ECDiffieHellmanOpenSsl` using named `ECParameters` with set ECPoint (like in `BuildECParameters()` below). Replicated on Ubuntu 20.04 (WSL) using the console application below with SDK Version: 5.0.400. Here to monitor the memory consumption the `dotnet-counters` tool was used. Note that the GC Heap Size does not grow, only the Working Set. Since the `TestCreatingECDHWithECParams()` method does not leak (which uses a named curve to define the `ECDiffieHellman` object) but both `TestCreatingECDHWithECParams()` and `TestCustomPublicKeyImplementation()` does, and one big common difference between the two scenarios is what unmanaged code is called to create the `SafeEcKeyHandle`. I suspect the problem is related to the `Interop` calls referenced in the method descriptions below. But I might be wrong. Or have I missed something with the scenarios showcased below? 
# Memory leak console test class ```csharp using System; using System.Globalization; using System.Security.Cryptography; namespace MemLeakTest { class Program { static void Main(string[] args) { Console.WriteLine("Starting test..."); if (args[0] == "namedCurve") TestCreatingECDHWithNamedCurve(); else if (args[0] == "ecParameters") TestCreatingECDHWithECParams(); else if (args[0] == "publicKey") TestCustomPublicKeyImplementation(); else Console.WriteLine("Invalid params: " + args[0]); Console.WriteLine("Exiting test."); } /// <summary> /// NO MEMORY LEAK /// Looks like it ends up calling this: /// https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.EcKey.cs#L31 /// </summary> private static void TestCreatingECDHWithNamedCurve() { Console.WriteLine("Testing creating ECDH with NamedCurves."); for (int i = 0; i < 500000; i++) { using var serverEcdh = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256); } Console.WriteLine("Done!"); Console.ReadLine(); } /// <summary> /// GROWING WORKING SET WHILE EXECUTING. /// Looks like it ends up calling this: /// https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.EcDsa.ImportExport.cs#L22 /// </summary> private static void TestCreatingECDHWithECParams() { Console.WriteLine("Testing creating ECDH with ECParameters."); for (int i = 0; i < 500000; i++) { // Creating the ECDiffieHellman object causes a memory leak using var serverEcdh = ECDiffieHellman.Create(CustomECDiffieHellmanPublicKey.BuildECParameters()); } Console.WriteLine("Done!"); Console.ReadLine(); } /// <summary> /// GROWING WORKING SET WHILE EXECUTING. 
/// Looks like it ends up calling this: /// https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.EcDsa.ImportExport.cs#L22 /// </summary> private static void TestCustomPublicKeyImplementation() { Console.WriteLine("Testing DeriveKeyFromHmac with custom ECDiffieHellmanPublicKey."); using (var userAgentPublicKey = new CustomECDiffieHellmanPublicKey()) using (var serverEcdh = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256)) { for (int i = 0; i < 500000; i++) { // Calling DeriveKeyFromHmac causes a memory leak serverEcdh.DeriveKeyFromHmac(userAgentPublicKey, HashAlgorithmName.SHA256, null); } Console.WriteLine("Done!"); Console.ReadLine(); } } } public sealed class CustomECDiffieHellmanPublicKey : ECDiffieHellmanPublicKey { public override ECParameters ExportExplicitParameters() => BuildECParameters(); public override ECParameters ExportParameters() => BuildECParameters(); protected override void Dispose(bool disposing) { base.Dispose(disposing); } internal static ECParameters BuildECParameters() { var parameters = new ECParameters { Curve = ECCurve.NamedCurves.nistP256, Q = new ECPoint { X = HexToByteArray("fa719e6b556b83d413969196afdf2b07ce1ad14829f48b4c290fe276925148c7"), Y = HexToByteArray("984a4a8d4686f162feefee023bc77184ea705e32bc304f0dbd166a2fe2ed204f") } }; return parameters; } private static byte[] HexToByteArray(string hexString) { byte[] bytes = new byte[hexString.Length / 2]; for (int i = 0; i < hexString.Length; i += 2) { string s = hexString.Substring(i, 2); bytes[i / 2] = byte.Parse(s, NumberStyles.HexNumber, null); } return bytes; } } } ```
non_process
possible memory leak in ecdiffiehellmanopenssl problem creating ecdiffiehellmanopenssl objects using ecparameters causes a memory leak with each new object created same is true when calling ecdiffiehellmanopenssl derivekeyfromhmac with a custom implementation of ecdiffiehellmanpublickey see test application below description first found when memory consumption kept rising in linux docker container using the base image mcr microsoft com dotnet runtime buster slim while creating new ecdiffiehellmanopenssl using named ecparameters with set ecpoint like in buildecparameters below replicated on ubuntu wsl using the console application below with sdk version here to monitor the memory consumption the dotnet counters tool was used note that the gc heap size does not grow only the working set since the testcreatingecdhwithecparams method does not leak which uses a named curve to define the ecdiffiehellman object but both testcreatingecdhwithecparams and testcustompublickeyimplementation does and one big common difference between the two scenarios is what unmanaged code is called to create the safeeckeyhandle i suspect the problem is related to the interop calls referenced in the method descriptions below but i might be wrong or have i missed something with the scenarios showcased below memory leak console test class csharp using system using system globalization using system security cryptography namespace memleaktest class program static void main string args console writeline starting test if args namedcurve testcreatingecdhwithnamedcurve else if args ecparameters testcreatingecdhwithecparams else if args publickey testcustompublickeyimplementation else console writeline invalid params args console writeline exiting test no memory leak looks like it ends up calling this private static void testcreatingecdhwithnamedcurve console writeline testing creating ecdh with namedcurves for int i i i using var serverecdh ecdiffiehellman create eccurve namedcurves console writeline 
done console readline growing working set while executing looks like it ends up calling this private static void testcreatingecdhwithecparams console writeline testing creating ecdh with ecparameters for int i i i creating the ecdiffiehellman object causes a memory leak using var serverecdh ecdiffiehellman create customecdiffiehellmanpublickey buildecparameters console writeline done console readline growing working set while executing looks like it ends up calling this private static void testcustompublickeyimplementation console writeline testing derivekeyfromhmac with custom ecdiffiehellmanpublickey using var useragentpublickey new customecdiffiehellmanpublickey using var serverecdh ecdiffiehellman create eccurve namedcurves for int i i i calling derivekeyfromhmac causes a memory leak serverecdh derivekeyfromhmac useragentpublickey hashalgorithmname null console writeline done console readline public sealed class customecdiffiehellmanpublickey ecdiffiehellmanpublickey public override ecparameters exportexplicitparameters buildecparameters public override ecparameters exportparameters buildecparameters protected override void dispose bool disposing base dispose disposing internal static ecparameters buildecparameters var parameters new ecparameters curve eccurve namedcurves q new ecpoint x hextobytearray y hextobytearray return parameters private static byte hextobytearray string hexstring byte bytes new byte for int i i hexstring length i string s hexstring substring i bytes byte parse s numberstyles hexnumber null return bytes
0
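The report finds the leak by watching the working set with `dotnet-counters` across repeated iterations. That run-in-a-loop-and-compare idea can be sketched generically; this Python `tracemalloc` version is illustrative only — it traces managed allocations, and the leak reported here is in unmanaged OpenSSL memory, which matches the symptom that the GC heap stays flat while the working set grows:

```python
import tracemalloc

def leaks_memory(fn, iterations=1000, tolerance=64 * 1024):
    """Rough leak probe: run fn repeatedly and compare traced allocation
    totals before and after the loop. Purely illustrative sketch."""
    tracemalloc.start()
    fn()  # warm-up, so one-time caches are not counted as a leak
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        fn()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (after - before) > tolerance
```

A workload that retains its allocations trips the probe; one that frees them each iteration does not — the same distinction the `namedCurve` vs. `ecParameters` scenarios exhibit at the process level.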
20,440
27,100,573,622
IssuesEvent
2023-02-15 08:19:02
billingran/Newsletter
https://api.github.com/repos/billingran/Newsletter
closed
Sécurité dans la base de données pour l'unicité de l'email
processing... Brief 2
- [ ] Sécurité dans la base de données pour l'unicité de l'email
1.0
Sécurité dans la base de données pour l'unicité de l'email - - [ ] Sécurité dans la base de données pour l'unicité de l'email
process
sécurité dans la base de données pour l unicité de l email sécurité dans la base de données pour l unicité de l email
1
18,022
24,032,784,210
IssuesEvent
2022-09-15 16:18:59
influxdata/telegraf
https://api.github.com/repos/influxdata/telegraf
closed
processors.parser doesn't seem to work with json_v2 data_format
bug area/configuration area/json plugin/parser plugin/processor
### Relevent telegraf.conf ```toml [agent] interval = "1s" flush_interval = "1s" [[inputs.file]] files = ["./foo.json"] data_format = "value" data_type = "string" [[processors.parser]] parse_fields = ["value"] data_format = "json_v2" [[processors.parser.json_v2]] [[processors.parser.json_v2.object]] path = "@this" timestamp_key = "time" timestamp_format = "2006-01-02T15:04:05" timestamp_timezone = "Local" disable_prepend_keys = true [[processors.parser.json_v2.object.field]] path = "test" type = "uint" [[outputs.file]] files = ["stdout"] data_format = "influx" ``` ### Logs from Telegraf ``` 2022-01-02T22:08:47Z I! Starting Telegraf 1.21.1 2022-01-02T22:08:47Z E! [telegraf] Error running agent: Error loading config file telegraf.conf: plugin processors.parser: line 14: configuration specified the fields ["object"], but they weren't used ``` ### System info Telegraf 1.21.1 (git: HEAD 7c9a9c17) (both on Debian 11 and Ubuntu 20.04, amd64); installed via Debian package downloaded from Github Releases page ### Docker not applicable ### Steps to reproduce 1. Place a file `foo.json` into working directory with content e.g. `{"time": "2022-01-01T21:57:10", "test": 123}` 2. Start telegraf with `telegraf --config telegraf.conf` with the config from above 3. Error message is shown ### Expected behavior Telegraf should print the one metric from the JSON file to stdout in influx format every second. ### Actual behavior Telegraf exits with an error because of invalid config. It seems it doesn't recognize that the `object` section must be handed to the json_v2 parser or something. 
### Additional info * When I use `data_format = "json_v2"` in the `input.file` section and place the same config there, data is parsed correctly, so I think the json_v2 config should be OK * If I replace the `data_format` in the `processors.parser` section with `influx`, remove the additional `json_v2` config and place an influx-formatted metric into the input file, then it also works * I'm not 100% sure if the way I specified the config for the json_v2 parser is correct for `processors.parser`. I didn't find any example. If this is a user error on my part it would at least be good to add a correct example to the `processors.parser` Readme
1.0
processors.parser doesn't seem to work with json_v2 data_format - ### Relevent telegraf.conf ```toml [agent] interval = "1s" flush_interval = "1s" [[inputs.file]] files = ["./foo.json"] data_format = "value" data_type = "string" [[processors.parser]] parse_fields = ["value"] data_format = "json_v2" [[processors.parser.json_v2]] [[processors.parser.json_v2.object]] path = "@this" timestamp_key = "time" timestamp_format = "2006-01-02T15:04:05" timestamp_timezone = "Local" disable_prepend_keys = true [[processors.parser.json_v2.object.field]] path = "test" type = "uint" [[outputs.file]] files = ["stdout"] data_format = "influx" ``` ### Logs from Telegraf ``` 2022-01-02T22:08:47Z I! Starting Telegraf 1.21.1 2022-01-02T22:08:47Z E! [telegraf] Error running agent: Error loading config file telegraf.conf: plugin processors.parser: line 14: configuration specified the fields ["object"], but they weren't used ``` ### System info Telegraf 1.21.1 (git: HEAD 7c9a9c17) (both on Debian 11 and Ubuntu 20.04, amd64); installed via Debian package downloaded from Github Releases page ### Docker not applicable ### Steps to reproduce 1. Place a file `foo.json` into working directory with content e.g. `{"time": "2022-01-01T21:57:10", "test": 123}` 2. Start telegraf with `telegraf --config telegraf.conf` with the config from above 3. Error message is shown ### Expected behavior Telegraf should print the one metric from the JSON file to stdout in influx format every second. ### Actual behavior Telegraf exits with an error because of invalid config. It seems it doesn't recognize that the `object` section must be handed to the json_v2 parser or something. 
### Additional info * When I use `data_format = "json_v2"` in the `input.file` section and place the same config there, data is parsed correctly, so I think the json_v2 config should be OK * If I replace the `data_format` in the `processors.parser` section with `influx`, remove the additional `json_v2` config and place an influx-formatted metric into the input file, then it also works * I'm not 100% sure if the way I specified the config for the json_v2 parser is correct for `processors.parser`. I didn't find any example. If this is a user error on my part it would at least be good to add a correct example to the `processors.parser` Readme
process
processors parser doesn t seem to work with json data format relevent telegraf conf toml interval flush interval files data format value data type string parse fields data format json path this timestamp key time timestamp format timestamp timezone local disable prepend keys true path test type uint files data format influx logs from telegraf i starting telegraf e error running agent error loading config file telegraf conf plugin processors parser line configuration specified the fields but they weren t used system info telegraf git head both on debian and ubuntu installed via debian package downloaded from github releases page docker not applicable steps to reproduce place a file foo json into working directory with content e g time test start telegraf with telegraf config telegraf conf with the config from above error message is shown expected behavior telegraf should print the one metric from the json file to stdout in influx format every second actual behavior telegraf exits with an error because of invalid config it seems it doesn t recognize that the object section must be handed to the json parser or something additional info when i use data format json in the input file section and place the same config there data is parsed correctly so i think the json config should be ok if i replace the data format in the processors parser section with influx remove the additional json config and place an influx formatted metric into the input file then it also works i m not sure if the way i specified the config for the json parser is correct for processors parser i didn t find any example if this is a user error on my part it would at least be good to add a correct example to the processors parser readme
1
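What the configuration above intends — the `value` input hands each line over as a raw JSON string, and the `json_v2` stage then extracts `timestamp_key` plus the remaining keys as fields — can be mimicked in a few lines. This is a hypothetical illustration of the two-stage parse, not telegraf's implementation:

```python
import json
from datetime import datetime

def parse_value_field(raw, timestamp_key="time",
                      timestamp_format="%Y-%m-%dT%H:%M:%S"):
    """Split a raw JSON string into (timestamp, fields), the way the
    value -> json_v2 processor chain is meant to behave."""
    obj = json.loads(raw)
    ts = datetime.strptime(obj.pop(timestamp_key), timestamp_format)
    return ts, obj
```

Applied to the sample input file, `parse_value_field('{"time": "2022-01-01T21:57:10", "test": 123}')` yields the timestamp plus `{"test": 123}` as the field set.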
10,461
13,238,224,860
IssuesEvent
2020-08-18 23:42:07
googleapis/repo-automation-bots
https://api.github.com/repos/googleapis/repo-automation-bots
closed
gcf-utils: separate out bots and tools
type: process
When we don't want to deploy tools, we have to add an exception to the build script in gcf utils, like so: https://github.com/googleapis/repo-automation-bots/pull/710 We should create a separate folder for tools vs. bots, so that we don't have to do manual exclusions for tools like the PR above.
1.0
gcf-utils: separate out bots and tools - When we don't want to deploy tools, we have to add an exception to the build script in gcf utils, like so: https://github.com/googleapis/repo-automation-bots/pull/710 We should create a separate folder for tools vs. bots, so that we don't have to do manual exclusions for tools like the PR above.
process
gcf utils separate out bots and tools when we don t want to deploy tools we have to add an exception to the build script in gcf utils like so we should create a separate folder for tools vs bots so that we don t have to do manual exclusions for tools like the pr above
1
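The folder split proposed above also suggests how the build script could select deployables without a per-tool exclusion list: keep everything under a `bots/` directory and ignore `tools/`. A minimal sketch under that assumed layout (folder and package names here are hypothetical; the actual repo layout is up to the maintainers):

```python
def deployable_packages(changed_paths, bots_prefix="bots/"):
    """Given repo-relative file paths, return the bot package names that
    should be deployed -- anything outside bots/ (e.g. tools/) is skipped."""
    return sorted({
        path.split("/")[1]
        for path in changed_paths
        if path.startswith(bots_prefix) and path.count("/") >= 2
    })
```

With this convention, adding a new tool never requires touching the build script, because tools simply live outside the deployed prefix.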
5,958
2,610,218,583
IssuesEvent
2015-02-26 19:09:32
chrsmith/somefinders
https://api.github.com/repos/chrsmith/somefinders
opened
www avaster ru
auto-migrated Priority-Medium Type-Defect
``` '''Аристарх Селезнёв''' Good day, I just can't find .www avaster ru. it was posted here before '''Василько Волков''' Here is a good site where you can download it http://bit.ly/16QEiQk '''Болеслав Васильев''' It asks to enter a mobile phone number! Isn't that dangerous? '''Велислав Поляков''' No, it doesn't affect your balance '''Авдей Блинов''' No, it doesn't affect your balance File information: www avaster ru Uploaded: This month Times downloaded: 141 Rating: 1034 Average download speed: 321 Similar files: 27 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 1:55
1.0
www avaster ru - ``` '''Аристарх Селезнёв''' Good day, I just can't find .www avaster ru. it was posted here before '''Василько Волков''' Here is a good site where you can download it http://bit.ly/16QEiQk '''Болеслав Васильев''' It asks to enter a mobile phone number! Isn't that dangerous? '''Велислав Поляков''' No, it doesn't affect your balance '''Авдей Блинов''' No, it doesn't affect your balance File information: www avaster ru Uploaded: This month Times downloaded: 141 Rating: 1034 Average download speed: 321 Similar files: 27 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 1:55
non_process
www avaster ru аристарх селезнёв good day i just can t find www avaster ru it was posted here before василько волков here is a good site where you can download it болеслав васильев it asks to enter a mobile phone number isn t that dangerous велислав поляков no it doesn t affect your balance авдей блинов no it doesn t affect your balance file information www avaster ru uploaded this month times downloaded rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
0
738
3,214,324,904
IssuesEvent
2015-10-07 00:50:39
broadinstitute/hellbender-dataflow
https://api.github.com/repos/broadinstitute/hellbender-dataflow
opened
Profile and optimize the ReadsPreprocessingPipeline
Dataflow DataflowPreprocessingPipeline profiling
_From @droazen on July 22, 2015 16:28_ Should probably be started only after the tests in https://github.com/broadinstitute/hellbender/issues/695 are in place. _Copied from original issue: broadinstitute/hellbender#696_
1.0
Profile and optimize the ReadsPreprocessingPipeline - _From @droazen on July 22, 2015 16:28_ Should probably be started only after the tests in https://github.com/broadinstitute/hellbender/issues/695 are in place. _Copied from original issue: broadinstitute/hellbender#696_
process
profile and optimize the readspreprocessingpipeline from droazen on july should probably be started only after the tests in are in place copied from original issue broadinstitute hellbender
1
17,283
23,092,720,590
IssuesEvent
2022-07-26 16:29:39
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Flare Stack mod Errors
mod:pypostprocessing postprocess-fail compatibility
### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? loading flare stack mod with py it will throw an error ### Steps to reproduce load all py mods with flare stack mod and it errors ### Additional context ![!!ggg](https://user-images.githubusercontent.com/60069986/177023629-e6bb55cc-8829-4fda-b495-7a143fefd2c0.png) ### Log file [factorio-current.log](https://github.com/pyanodon/pybugreports/files/9034962/factorio-current.log)
2.0
Flare Stack mod Errors - ### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? loading flare stack mod with py it will throw an error ### Steps to reproduce load all py mods with flare stack mod and it errors ### Additional context ![!!ggg](https://user-images.githubusercontent.com/60069986/177023629-e6bb55cc-8829-4fda-b495-7a143fefd2c0.png) ### Log file [factorio-current.log](https://github.com/pyanodon/pybugreports/files/9034962/factorio-current.log)
process
flare stack mod errors mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem loading flare stack mod with py it will throw an error steps to reproduce load all py mods with flare stack mod and it errors additional context log file
1
3,655
6,691,849,560
IssuesEvent
2017-10-09 14:29:34
oasis-tcs/sarif-spec
https://api.github.com/repos/oasis-tcs/sarif-spec
closed
Define driving principles for SARIF effort
process
We should articulate and maintain a set of driving principles for the SARIF format. These principles should define a vision for the format in general and be useful for resolving difficult design decisions. Below is a starter list that we should refine, add to or subtract from. 1. SARIF is primarily designed to advance the industry by providing the best direct production format possible. Aggregating results from other formats is another important scenario but secondary to direct production. 2. SARIF defines a range of data that shall be expressed in order to best support static analysis tooling. The specification describes a JSON implementation of this standard. It should be possible to define other implementations (such as XML). 3. SARIF is designed for static analysis tools and any concept that generally applies for this scenario shall be considered for the format. SARIF can clearly be used for many dynamic analysis scenarios and we should consider augmenting the format for this class of tooling, but not in cases where what is proposed is applicable to the dynamic analysis domain only. 4. SARIF is domain-agnostic; that is, it does not contain objects or properties that are specific to a single domain, such as security or compliance. However, SARIF might define specific values for properties that are specific to a single domain. For example, the proposed result.taxonomies property might define a dictionary entry whose key invokes a standard classification for memory safety issues only. 5. The SARIF design is focused on expressing results as produced by a tool at a specific point-in-time and current excludes detailed thinking related to results management (associated result work item, false positive evaluation, etc.). These concepts may be addressed by defining or proposing 'profiles' that broaden SARIF's design surface area, contingent on progress with core work.
1.0
Define driving principles for SARIF effort - We should articulate and maintain a set of driving principles for the SARIF format. These principles should define a vision for the format in general and be useful for resolving difficult design decisions. Below is a starter list that we should refine, add to or subtract from. 1. SARIF is primarily designed to advance the industry by providing the best direct production format possible. Aggregating results from other formats is another important scenario but secondary to direct production. 2. SARIF defines a range of data that shall be expressed in order to best support static analysis tooling. The specification describes a JSON implementation of this standard. It should be possible to define other implementations (such as XML). 3. SARIF is designed for static analysis tools and any concept that generally applies for this scenario shall be considered for the format. SARIF can clearly be used for many dynamic analysis scenarios and we should consider augmenting the format for this class of tooling, but not in cases where what is proposed is applicable to the dynamic analysis domain only. 4. SARIF is domain-agnostic; that is, it does not contain objects or properties that are specific to a single domain, such as security or compliance. However, SARIF might define specific values for properties that are specific to a single domain. For example, the proposed result.taxonomies property might define a dictionary entry whose key invokes a standard classification for memory safety issues only. 5. The SARIF design is focused on expressing results as produced by a tool at a specific point-in-time and current excludes detailed thinking related to results management (associated result work item, false positive evaluation, etc.). These concepts may be addressed by defining or proposing 'profiles' that broaden SARIF's design surface area, contingent on progress with core work.
process
define driving principles for sarif effort we should articulate and maintain a set of driving principles for the sarif format these principles should define a vision for the format in general and be useful for resolving difficult design decisions below is a starter list that we should refine add to or subtract from sarif is primarily designed to advance the industry by providing the best direct production format possible aggregating results from other formats is another important scenario but secondary to direct production sarif defines a range of data that shall be expressed in order to best support static analysis tooling the specification describes a json implementation of this standard it should be possible to define other implementations such as xml sarif is designed for static analysis tools and any concept that generally applies for this scenario shall be considered for the format sarif can clearly be used for many dynamic analysis scenarios and we should consider augmenting the format for this class of tooling but not in cases where what is proposed is applicable to the dynamic analysis domain only sarif is domain agnostic that is it does not contain objects or properties that are specific to a single domain such as security or compliance however sarif might define specific values for properties that are specific to a single domain for example the proposed result taxonomies property might define a dictionary entry whose key invokes a standard classification for memory safety issues only the sarif design is focused on expressing results as produced by a tool at a specific point in time and current excludes detailed thinking related to results management associated result work item false positive evaluation etc these concepts may be addressed by defining or proposing profiles that broaden sarif s design surface area contingent on progress with core work
1
289,577
31,933,142,482
IssuesEvent
2023-09-19 08:46:16
Trinadh465/linux-4.1.15_CVE-2023-4128
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
opened
CVE-2015-8104 (Medium) detected in linuxlinux-4.6
Mend: dependency security vulnerability
## CVE-2015-8104 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/svm.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The KVM subsystem in the Linux kernel through 4.2.6, and Xen 4.3.x through 4.6.x, allows guest OS users to cause a denial of service (host OS panic or hang) by triggering many #DB (aka Debug) exceptions, related to svm.c. 
<p>Publish Date: 2015-11-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8104>CVE-2015-8104</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2015-8104">https://www.linuxkernelcves.com/cves/CVE-2015-8104</a></p> <p>Release Date: 2015-11-16</p> <p>Fix Resolution: v4.4-rc1,v3.12.51,v3.16.35,v3.2.74,v4.1.17,v4.3.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2015-8104 (Medium) detected in linuxlinux-4.6 - ## CVE-2015-8104 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/svm.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The KVM subsystem in the Linux kernel through 4.2.6, and Xen 4.3.x through 4.6.x, allows guest OS users to cause a denial of service (host OS panic or hang) by triggering many #DB (aka Debug) exceptions, related to svm.c. 
<p>Publish Date: 2015-11-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8104>CVE-2015-8104</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2015-8104">https://www.linuxkernelcves.com/cves/CVE-2015-8104</a></p> <p>Release Date: 2015-11-16</p> <p>Fix Resolution: v4.4-rc1,v3.12.51,v3.16.35,v3.2.74,v4.1.17,v4.3.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files arch kvm svm c vulnerability details the kvm subsystem in the linux kernel through and xen x through x allows guest os users to cause a denial of service host os panic or hang by triggering many db aka debug exceptions related to svm c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
6,552
9,638,866,428
IssuesEvent
2019-05-16 12:17:01
aiidateam/aiida_core
https://api.github.com/repos/aiidateam/aiida_core
opened
`Process.get_builder_restart` should specify to only retrieve incoming links
aiida-core 1.x topic/processes type/bug
If not, for work chain nodes, any call links will also be retrieved and will cause an exception
1.0
`Process.get_builder_restart` should specify to only retrieve incoming links - If not, for work chain nodes, any call links will also be retrieved and will cause an exception
process
process get builder restart should specify to only retrieve incoming links if not for work chain nodes any call links will also be retrieved and will cause an exception
1
62,379
3,184,708,294
IssuesEvent
2015-09-27 16:40:50
fallenswordhelper/fallenswordhelper
https://api.github.com/repos/fallenswordhelper/fallenswordhelper
closed
Quickbuff enhancements not working
bug minor priority:2
20:40 17/Sep/2015 [ Report ] iceman66 says: Sorry to bother you. I\'ve updated FSH to 1507, but for some reason the quick buff changes aren\'t working. I\'ve selected a player from the quick buff menu and they have a bunch of buffs. I don\'t see the red or green next to my list of buffs that signify they have already been cast. Is there something I have to enable? Thank you and sorry
1.0
Quickbuff enhancements not working - 20:40 17/Sep/2015 [ Report ] iceman66 says: Sorry to bother you. I\'ve updated FSH to 1507, but for some reason the quick buff changes aren\'t working. I\'ve selected a player from the quick buff menu and they have a bunch of buffs. I don\'t see the red or green next to my list of buffs that signify they have already been cast. Is there something I have to enable? Thank you and sorry
non_process
quickbuff enhancements not working sep says sorry to bother you i ve updated fsh to but for some reason the quick buff changes aren t working i ve selected a player from the quick buff menu and they have a bunch of buffs i don t see the red or green next to my list of buffs that signify they have already been cast is there something i have to enable thank you and sorry
0
20,608
6,902,559,219
IssuesEvent
2017-11-25 22:13:13
systemd/systemd
https://api.github.com/repos/systemd/systemd
closed
systemd-nspawn uses hard-coded location for resolv.conf
build-system has-pr meson nspawn resolve
At https://github.com/systemd/systemd/blob/master/src/nspawn/nspawn.c#L1413 we hard-code the location for the system installed resolv.conf and check for /usr/lib/systemd/resolv.conf For split-usr systems, resolv.conf is not installed at /usr though, but /lib/systemd/resolv.conf: https://github.com/systemd/systemd/blob/master/src/resolve/meson.build#L140 (*root*libexecdir)
1.0
systemd-nspawn uses hard-coded location for resolv.conf - At https://github.com/systemd/systemd/blob/master/src/nspawn/nspawn.c#L1413 we hard-code the location for the system installed resolv.conf and check for /usr/lib/systemd/resolv.conf For split-usr systems, resolv.conf is not installed at /usr though, but /lib/systemd/resolv.conf: https://github.com/systemd/systemd/blob/master/src/resolve/meson.build#L140 (*root*libexecdir)
non_process
systemd nspawn uses hard coded location for resolv conf at we hard code the location for the system installed resolv conf and check for usr lib systemd resolv conf for split usr systems resolv conf is not installed at usr though but lib systemd resolv conf root libexecdir
0
282,992
30,889,521,067
IssuesEvent
2023-08-04 02:51:05
maddyCode23/linux-4.1.15
https://api.github.com/repos/maddyCode23/linux-4.1.15
reopened
CVE-2020-35519 (High) detected in linux-stable-rtv4.1.33
Mend: dependency security vulnerability
## CVE-2020-35519 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/x25/af_x25.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/x25/af_x25.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An out-of-bounds (OOB) memory access flaw was found in x25_bind in net/x25/af_x25.c in the Linux kernel version v5.12-rc5. A bounds check failure allows a local attacker with a user account on the system to gain access to out-of-bounds memory, leading to a system crash or a leak of internal kernel information. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. 
<p>Publish Date: 2021-05-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-35519>CVE-2020-35519</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-35519">https://www.linuxkernelcves.com/cves/CVE-2020-35519</a></p> <p>Release Date: 2021-05-06</p> <p>Fix Resolution: v4.4.248, v4.9.248, v4.14.211, v4.19.162, v5.4.82, v5.9.13, v5.10-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-35519 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2020-35519 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/x25/af_x25.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/x25/af_x25.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An out-of-bounds (OOB) memory access flaw was found in x25_bind in net/x25/af_x25.c in the Linux kernel version v5.12-rc5. A bounds check failure allows a local attacker with a user account on the system to gain access to out-of-bounds memory, leading to a system crash or a leak of internal kernel information. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. 
<p>Publish Date: 2021-05-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-35519>CVE-2020-35519</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-35519">https://www.linuxkernelcves.com/cves/CVE-2020-35519</a></p> <p>Release Date: 2021-05-06</p> <p>Fix Resolution: v4.4.248, v4.9.248, v4.14.211, v4.19.162, v5.4.82, v5.9.13, v5.10-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net af c net af c vulnerability details an out of bounds oob memory access flaw was found in bind in net af c in the linux kernel version a bounds check failure allows a local attacker with a user account on the system to gain access to out of bounds memory leading to a system crash or a leak of internal kernel information the highest threat from this vulnerability is to confidentiality integrity as well as system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
4,677
7,517,304,707
IssuesEvent
2018-04-12 02:48:22
UnbFeelings/unb-feelings-GQA
https://api.github.com/repos/UnbFeelings/unb-feelings-GQA
closed
Criar templates de documentação
document process wiki
Deve-se identificar quais documentações necessitam de templates para que possam ser criados, como: Resultados das Auditoria, Documentação do Checklist dentro do resultado da Auditoria, Documentação da Entrevista dentro do resultado da Auditoria, entre outros. [Atividade no processo](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Fluxo-de-Trabalho#17-criar-templates-de-documenta%C3%A7%C3%A3o).
1.0
Criar templates de documentação - Deve-se identificar quais documentações necessitam de templates para que possam ser criados, como: Resultados das Auditoria, Documentação do Checklist dentro do resultado da Auditoria, Documentação da Entrevista dentro do resultado da Auditoria, entre outros. [Atividade no processo](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Fluxo-de-Trabalho#17-criar-templates-de-documenta%C3%A7%C3%A3o).
process
criar templates de documentação deve se identificar quais documentações necessitam de templates para que possam ser criados como resultados das auditoria documentação do checklist dentro do resultado da auditoria documentação da entrevista dentro do resultado da auditoria entre outros
1
10,560
13,350,626,418
IssuesEvent
2020-08-30 09:07:57
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
NullPointerException in org.dita.dost.util.KeyScope
bug preprocess/keyref
Reproduced with DITA OT 2.5.2, end user has not yet provided samples. keyref: [keyref] Reading file:/.../mapName.ditamap BUILD FAILED /Applications/oxygen19.1/frameworks/dita/DITA-OT2.x/build.xml:45: The following error occurred while executing this line: /Applications/oxygen19.1/frameworks/dita/DITA-OT2.x/plugins/com.jeppesen.webhelp/build_dita.xml:63: The following error occurred while executing this line: /Applications/oxygen19.1/frameworks/dita/DITA-OT2.x/plugins/org.dita.base/build_preprocess.xml:279: java.lang.NullPointerException at org.dita.dost.util.KeyScope.lambda$getChildScope$32(KeyScope.java:47) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174) at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1351) at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:464) at org.dita.dost.util.KeyScope.getChildScope(KeyScope.java:47) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:258) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:293) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:293) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:293) at org.dita.dost.module.KeyrefModule.collectProcessingTopics(KeyrefModule.java:164) at org.dita.dost.module.KeyrefModule.execute(KeyrefModule.java:121) at org.dita.dost.pipeline.PipelineFacade.execute(PipelineFacade.java:80) at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:230) at 
org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1260) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1260) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405) at org.apache.tools.ant.Project.executeTarget(Project.java:1376) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1260) at org.apache.tools.ant.Main.runBuild(Main.java:857) at org.apache.tools.ant.Main.startAnt(Main.java:236) at org.apache.tools.ant.launch.Launcher.run(Launcher.java:287) at org.apache.tools.ant.launch.Launcher.main(Launcher.java:113)
1.0
NullPointerException in org.dita.dost.util.KeyScope - Reproduced with DITA OT 2.5.2, end user has not yet provided samples. keyref: [keyref] Reading file:/.../mapName.ditamap BUILD FAILED /Applications/oxygen19.1/frameworks/dita/DITA-OT2.x/build.xml:45: The following error occurred while executing this line: /Applications/oxygen19.1/frameworks/dita/DITA-OT2.x/plugins/com.jeppesen.webhelp/build_dita.xml:63: The following error occurred while executing this line: /Applications/oxygen19.1/frameworks/dita/DITA-OT2.x/plugins/org.dita.base/build_preprocess.xml:279: java.lang.NullPointerException at org.dita.dost.util.KeyScope.lambda$getChildScope$32(KeyScope.java:47) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174) at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1351) at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:464) at org.dita.dost.util.KeyScope.getChildScope(KeyScope.java:47) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:258) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:293) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:293) at org.dita.dost.module.KeyrefModule.walkMap(KeyrefModule.java:293) at org.dita.dost.module.KeyrefModule.collectProcessingTopics(KeyrefModule.java:164) at org.dita.dost.module.KeyrefModule.execute(KeyrefModule.java:121) at org.dita.dost.pipeline.PipelineFacade.execute(PipelineFacade.java:80) at 
org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:230) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1260) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1260) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:435) at org.apache.tools.ant.Target.performTasks(Target.java:456) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405) at org.apache.tools.ant.Project.executeTarget(Project.java:1376) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1260) at org.apache.tools.ant.Main.runBuild(Main.java:857) at org.apache.tools.ant.Main.startAnt(Main.java:236) at org.apache.tools.ant.launch.Launcher.run(Launcher.java:287) at org.apache.tools.ant.launch.Launcher.main(Launcher.java:113)
process
nullpointerexception in org dita dost util keyscope reproduced with dita ot end user has not yet provided samples keyref reading file mapname ditamap build failed applications frameworks dita dita x build xml the following error occurred while executing this line applications frameworks dita dita x plugins com jeppesen webhelp build dita xml the following error occurred while executing this line applications frameworks dita dita x plugins org dita base build preprocess xml java lang nullpointerexception at org dita dost util keyscope lambda getchildscope keyscope java at java util stream referencepipeline accept referencepipeline java at java util arraylist arraylistspliterator tryadvance arraylist java at java util stream referencepipeline foreachwithcancel referencepipeline java at java util stream abstractpipeline copyintowithcancel abstractpipeline java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java util stream findops findop evaluatesequential findops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline findfirst referencepipeline java at org dita dost util keyscope getchildscope keyscope java at org dita dost module keyrefmodule walkmap keyrefmodule java at org dita dost module keyrefmodule walkmap keyrefmodule java at org dita dost module keyrefmodule walkmap keyrefmodule java at org dita dost module keyrefmodule walkmap keyrefmodule java at org dita dost module keyrefmodule collectprocessingtopics keyrefmodule java at org dita dost module keyrefmodule execute keyrefmodule java at org dita dost pipeline pipelinefacade execute pipelinefacade java at org dita dost invoker extensibleantinvoker execute extensibleantinvoker java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke 
delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant helper singlecheckexecutor executetargets singlecheckexecutor java at org apache tools ant project executetargets project java at org apache tools ant taskdefs ant execute ant java at org apache tools ant taskdefs calltarget execute calltarget java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant helper singlecheckexecutor executetargets singlecheckexecutor java at org apache tools ant project executetargets project java at org apache tools ant taskdefs ant execute ant java at org apache tools ant taskdefs calltarget execute calltarget java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache 
tools ant project executetarget project java at org apache tools ant helper defaultexecutor executetargets defaultexecutor java at org apache tools ant project executetargets project java at org apache tools ant main runbuild main java at org apache tools ant main startant main java at org apache tools ant launch launcher run launcher java at org apache tools ant launch launcher main launcher java
1
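The NullPointerException in the record above is thrown inside the stream lambda at `KeyScope.getChildScope` (`KeyScope.java:47`) while walking the map. Since the end user has not provided samples, the exact trigger is unconfirmed, but a plausible failure mode for a `findFirst` over child scopes is a comparison against a scope whose name is null. A minimal Python analogue (the dict shape and names are invented for illustration; `AttributeError` plays the role of the Java `NullPointerException`):

```python
# Hypothetical reconstruction of the failure mode: searching child
# scopes by name, where one scope's name may be None (Java: null).
def get_child_scope_unsafe(scopes, name):
    # Blows up with AttributeError as soon as a scope with name=None
    # is reached -- the analogue of the NPE inside the stream lambda.
    return next(s for s in scopes if s["name"].startswith(name))

def get_child_scope_safe(scopes, name):
    # Null-guard inside the predicate, as a defensive fix would do;
    # returns None when no scope matches.
    return next((s for s in scopes
                 if s["name"] is not None and s["name"].startswith(name)),
                None)

scopes = [{"name": None}, {"name": "mapName"}]
```

With this data, the safe variant finds `"mapName"` while the unsafe variant would raise on the first (nameless) scope.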
12,369
14,895,345,404
IssuesEvent
2021-01-21 08:58:36
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Hydra][Audit logs] Event triggered in wrong scenario
Bug Hydra P1 Process: Fixed Process: Tested dev
Event:- 1. PASSWORD_RESET_SUCCEEDED A/R:- Event was triggered at the time of Password help requested by user. E/R:- Event should trigger when user completed reset Password flow.
2.0
[Hydra][Audit logs] Event triggered in wrong scenario - Event:- 1. PASSWORD_RESET_SUCCEEDED A/R:- Event was triggered at the time of Password help requested by user. E/R:- Event should trigger when user completed reset Password flow.
process
event triggered in wrong scenario event password reset succeeded a r event was triggered at the time of password help requested by user e r event should trigger when user completed reset password flow
1
231,007
18,732,678,337
IssuesEvent
2021-11-04 00:41:12
backend-br/vagas
https://api.github.com/repos/backend-br/vagas
closed
[Remote] Software Engineer Back-End at Itaú
CLT Pleno TDD Java Remoto DevOps AWS Spring Redis Cassandra Testes Unitários SQL CI Stale
## Our company **Intera** is looking for a Mid-level or Senior Back-End Software Engineer for Itaú ## Location Remote Remote during the pandemic ## Requirements **Required:** - Solid Back-End knowledge (Java Spring Framework and/or .Net Core); - Database knowledge (e.g. SQL, Cassandra, MongoDB, Redis, DynamoDB); - Knowledge of Event-Driven Architecture; - Knowledge of AWS Cloud; - Experience developing asynchronous microservices communicating through queues and topics (RabbitMQ, IBM MQ, Kafka); - Knowledge of software quality practices (e.g. BDD, TDD, unit tests, integration tests). **Nice to have:** - Building CI/CD pipelines (Jenkins); - Main design patterns; - Experience in Agile and DevOps environments. ## Benefits 🚗Commuting allowance ☕Meal voucher (restaurants) 🥧Food voucher (supermarkets) ❤️Health insurance 😁Dental insurance 📈Profit sharing (based on the bank's results) 💌Life insurance 👩🏾‍🦳Private pension plan 💰Exclusive discounts on our financial products 🤱🏽Extended maternity leave 👶🏾Daycare/nanny allowance (for moms and dads) 📚Education incentive 🏃🏽Gympass 🏄🏼‍♀️Access to the Itaú Clubs (São Sebastião, Guarapiranga and Itanhaém) + Other perks you can learn about during the process ## Hiring CLT ## How to apply **Apply at: https://bit.ly/3mQ0ISv** #### Contract type - CLT #### Level - Mid-level - Senior
1.0
[Remote] Software Engineer Back-End at Itaú - ## Our company **Intera** is looking for a Mid-level or Senior Back-End Software Engineer for Itaú ## Location Remote Remote during the pandemic ## Requirements **Required:** - Solid Back-End knowledge (Java Spring Framework and/or .Net Core); - Database knowledge (e.g. SQL, Cassandra, MongoDB, Redis, DynamoDB); - Knowledge of Event-Driven Architecture; - Knowledge of AWS Cloud; - Experience developing asynchronous microservices communicating through queues and topics (RabbitMQ, IBM MQ, Kafka); - Knowledge of software quality practices (e.g. BDD, TDD, unit tests, integration tests). **Nice to have:** - Building CI/CD pipelines (Jenkins); - Main design patterns; - Experience in Agile and DevOps environments. ## Benefits 🚗Commuting allowance ☕Meal voucher (restaurants) 🥧Food voucher (supermarkets) ❤️Health insurance 😁Dental insurance 📈Profit sharing (based on the bank's results) 💌Life insurance 👩🏾‍🦳Private pension plan 💰Exclusive discounts on our financial products 🤱🏽Extended maternity leave 👶🏾Daycare/nanny allowance (for moms and dads) 📚Education incentive 🏃🏽Gympass 🏄🏼‍♀️Access to the Itaú Clubs (São Sebastião, Guarapiranga and Itanhaém) + Other perks you can learn about during the process ## Hiring CLT ## How to apply **Apply at: https://bit.ly/3mQ0ISv** #### Contract type - CLT #### Level - Mid-level - Senior
non_process
software engineer back end no itaú nossa empresa a intera está em busca de uma pessoa engenheira de software back end pleno ou sênior para o itaú local remoto remoto durante a pandemia requisitos obrigatórios conhecimentos consolidados com backend java framework spring e ou net core conhecimento em banco de dados ex sql cassandra mongo db redis dynamodb conhecimento em arquitetura orientada a eventos conhecimentos em cloud aws conhecimento em desenvolvimento de micro serviços assíncronos por meio de comunicação por filas e tpicos rabbit mq ibm mq kafka conhecimentos em qualidade dos softwares ex bdd tdd testes unitários testes integrados diferenciais construção de esteiras de ci cd jenkins principais desing patterns vivência em ambiente ágil e devops benefícios 🚗vale transporte ☕vale refeição restaurantes 🥧vale alimentação supermercados ❤️assistência médica 😁assistência odontológica 📈plr mediante resultados do banco 💌seguro de vida 👩🏾‍🦳previdência privada 💰descontos exclusivos em nossos produtos financeiros 🤱🏽licença maternidade estendida 👶🏾auxílio creche babá para papais e mamães 📚incentivo a estudos 🏃🏽gympass 🏄🏼‍♀️acesso aos clubes itaú são sebastião guarapiranga e itanhaém algumas vantagens que você pode conhecer durante o processo contratação clt como se candidatar candidate se em regime clt nível pleno sênior
0
222,107
24,684,399,219
IssuesEvent
2022-10-19 01:33:42
marvinruder/rating-tracker-frontend
https://api.github.com/repos/marvinruder/rating-tracker-frontend
opened
serve-14.0.1.tgz: 1 vulnerabilities (highest severity is: 7.5)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serve-14.0.1.tgz</b></p></summary> <p></p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-3517](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | minimatch-3.0.4.tgz | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-3517</summary> ### Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p> <p> Dependency Hierarchy: - serve-14.0.1.tgz (Root Library) - serve-handler-6.1.3.tgz - :x: **minimatch-3.0.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service. 
<p>Publish Date: 2022-10-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-17</p> <p>Fix Resolution: minimatch - 3.0.5</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
serve-14.0.1.tgz: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serve-14.0.1.tgz</b></p></summary> <p></p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-3517](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | minimatch-3.0.4.tgz | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-3517</summary> ### Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p> <p> Dependency Hierarchy: - serve-14.0.1.tgz (Root Library) - serve-handler-6.1.3.tgz - :x: **minimatch-3.0.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service. 
<p>Publish Date: 2022-10-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-17</p> <p>Fix Resolution: minimatch - 3.0.5</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
serve tgz vulnerabilities highest severity is vulnerable library serve tgz vulnerabilities cve severity cvss dependency type fixed in remediation available high minimatch tgz transitive n a details cve vulnerable library minimatch tgz a glob matcher in javascript library home page a href dependency hierarchy serve tgz root library serve handler tgz x minimatch tgz vulnerable library found in base branch main vulnerability details a vulnerability was found in the minimatch package this flaw allows a regular expression denial of service redos when calling the braceexpand function with specific arguments resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimatch step up your open source security game with mend
0
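The vulnerability record above reports a CVSS 3 base score of 7.5 for metrics AV:Network / AC:Low / PR:None / UI:None / Scope:Unchanged / C:None / I:None / A:High. That score can be reproduced from the CVSS v3.0 base-score equations (coefficients taken from the FIRST specification):

```python
import math

# CVSS v3.0 base score for AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C, I, A = 0.0, 0.0, 0.56                   # None / None / High

def roundup(x):
    # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # impact sub-score
impact = 6.42 * iss                        # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
```

The result matches the report's 7.5, confirming the severity column is internally consistent with the listed metric vector.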
7,669
6,151,267,682
IssuesEvent
2017-06-28 01:47:05
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
Botany can cause a fair bit of lag by spamming loads of reagent reactions all at once.
Performance
![](http://i.imgur.com/zlvm4vy.png) I had reed glover here help me with this. If you try and have multiple reactions at once in a plant, and also have maximum plant yield happen, and then harvest several of these trays at once, it gets quite laggy.
True
Botany can cause a fair bit of lag by spamming loads of reagent reactions all at once. - ![](http://i.imgur.com/zlvm4vy.png) I had reed glover here help me with this. If you try and have multiple reactions at once in a plant, and also have maximum plant yield happen, and then harvest several of these trays at once, it gets quite laggy.
non_process
botany can cause a fair bit of lag by spamming loads of reagent reactions all at once i had reed glover here help me with this if you try and have multiple reactions at once in a plant and also have maximum plant yield happen and then harvest several of these trays at once it gets quite laggy
0
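The botany record above attributes the lag to many reagent reactions all running in the single tick when several max-yield trays are harvested at once. A common mitigation for this class of problem is to queue the work and drain a bounded batch per tick; the sketch below illustrates that pattern in Python (tgstation itself is written in DM, and these names are invented, not the game's actual code):

```python
from collections import deque

# Hypothetical mitigation: enqueue reactions at harvest time and
# process a bounded batch per game tick, spreading out the cost.
MAX_REACTIONS_PER_TICK = 5
pending = deque()

def harvest(tray_reactions):
    pending.extend(tray_reactions)  # cheap: just enqueue

def process_tick():
    done = []
    for _ in range(min(MAX_REACTIONS_PER_TICK, len(pending))):
        done.append(pending.popleft())  # run one queued reaction
    return done

harvest([f"reaction-{i}" for i in range(12)])
first_tick = process_tick()
```

Harvesting 12 reactions at once leaves only a bounded amount of work in any single tick, with the remainder deferred.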
23,943
3,871,174,968
IssuesEvent
2016-04-11 08:43:54
selinasee/JCHR4IV6WIWXKAUKECK6744D
https://api.github.com/repos/selinasee/JCHR4IV6WIWXKAUKECK6744D
closed
J2HjbUEPwBwR+Vq7Onem4L/Ls/5IJ0focDi77+xXsmA8d6qt0vhtvCWJeIO4Pa+qkPhz+x81fz7iHTI4JMviUmZXNTtLf4zlb/EZgaQUrgaUb6dvxRJs+b34ncXQdWGTD1I5DgvQLVx45f7x6rUYCrFFraoYYB9rPlbAQ2Q5U5c=
design
kLkfFbHfmFhFJE8P0/XeDMBEdWbsmckKwps8YONZTOO+qgneVrE0I+TSHsQhwCn12wrTmVrt8lPWOMyW3ymgJthoH30lERnRkhfXXwQLY/J8GBdk8iCLA3tltaIqvrulgD8OeMXOoTW2TZIW0/9oGeHdpEMmsducXmT9Hz8y9NfWGDtY4QQBXuy8e2mse6uBB9w2wKBoYd9vBipmc/ZfEgfcNsCgaGHfbwYqZnP2XxIH3DbAoGhh328GKmZz9l8SB9w2wKBoYd9vBipmc/ZfEtqeMJgyY4GpBndMsyv/C1LW3//kgkbPTHwPu8nbSV8mxrNNXt9yNFyhfU4yo5eMYDrU7LyRdZNBIdCOHn2qiOMc05qmtWHn8S9qnZXcdiqWgV8aVsEbmWMH6tyYtIKcU1V/sHlDT1R/RdXXNCEN83vh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL015EYhbg9ZczTyQIxwKlqvHoBYV/rzO+fdHsAZAnh0YvMDPbTJbm7mKLmmsRu7aTK9MazTV7fcjRcoX1OMqOXjGA61Oy8kXWTQSHQjh59qojjHNOaprVh5/Evap2V3HYqlqbive4evYj26FR5FWwXaErh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL017hK9TKL8xP3Q5H/6eZwgsvIuOmbv9KvqYeWVuY+RcthkcDqCG2JTtBofJbZox60QAbKGzeH+IW18pjKQj9xPnxPw65tKnp24GBEIvIOE5H2oOsCKGQ1bT60GhE/cGNTFDM0YtvW3gDZbIRVnvv1lwTh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL0123W0HU+K5fW5UhFJ2yd5XLe/Ox2xYlWvCsDJ3Pgb6ryuLeApl7BDpEtS8r6PnxMZhR96dZzJ0cYVccMZ5p8+Ny+56Xs46v3jhkIHTTMTlHqyWY42YUbY1M4DyETyDwgIjZoQudfuiAu3zXPxqqdphHh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL016qC4vlhJXOWBe42iQYUqrkPuEK8L45YUpJkWyDHHtKIRFcX2+UPVZqpzzpbl9bI3j+/cZdBqXyomPEZig7NoPZViHEfBDGXvQnhp0AuqMPsxmbw31P6ibRLHidB7V9KyAJ+RSucfITcAjIgObcBkWlVf7B5Q09Uf0XV1zQhDfN7l/xnobJiyeOWrRC2LYHDJCCP7XhLcUl5hDaaNydpN2lO+DZCv6EBMh3aXjRME2HYF5xDw0acjd1DWvIVsiskiJU2syZQk/pH9INsol116WTh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL01ztXxinofIF/BObn1WRfqwblE9yocaCY8qt7RWKMu0ocrFlR/3DUHPTyYH8i6L3tJC3DMKIr0aVWi2e33ySpORdVzTSMJ1Qe2sJPSm4PH4T61Jp+tH4lPDRwBhoCaYRRerOiGp3S3X870Bhrm/Ko5LXh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL01ypIZpu+KHDzye0naN6hUuGNuZOikz1XHm0uwXihzl3L4d2kQyax25xeZP0fPzL018nogjoOk/I5GjaB5uBsPiRxhNidLyUlR4X7nDdGVfUFaUrUVPAfc3WuJrS5OJZqR4AVoLGu1+pwBb9SM8aeYGjGzT+pGCtVzL2wwkI361JO7PHy4L0GQCsO/RKO//uAKQosSuZ8jnvk8NGhorkJCFw7V8Yp6HyBfwTm59VkX6sG5RPcqHGgmPKre0VijLtKHOs/R1cqZ3ndkokJJAYVfvmivM+zA3kQE9utNyDzYxu9NzU60+VEZcRZcHxBRMgxB/ssrzQZcSp887hn0U5Yk5iRz321Bzi/mxpimACDoqiXzLJUxF2IQa0hEhxWgNhZDA==
1.0
J2HjbUEPwBwR+Vq7Onem4L/Ls/5IJ0focDi77+xXsmA8d6qt0vhtvCWJeIO4Pa+qkPhz+x81fz7iHTI4JMviUmZXNTtLf4zlb/EZgaQUrgaUb6dvxRJs+b34ncXQdWGTD1I5DgvQLVx45f7x6rUYCrFFraoYYB9rPlbAQ2Q5U5c= - kLkfFbHfmFhFJE8P0/XeDMBEdWbsmckKwps8YONZTOO+qgneVrE0I+TSHsQhwCn12wrTmVrt8lPWOMyW3ymgJthoH30lERnRkhfXXwQLY/J8GBdk8iCLA3tltaIqvrulgD8OeMXOoTW2TZIW0/9oGeHdpEMmsducXmT9Hz8y9NfWGDtY4QQBXuy8e2mse6uBB9w2wKBoYd9vBipmc/ZfEgfcNsCgaGHfbwYqZnP2XxIH3DbAoGhh328GKmZz9l8SB9w2wKBoYd9vBipmc/ZfEtqeMJgyY4GpBndMsyv/C1LW3//kgkbPTHwPu8nbSV8mxrNNXt9yNFyhfU4yo5eMYDrU7LyRdZNBIdCOHn2qiOMc05qmtWHn8S9qnZXcdiqWgV8aVsEbmWMH6tyYtIKcU1V/sHlDT1R/RdXXNCEN83vh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL015EYhbg9ZczTyQIxwKlqvHoBYV/rzO+fdHsAZAnh0YvMDPbTJbm7mKLmmsRu7aTK9MazTV7fcjRcoX1OMqOXjGA61Oy8kXWTQSHQjh59qojjHNOaprVh5/Evap2V3HYqlqbive4evYj26FR5FWwXaErh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL017hK9TKL8xP3Q5H/6eZwgsvIuOmbv9KvqYeWVuY+RcthkcDqCG2JTtBofJbZox60QAbKGzeH+IW18pjKQj9xPnxPw65tKnp24GBEIvIOE5H2oOsCKGQ1bT60GhE/cGNTFDM0YtvW3gDZbIRVnvv1lwTh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL0123W0HU+K5fW5UhFJ2yd5XLe/Ox2xYlWvCsDJ3Pgb6ryuLeApl7BDpEtS8r6PnxMZhR96dZzJ0cYVccMZ5p8+Ny+56Xs46v3jhkIHTTMTlHqyWY42YUbY1M4DyETyDwgIjZoQudfuiAu3zXPxqqdphHh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL016qC4vlhJXOWBe42iQYUqrkPuEK8L45YUpJkWyDHHtKIRFcX2+UPVZqpzzpbl9bI3j+/cZdBqXyomPEZig7NoPZViHEfBDGXvQnhp0AuqMPsxmbw31P6ibRLHidB7V9KyAJ+RSucfITcAjIgObcBkWlVf7B5Q09Uf0XV1zQhDfN7l/xnobJiyeOWrRC2LYHDJCCP7XhLcUl5hDaaNydpN2lO+DZCv6EBMh3aXjRME2HYF5xDw0acjd1DWvIVsiskiJU2syZQk/pH9INsol116WTh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL01ztXxinofIF/BObn1WRfqwblE9yocaCY8qt7RWKMu0ocrFlR/3DUHPTyYH8i6L3tJC3DMKIr0aVWi2e33ySpORdVzTSMJ1Qe2sJPSm4PH4T61Jp+tH4lPDRwBhoCaYRRerOiGp3S3X870Bhrm/Ko5LXh3aRDJrHbnF5k/R8/MvTX4d2kQyax25xeZP0fPzL01ypIZpu+KHDzye0naN6hUuGNuZOikz1XHm0uwXihzl3L4d2kQyax25xeZP0fPzL018nogjoOk/I5GjaB5uBsPiRxhNidLyUlR4X7nDdGVfUFaUrUVPAfc3WuJrS5OJZqR4AVoLGu1+pwBb9SM8aeYGjGzT+pGCtVzL2wwkI361JO7PHy4L0GQCsO/RKO//uAKQosSuZ8jnvk8NGhorkJCFw7V8Yp6HyBfwTm59VkX6sG5RPcqHGgmPKre0VijLtKHOs/R1cqZ3ndk
okJJAYVfvmivM+zA3kQE9utNyDzYxu9NzU60+VEZcRZcHxBRMgxB/ssrzQZcSp887hn0U5Yk5iRz321Bzi/mxpimACDoqiXzLJUxF2IQa0hEhxWgNhZDA==
non_process
ls qkphz rzo ny rko vezcrzchxbrmgxb
0
94,018
10,788,847,453
IssuesEvent
2019-11-05 10:36:18
hwakabh/misc-snippets
https://api.github.com/repos/hwakabh/misc-snippets
closed
[kick_vm_restart_remotely]Bugfix of setting script root path.
documentation
Currently when calling remote .ps1 script from parent server, the path of `password.secret` would be set as `$HOME\Documents\`, so that it could not retrieve proper credentials from SecureString objects same as in the `password.secret` file. Need fix to retrieve encrypted password from the file `password.secret` on remote server by `childScript.ps1`.
1.0
[kick_vm_restart_remotely]Bugfix of setting script root path. - Currently when calling remote .ps1 script from parent server, the path of `password.secret` would be set as `$HOME\Documents\`, so that it could not retrieve proper credentials from SecureString objects same as in the `password.secret` file. Need fix to retrieve encrypted password from the file `password.secret` on remote server by `childScript.ps1`.
non_process
bugfix of setting script root path currently when calling remote script from parent server the path of password secret would be set as home documents so that it could not retrieve proper credentials from securestring objects same as in the password secret file need fix to retrieve encrypted password from the file password secret on remote server by childscript
0
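The bugfix record above concerns `childScript.ps1` resolving `password.secret` against `$HOME\Documents\` instead of the script's own directory on the remote server. The fix pattern is to resolve data files relative to the script file rather than the home or working directory; in PowerShell that would mean joining `$PSScriptRoot` with the filename. A Python analogue of the same pattern (the script name is taken from the issue, the rest is illustrative):

```python
import os
import tempfile

# Resolve a data file relative to the script's own location instead of
# $HOME or the current working directory (PowerShell analogue: join
# $PSScriptRoot with the filename rather than $HOME\Documents).
def secret_path(script_file, name="password.secret"):
    return os.path.join(os.path.dirname(os.path.abspath(script_file)), name)

# Demonstrate with a throwaway "script" living in a temp directory.
tmpdir = tempfile.mkdtemp()
script = os.path.join(tmpdir, "childScript.py")
resolved = secret_path(script)
```

The resolved path always sits next to the script, so the remote process reads the intended `password.secret` regardless of where it was launched from.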
5,085
7,876,059,476
IssuesEvent
2018-06-25 22:51:59
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
miniBlocks.bin and fullBlockIndex.bin require full scan of blockchain
libs-etherlib status-inprocess type-enhancement
I need a way to quickly scan the full blockchain or download the last fullBlockIndex / miniBlocks.bin files. Otherwise, some tools (such as when Block and cacheMan?) don't work. You can figure out what doesn't work by clearing the cache and re-running all the tests.
1.0
miniBlocks.bin and fullBlockIndex.bin require full scan of blockchain - I need a way to quickly scan the full blockchain or download the last fullBlockIndex / miniBlocks.bin files. Otherwise, some tools (such as when Block and cacheMan?) don't work. You can figure out what doesn't work by clearing the cache and re-running all the tests.
process
miniblocks bin and fullblockindex bin require full scan of blockchain i need a way to quickly scan the full blockchain or download the last fullblockindex miniblocks bin files otherwise some tools such as when block and cacheman don t work you can figure out what doesn t work by clearing the cache and re running all the tests
1
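The quickBlocks record above asks for a fast full scan or downloadable `miniBlocks.bin` / `fullBlockIndex.bin` files, because dependent tools break when the cache is cleared. The underlying pattern, use the cached index if present, otherwise rebuild it once and persist it, can be sketched as follows (the file name comes from the issue; the loading logic is hypothetical):

```python
import os
import tempfile

# Hypothetical sketch: fall back to rebuilding the index when
# fullBlockIndex.bin is missing from the cache, instead of failing,
# then persist the result so the expensive scan runs only once.
def load_block_index(cache_dir, rebuild):
    path = os.path.join(cache_dir, "fullBlockIndex.bin")
    if os.path.exists(path):
        with open(path, "rb") as fh:
            return fh.read()
    data = rebuild()                 # expensive full-chain scan
    with open(path, "wb") as fh:     # cache for next time
        fh.write(data)
    return data

cache = tempfile.mkdtemp()
calls = []
index1 = load_block_index(cache, lambda: calls.append(1) or b"index")
index2 = load_block_index(cache, lambda: calls.append(1) or b"index")
```

On the second call the index is served from disk, so the rebuild callback runs exactly once even after a cleared cache.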
18,900
24,839,204,094
IssuesEvent
2022-10-26 11:21:50
quark-engine/quark-engine
https://api.github.com/repos/quark-engine/quark-engine
closed
Prepare to release version v22.10.1
issue-processing-state-03
Update the version number in `__init__.py` for the release with the latest version of Quark. It includes the following changes. * #389 * #391 * #396 * #399
1.0
Prepare to release version v22.10.1 - Update the version number in `__init__.py` for the release with the latest version of Quark. It includes the following changes. * #389 * #391 * #396 * #399
process
prepare to release version update the version number in init py for the release with the latest version of quark it includes the following changes
1
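The quark-engine release record above is a version-bump chore: update the version number in `__init__.py`. The mechanical step can be sketched with a regex substitution (the file contents here are invented for illustration; quark's actual `__init__.py` layout may differ):

```python
import re

# Sketch of the release chore: bump __version__ in an __init__.py-style
# source string.
def bump_version(source, new_version):
    return re.sub(r'__version__\s*=\s*"[^"]*"',
                  f'__version__ = "{new_version}"', source)

src = '__version__ = "22.9.1"\n'
bumped = bump_version(src, "22.10.1")
```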
5,204
7,976,394,462
IssuesEvent
2018-07-17 12:34:09
pelias/pelias
https://api.github.com/repos/pelias/pelias
closed
Support https for pelias.io
processed
Currently we only support http and need to update the AWS settings to support https.
1.0
Support https for pelias.io - Currently we only support http and need to update the AWS settings to support https.
process
support https for pelias io currently we only support http and need to update the aws settings to support https
1
12,738
15,104,769,097
IssuesEvent
2021-02-08 12:07:38
spring-projects/spring-hateoas
https://api.github.com/repos/spring-projects/spring-hateoas
closed
Consider adding support for injecting a custom ObjectMapper bean
in: configuration process: waiting for feedback
To my knowledge, currently it isn't possible to inject a custom, Spring HATEOAS dedicated, `ObjectMapper` bean into `ConverterRegisteringWebMvcConfigurer` as it will either work with the primary instance, or create new one: https://github.com/spring-projects/spring-hateoas/blob/9131a82b9070187b3c3dbb07fd8a7dd244bddd18/src/main/java/org/springframework/hateoas/config/ConverterRegisteringWebMvcConfigurer.java#L105 It would be nice to have the capability to configure Spring HATEOAS to work with a non-primary `ObjectMapper` bean as in quite a few non-trivial apps the `ObjectMapper` will be used in different places and therefore with different configuration requirements. For example, in Spring Session we have qualifiers that enable users to provide their own, Spring Session dedicated, bean definitions for infrastructure related components. As an example, see: - [`RedisHttpSessionConfiguration`](https://github.com/spring-projects/spring-session/blob/0c9fbedd05517036c0b17184eaf42fba8586fe23/spring-session-data-redis/src/main/java/org/springframework/session/data/redis/config/annotation/web/http/RedisHttpSessionConfiguration.java#L180-L215) - allows injection of Spring Session specific `RedisConnectionFactory`, `RedisSerializer`, `Executor`. - [`JdbcHttpSessionConfiguration`](https://github.com/spring-projects/spring-session/blob/0c9fbedd05517036c0b17184eaf42fba8586fe23/spring-session-jdbc/src/main/java/org/springframework/session/jdbc/config/annotation/web/http/JdbcHttpSessionConfiguration.java#L149-L175) - allows injection of Spring Session specific `DataSource`, `LobHandler`, `ConversionService`. It would be nice if Spring HATEOAS would provide similar configuration capabilities. I'm working on a project where we're adding a new version of API (HAL based) but still have to support the previous API for quite some time - it would be nice if we could have a simple way of hooking a dedicated `ObjectMapper` instance into Spring HATEOAS config. 
At the moment we're using a `BeanPostProcessor` in a similar manner like [described here](https://github.com/spring-projects/spring-hateoas/issues/263#issuecomment-463392669), which obviously isn't ideal.
1.0
Consider adding support for injecting a custom ObjectMapper bean - To my knowledge, currently it isn't possible to inject a custom, Spring HATEOAS dedicated, `ObjectMapper` bean into `ConverterRegisteringWebMvcConfigurer` as it will either work with the primary instance, or create new one: https://github.com/spring-projects/spring-hateoas/blob/9131a82b9070187b3c3dbb07fd8a7dd244bddd18/src/main/java/org/springframework/hateoas/config/ConverterRegisteringWebMvcConfigurer.java#L105 It would be nice to have the capability to configure Spring HATEOAS to work with a non-primary `ObjectMapper` bean as in quite a few non-trivial apps the `ObjectMapper` will be used in different places and therefore with different configuration requirements. For example, in Spring Session we have qualifiers that enable users to provide their own, Spring Session dedicated, bean definitions for infrastructure related components. As an example, see: - [`RedisHttpSessionConfiguration`](https://github.com/spring-projects/spring-session/blob/0c9fbedd05517036c0b17184eaf42fba8586fe23/spring-session-data-redis/src/main/java/org/springframework/session/data/redis/config/annotation/web/http/RedisHttpSessionConfiguration.java#L180-L215) - allows injection of Spring Session specific `RedisConnectionFactory`, `RedisSerializer`, `Executor`. - [`JdbcHttpSessionConfiguration`](https://github.com/spring-projects/spring-session/blob/0c9fbedd05517036c0b17184eaf42fba8586fe23/spring-session-jdbc/src/main/java/org/springframework/session/jdbc/config/annotation/web/http/JdbcHttpSessionConfiguration.java#L149-L175) - allows injection of Spring Session specific `DataSource`, `LobHandler`, `ConversionService`. It would be nice if Spring HATEOAS would provide similar configuration capabilities. 
I'm working on a project where we're adding a new version of API (HAL based) but still have to support the previous API for quite some time - it would be nice if we could have a simple way of hooking a dedicated `ObjectMapper` instance into Spring HATEOAS config. At the moment we're using a `BeanPostProcessor` in a similar manner like [described here](https://github.com/spring-projects/spring-hateoas/issues/263#issuecomment-463392669), which obviously isn't ideal.
process
consider adding support for injecting a custom objectmapper bean to my knowledge currently it isn t possible to inject a custom spring hateoas dedicated objectmapper bean into converterregisteringwebmvcconfigurer as it will either work with the primary instance or create new one it would be nice to have the capability to configure spring hateoas to work with a non primary objectmapper bean as in quite a few non trivial apps the objectmapper will be used in different places and therefore with different configuration requirements for example in spring session we have qualifiers that enable users to provide their own spring session dedicated bean definitions for infrastructure related components as an example see allows injection of spring session specific redisconnectionfactory redisserializer executor allows injection of spring session specific datasource lobhandler conversionservice it would be nice if spring hateoas would provide similar configuration capabilities i m working on a project where we re adding a new version of api hal based but still have to support the previous api for quite some time it would be nice if we could have a simple way of hooking a dedicated objectmapper instance into spring hateoas config at the moment we re using a beanpostprocessor in a similar manner like which obviously isn t ideal
1
89,475
17,930,889,991
IssuesEvent
2021-09-10 09:05:36
pulumi/pulumi
https://api.github.com/repos/pulumi/pulumi
opened
ReplaceOnChanges: Found recursive object warnings when upgrading Azure-Native to 3.12.0
impact/usability area/codegen needs-triage
Upgrading Azure-Native to codegen v3.12.0 causes a very large number of warnings: ``` warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "virtualNetworkTaps" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "applicationGatewayBackendAddressPools" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "backendIPConfigurations" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "loadBalancerInboundNatRules" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "publicIPAddress" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "subnet" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "linkedPublicIPAddress" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "servicePublicIPAddress" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "subnet" ``` This totally screws the output of code generation... Recursive objects are expected and are indeed used heavily in azure-native. We should adjust codegen accordingly.
1.0
ReplaceOnChanges: Found recursive object warnings when upgrading Azure-Native to 3.12.0 - Upgrading Azure-Native to codegen v3.12.0 causes a very large number of warnings: ``` warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "virtualNetworkTaps" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "applicationGatewayBackendAddressPools" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "backendIPConfigurations" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "loadBalancerInboundNatRules" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "publicIPAddress" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "subnet" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "linkedPublicIPAddress" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "servicePublicIPAddress" warning: Failed to genereate full `ReplaceOnChanges`: Found recursive object "subnet" ``` This totally screws the output of code generation... Recursive objects are expected and are indeed used heavily in azure-native. We should adjust codegen accordingly.
non_process
replaceonchanges found recursive object warnings when upgrading azure native to upgrading azure native to codegen causes a very large number of warnings warning failed to genereate full replaceonchanges found recursive object virtualnetworktaps warning failed to genereate full replaceonchanges found recursive object applicationgatewaybackendaddresspools warning failed to genereate full replaceonchanges found recursive object backendipconfigurations warning failed to genereate full replaceonchanges found recursive object loadbalancerinboundnatrules warning failed to genereate full replaceonchanges found recursive object publicipaddress warning failed to genereate full replaceonchanges found recursive object subnet warning failed to genereate full replaceonchanges found recursive object linkedpublicipaddress warning failed to genereate full replaceonchanges found recursive object servicepublicipaddress warning failed to genereate full replaceonchanges found recursive object subnet this totally screws the output of code generation recursive objects are expected and are indeed used heavily in azure native we should adjust codegen accordingly
0
105,769
9,100,682,108
IssuesEvent
2019-02-20 09:13:06
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
Test : ApiV1ProjectsSearchGetQueryParamPageNegativeNumber
test
Project : Test Job : Default Env : Default Category : Negative_Number Tags : [OWASP - OTG-BUSLOGIC-001, Fuzz] Severity : Major Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NGJiYjU2ZDItMDU3MS00ZTExLTllYzYtY2FmNWJlNGFkMTU0; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 07:56:56 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/search?page=-1 Request : Response : { "timestamp" : "2019-02-20T07:56:57.251+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/search" } Logs : 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : URL [http://13.56.210.25/api/v1/api/v1/projects/search?page=-1] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Method [GET] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Request [] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic SHVtZXJhLy9odW1lcmFAZnhsYWJzLmlvOmh1bWVyYTEyMyQ=]}] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Response [{ "timestamp" : "2019-02-20T07:56:57.251+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/search" }] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], 
Set-Cookie=[SESSION=NGJiYjU2ZDItMDU3MS00ZTExLTllYzYtY2FmNWJlNGFkMTU0; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 07:56:56 GMT]}] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : StatusCode [404] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Time [13146] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Size [150] 2019-02-20 07:56:57 INFO [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed] 2019-02-20 07:56:57 ERROR [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
Test : ApiV1ProjectsSearchGetQueryParamPageNegativeNumber - Project : Test Job : Default Env : Default Category : Negative_Number Tags : [OWASP - OTG-BUSLOGIC-001, Fuzz] Severity : Major Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NGJiYjU2ZDItMDU3MS00ZTExLTllYzYtY2FmNWJlNGFkMTU0; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 07:56:56 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/search?page=-1 Request : Response : { "timestamp" : "2019-02-20T07:56:57.251+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/search" } Logs : 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : URL [http://13.56.210.25/api/v1/api/v1/projects/search?page=-1] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Method [GET] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Request [] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic SHVtZXJhLy9odW1lcmFAZnhsYWJzLmlvOmh1bWVyYTEyMyQ=]}] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Response [{ "timestamp" : "2019-02-20T07:56:57.251+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/search" }] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], 
X-Frame-Options=[DENY], Set-Cookie=[SESSION=NGJiYjU2ZDItMDU3MS00ZTExLTllYzYtY2FmNWJlNGFkMTU0; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 07:56:56 GMT]}] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : StatusCode [404] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Time [13146] 2019-02-20 07:56:57 DEBUG [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Size [150] 2019-02-20 07:56:57 INFO [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed] 2019-02-20 07:56:57 ERROR [ApiV1ProjectsSearchGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
non_process
test project test job default env default category negative number tags severity major region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api projects search logs debug url debug method debug request debug request headers accept authorization debug response timestamp status error not found message no message available path api api projects search debug response headers x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date debug statuscode debug time debug size info assertion resolved to result error assertion resolved to result fx bot
0
664
3,134,011,113
IssuesEvent
2015-09-10 07:25:46
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
Improve the visualization when working with required fields on the dashboard
active In process of testing test
1. Required fields must be marked with an asterisk. 2. If the user tries to submit an application without filling in a required field, that field and its label must be highlighted in red. In addition, a clear message asking the user to fill in all required fields must be shown in a popup window. Currently an error is displayed that scares users - https://files.slack.com/files-pri/T040NRX6X-F06P03C4F/pasted_image_at_2015_06_24_11_12_am.png
1.0
Improve the visualization when working with required fields on the dashboard - 1. Required fields must be marked with an asterisk. 2. If the user tries to submit an application without filling in a required field, that field and its label must be highlighted in red. In addition, a clear message asking the user to fill in all required fields must be shown in a popup window. Currently an error is displayed that scares users - https://files.slack.com/files-pri/T040NRX6X-F06P03C4F/pasted_image_at_2015_06_24_11_12_am.png
process
improve the visualization when working with required fields on the dashboard required fields must be marked with an asterisk if the user tries to submit an application without filling in a required field that field and its label must be highlighted in red in addition a clear message asking the user to fill in all required fields must be shown in a popup window currently an error is displayed that scares users
1
2,289
5,112,185,207
IssuesEvent
2017-01-06 10:09:23
nodejs/node
https://api.github.com/repos/nodejs/node
closed
When creating a child process via spawn, how can tty be set to true without stdio: 'inherit'?
child_process question
When creating a child process via spawn, how can tty be set to true without stdio: 'inherit'?
1.0
When creating a child process via spawn, how can tty be set to true without stdio: 'inherit'? - When creating a child process via spawn, how can tty be set to true without stdio: 'inherit'?
process
when creating a child process via spawn how can tty be set to true without stdio inherit when creating a child process via spawn how can tty be set to true without stdio inherit
1
99,151
16,430,832,162
IssuesEvent
2021-05-20 01:10:19
Sam-Marx/anti_nude_bot
https://api.github.com/repos/Sam-Marx/anti_nude_bot
opened
CVE-2021-29517 (Low) detected in tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl
security vulnerability
## CVE-2021-29517 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d3/59/d88fe8c58ffb66aca21d03c0e290cd68327cc133591130c674985e98a482/tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d3/59/d88fe8c58ffb66aca21d03c0e290cd68327cc133591130c674985e98a482/tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /anti_nude_bot/requirements.txt</p> <p>Path to vulnerable library: teSource-ArchiveExtractor_0c4fd107-566e-4a98-973e-bda8edd30ae2/20190703163800_95826/20190703163719_depth_0/h5py-2.9.0-cp27-cp27mu-manylinux1_x86_64/h5py/tests/hl</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. A malicious user could trigger a division by 0 in `Conv3D` implementation. The implementation(https://github.com/tensorflow/tensorflow/blob/42033603003965bffac51ae171b51801565e002d/tensorflow/core/kernels/conv_ops_3d.cc#L143-L145) does a modulo operation based on user controlled input. Thus, when `filter` has a 0 as the fifth element, this results in a division by 0. Additionally, if the shape of the two tensors is not valid, an Eigen assertion can be triggered, resulting in a program crash. The fix will be included in TensorFlow 2.5.0. 
We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29517>CVE-2021-29517</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-84mw-34w6-2q43">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-84mw-34w6-2q43</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-29517 (Low) detected in tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-29517 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d3/59/d88fe8c58ffb66aca21d03c0e290cd68327cc133591130c674985e98a482/tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d3/59/d88fe8c58ffb66aca21d03c0e290cd68327cc133591130c674985e98a482/tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /anti_nude_bot/requirements.txt</p> <p>Path to vulnerable library: teSource-ArchiveExtractor_0c4fd107-566e-4a98-973e-bda8edd30ae2/20190703163800_95826/20190703163719_depth_0/h5py-2.9.0-cp27-cp27mu-manylinux1_x86_64/h5py/tests/hl</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. A malicious user could trigger a division by 0 in `Conv3D` implementation. The implementation(https://github.com/tensorflow/tensorflow/blob/42033603003965bffac51ae171b51801565e002d/tensorflow/core/kernels/conv_ops_3d.cc#L143-L145) does a modulo operation based on user controlled input. Thus, when `filter` has a 0 as the fifth element, this results in a division by 0. Additionally, if the shape of the two tensors is not valid, an Eigen assertion can be triggered, resulting in a program crash. The fix will be included in TensorFlow 2.5.0. 
We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29517>CVE-2021-29517</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-84mw-34w6-2q43">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-84mw-34w6-2q43</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve low detected in tensorflow whl cve low severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file anti nude bot requirements txt path to vulnerable library tesource archiveextractor depth tests hl dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning a malicious user could trigger a division by in implementation the implementation does a modulo operation based on user controlled input thus when filter has a as the fifth element this results in a division by additionally if the shape of the two tensors is not valid an eigen assertion can be triggered resulting in a program crash the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow step up your open source security game with whitesource
0
277,789
21,057,763,041
IssuesEvent
2022-04-01 06:18:02
QIN2DIM/hcaptcha-challenger
https://api.github.com/repos/QIN2DIM/hcaptcha-challenger
closed
feat(add): Handle `airplane in the sky flying left` challenge with SK-IMAGE solution
documentation feature fix
## Introduction ## Usage You can run the demo project associated with the solution by running the following command: ```bash # hcaptcha-challenge/src python3 main.py demo_v3 ``` ## Linked pull requests @beiyuouo https://github.com/QIN2DIM/hcaptcha-challenger/commit/30ae3915b95e65cedfd9d0d8a9896bfdd9cd114f ## Demo ![skimage-solution-2](https://user-images.githubusercontent.com/62018067/156694812-c52926e1-8d6d-4127-a241-94f5d742a694.gif)
1.0
feat(add): Handle `airplane in the sky flying left` challenge with SK-IMAGE solution - ## Introduction ## Usage You can run the demo project associated with the solution by running the following command: ```bash # hcaptcha-challenge/src python3 main.py demo_v3 ``` ## Linked pull requests @beiyuouo https://github.com/QIN2DIM/hcaptcha-challenger/commit/30ae3915b95e65cedfd9d0d8a9896bfdd9cd114f ## Demo ![skimage-solution-2](https://user-images.githubusercontent.com/62018067/156694812-c52926e1-8d6d-4127-a241-94f5d742a694.gif)
non_process
feat add handle airplane in the sky flying left challenge with sk image solution introduction usage you can run the demo project associated with the solution by running the following command bash hcaptcha challenge src main py demo linked pull requests beiyuouo demo
0
9,460
12,438,857,729
IssuesEvent
2020-05-26 09:08:02
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
[Introspection] Omit `_RelayId` table from the introspection result
kind/improvement process/candidate topic: introspection topic: prisma1
## Problem `_RelayId` was a table created by datamodel v1.0 Prisma 1 to track all the nodes in a database. This was used to create the Relay compatible node API. This table is introspected as a model with Prisma 2 introspection. This might be confusing if you are migrating to Prisma 2 from a Prisma 1 project. ## Suggested solution We should omit this model in Prisma 2 introspection.
1.0
[Introspection] Omit `_RelayId` table from the introspection result - ## Problem `_RelayId` was a table created by datamodel v1.0 Prisma 1 to track all the nodes in a database. This was used to create the Relay compatible node API. This table is introspected as a model with Prisma 2 introspection. This might be confusing if you are migrating to Prisma 2 from a Prisma 1 project. ## Suggested solution We should omit this model in Prisma 2 introspection.
process
omit relayid table from the introspection result problem relayid was a table created by datamodel prisma to track all the nodes in a database this was used to create the relay compatible node api this table is introspected as a model with prisma introspection this might be confusing if you are migrating to prisma from a prisma project suggested solution we should omit this model in prisma introspection
1
121,215
4,806,504,987
IssuesEvent
2016-11-02 18:44:04
radiasoft/sirepo
https://api.github.com/repos/radiasoft/sirepo
closed
SRW - hide Mask element on production servers
1st Priority
Development of the Mask element is far from its final stage and the ultimate goal, so per request of the Metrology group we should hide it on production servers of Sirepo (like alpha, beta), however it should be available for BNL installations. @robnagler, any ideas about it? Probably we could introduce another environment variable like `SIREPO_USE_UNRELEASED_FUNCTIONALITY` or something like that. If it's false or not set, new functionality should not appear in Sirepo.
1.0
SRW - hide Mask element on production servers - Development of the Mask element is far from its final stage and the ultimate goal, so per request of the Metrology group we should hide it on production servers of Sirepo (like alpha, beta), however it should be available for BNL installations. @robnagler, any ideas about it? Probably we could introduce another environment variable like `SIREPO_USE_UNRELEASED_FUNCTIONALITY` or something like that. If it's false or not set, new functionality should not appear in Sirepo.
non_process
srw hide mask element on production servers development of the mask element is far from its final stage and the ultimate goal so per request of the metrology group we should hide it on production servers of sirepo like alpha beta however it should be available for bnl installations robnagler any ideas about it probably we could introduce another environment variable like sirepo use unreleased functionality or something like that if it s false or not set new functionality should not appear in sirepo
0
13,448
15,894,957,398
IssuesEvent
2021-04-11 12:15:50
NOAA-EMC/NCEPLIBS
https://api.github.com/repos/NOAA-EMC/NCEPLIBS
closed
We should always include administrators in NCEPLIBS repo branch protection rules for develop branch, and require CI to pass
process
On all NCEPLIBS projects we should have branch protection for the develop branch, and that should be enforced on admins as well. (IMO this should be the GitHub default.) To protect develop against admins means that admins will not be able to merge to develop from the command line - pull requests must be used. This is an important process element and should be enforced everywhere. Especially on admins, who are generally the ones editing the code most often, and therefore most likely to make a mistake and accidentally merge to develop. Here's a picture of the check-box that needs to be checked for every project: ![image](https://user-images.githubusercontent.com/38856240/96878555-3aabe500-1438-11eb-9b9a-431b0e1d34b3.png)
1.0
We should always include administrators in NCEPLIBS repo branch protection rules for develop branch, and require CI to pass - On all NCEPLIBS projects we should have branch protection for the develop branch, and that should be enforced on admins as well. (IMO this should be the GitHub default.) To protect develop against admins means that admins will not be able to merge to develop from the command line - pull requests must be used. This is an important process element and should be enforced everywhere. Especially on admins, who are generally the ones editing the code most often, and therefore most likely to make a mistake and accidentally merge to develop. Here's a picture of the check-box that needs to be checked for every project: ![image](https://user-images.githubusercontent.com/38856240/96878555-3aabe500-1438-11eb-9b9a-431b0e1d34b3.png)
process
we should always include administrators in nceplibs repo branch protection rules for develop branch and require ci to pass on all nceplibs projects we should have branch protection for the develop branch and that should be enforced on admins as well imo this should be the github default to protect develop against admins means that admins will not be able to merge to develop from the command line pull requests must be used this is an important process element and should be enforced everywhere especially on admins who are generally the ones editing the code most often and therefore most likely to make a mistake and accidentally merge to develop here s a picture of the check box that needs to be checked for every project
1
683,131
23,369,202,957
IssuesEvent
2022-08-10 18:10:03
txj-xyz/rs3-ability-tracker
https://api.github.com/repos/txj-xyz/rs3-ability-tracker
closed
New Keybinds window layout changes
enhancement help wanted high priority
Swap to a tab based window, each tab title is a Bar name and if there is no bars then just have 1 tab with "Global" on it
1.0
New Keybinds window layout changes - Swap to a tab based window, each tab title is a Bar name and if there is no bars then just have 1 tab with "Global" on it
non_process
new keybinds window layout changes swap to a tab based window each tab title is a bar name and if there is no bars then just have tab with global on it
0
8,514
11,696,375,508
IssuesEvent
2020-03-06 09:40:04
tueit/it_management
https://api.github.com/repos/tueit/it_management
opened
"Get Items from" in Sales Invoice
enhancement process usability
We would like to be able to fetch all Time Sheets from Issues, IT Service Reports, Tasks and Projects in DocType Sales Invoice.
1.0
"Get Items from" in Sales Invoice - We would like to be able to fetch all Time Sheets from Issues, IT Service Reports, Tasks and Projects in DocType Sales Invoice.
process
get items from in sales invoice we would like to be able to fetch all time sheets from issues it service reports tasks and projects in doctype sales invoice
1
16,224
20,754,755,432
IssuesEvent
2022-03-15 11:05:47
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
[CLI] Review exit code behaviour of CLI commands
process/candidate topic: cli team/schema topic: breaking change
Migrate status still returns a 0 exit code even though a clearly defined error is returned (P1000 failed to connect to db in my case). We should change the code to non-zero. We should also review the behaviour of the other CLI commands.
1.0
[CLI] Review exit code behaviour of CLI commands - Migrate status still returns a 0 exit code even though a clearly defined error is returned (P1000 failed to connect to db in my case). We should change the code to non-zero. We should also review the behaviour of the other CLI commands.
process
review exit code behaviour of cli commands migrate status still returns a exit code even though a clearly defined error is returned failed to connect to db in my case we should change the code to non zero we should also review the behaviour of the other cli commands
1
766,578
26,889,803,784
IssuesEvent
2023-02-06 07:59:53
GSM-MSG/GCMS-FrontEnd-V2
https://api.github.com/repos/GSM-MSG/GCMS-FrontEnd-V2
closed
Add CI via GitHub Actions
1️⃣ Priority: High
### Describe _No response_ ### Additional - things that must run in CI: - yarn install --frozen-lockfile - yarn test - yarn build - Discord notification
1.0
Add CI via GitHub Actions - ### Describe _No response_ ### Additional - things that must run in CI: - yarn install --frozen-lockfile - yarn test - yarn build - Discord notification
non_process
add ci via github actions describe no response additional things that must run in ci yarn install frozen lockfile yarn test yarn build discord notification
0
12,764
15,144,605,849
IssuesEvent
2021-02-11 01:47:30
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
crash in graphical modeler due to QList index invalidation
Bug Crash/Data Corruption High Priority Modeller Processing
**Describe the bug** crash in Processing Graphical Modeler due to index invalidation @nyalldawson Found another race condition that was fixed in: https://github.com/qgis/QGIS/pull/39200 and https://github.com/qgis/QGIS/pull/39009 **How to Reproduce** I will attach comlex model that generate the issue btw seems the issue happen when 1. create a main model using another sub-model linking all input to the sub-model 2. update sub model adding new not hidden input 3. edit main-model => crash **QGIS and OS versions** QGIS version | 3.16.1-Hannover | QGIS code revision | 37972328b7 -- | -- | -- | -- Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8 Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4 Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1 Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1 PostgreSQL Client Version | 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2 Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020 OS Version | Ubuntu 20.04.1 LTS | This copy of QGIS writes debugging output.
1.0
crash in graphical modeler due to QList index invalidation - **Describe the bug** crash in Processing Graphical Modeler due to index invalidation @nyalldawson Found another race condition that was fixed in: https://github.com/qgis/QGIS/pull/39200 and https://github.com/qgis/QGIS/pull/39009 **How to Reproduce** I will attach complex model that generate the issue btw seems the issue happen when 1. create a main model using another sub-model linking all input to the sub-model 2. update sub model adding new not hidden input 3. edit main-model => crash **QGIS and OS versions** QGIS version | 3.16.1-Hannover | QGIS code revision | 37972328b7 -- | -- | -- | -- Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8 Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4 Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1 Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1 PostgreSQL Client Version | 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2 Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020 OS Version | Ubuntu 20.04.1 LTS | This copy of QGIS writes debugging output.
process
crash in graphical modeler due to qlist index invalidation describe the bug crash in processing graphical modeler due to index invalidation nyalldawson found another race condition that was fixed in and how to reproduce i will attach complex model that generate the issue btw seems the issue happen when create a main model using another sub model linking all input to the sub model update sub model adding new not hidden input edit main model crash qgis and os versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version ubuntu spatialite version qwt version version compiled against proj running against proj rel february os version ubuntu lts this copy of qgis writes debugging output
1
75,895
21,042,853,570
IssuesEvent
2022-03-31 13:45:34
spesmilo/electrum
https://api.github.com/repos/spesmilo/electrum
closed
Feature Request: provide ARM64 releases on https://electrum.org/#download
build/packaging 📦
<!-- Note: This website is for bug reports, not general questions. Do not post issues about non-bitcoin versions of Electrum. --> With the increase of ARM64-based devices, both desktop and mobile, it would be great to have official releases on https://electrum.org/#download for such devices as well. As I suspect most of such devices still run some Linux distro it seems logical to start there.
1.0
Feature Request: provide ARM64 releases on https://electrum.org/#download - <!-- Note: This website is for bug reports, not general questions. Do not post issues about non-bitcoin versions of Electrum. --> With the increase of ARM64-based devices, both desktop and mobile, it would be great to have official releases on https://electrum.org/#download for such devices as well. As I suspect most of such devices still run some Linux distro it seems logical to start there.
non_process
feature request provide releases on note this website is for bug reports not general questions do not post issues about non bitcoin versions of electrum with the increase of based devices both desktop and mobile it would be great to have official releases on for such devices as well as i suspect most of such devices still run some linux distro it seems logical to start there
0
70,480
7,189,209,266
IssuesEvent
2018-02-02 13:12:56
geosolutions-it/MapStore2
https://api.github.com/repos/geosolutions-it/MapStore2
closed
Default Identify RowViewer shows the internal "feature" property (it should ignore it)
In Test
### Description Default Identify RowViewer shows a feature property (it should ignore it) ### In case of Bug (otherwise remove this paragraph) *Browser Affected* (use this site: https://www.whatsmybrowser.org/ for non expert users) - [x] Internet Explorer - [x] Chrome - [x] Firefox - [x] Safari *Browser Version Affected* - Indicate the browser version in which the issue has been found *Steps to reproduce* * Import a shapefile * Click on any geometry of the shapefile * You will see a feature property with a GeoJSON representation of the clicked object *Expected Result* * the feature property should not be shown *Current Result* * the feature property is shown ### Other useful information (optional):
1.0
Default Identify RowViewer shows the internal "feature" property (it should ignore it) - ### Description Default Identify RowViewer shows a feature property (it should ignore it) ### In case of Bug (otherwise remove this paragraph) *Browser Affected* (use this site: https://www.whatsmybrowser.org/ for non expert users) - [x] Internet Explorer - [x] Chrome - [x] Firefox - [x] Safari *Browser Version Affected* - Indicate the browser version in which the issue has been found *Steps to reproduce* * Import a shapefile * Click on any geometry of the shapefile * You will see a feature property with a GeoJSON representation of the clicked object *Expected Result* * the feature property should not be shown *Current Result* * the feature property is shown ### Other useful information (optional):
non_process
default identify rowviewer shows the internal feature property it should ignore it description default identify rowviewer shows a feature property it should ignore it in case of bug otherwise remove this paragraph browser affected use this site for non expert users internet explorer chrome firefox safari browser version affected indicate the browser version in which the issue has been found steps to reproduce import a shapefile click on any geometry of the shapefile you will see a feature property with a geojson representation of the clicked object expected result the feature property should not be shown current result the feature property is shown other useful information optional
0
22,362
31,075,281,102
IssuesEvent
2023-08-12 12:07:50
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
closed
Rodecaster Pro 2
NOT YET PROCESSED
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: Rode Rodecaster Pro 2 What you would like to be able to make it do from Companion: - Mute or Unmute audio sources - Control MIDI Pads - Record/Stop Record - ... Direct links or attachments to the ethernet control protocol or API: https://rode.com/en/user-guides/rodecaster-pro-ii
1.0
Rodecaster Pro 2 - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: Rode Rodecaster Pro 2 What you would like to be able to make it do from Companion: - Mute or Unmute audio sources - Control MIDI Pads - Record/Stop Record - ... Direct links or attachments to the ethernet control protocol or API: https://rode.com/en/user-guides/rodecaster-pro-ii
process
rodecaster pro i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control rode rodecaster pro what you would like to be able to make it do from companion mute or unmute audio sources control midi pads record stop record direct links or attachments to the ethernet control protocol or api
1
19,900
26,350,505,582
IssuesEvent
2023-01-11 04:07:21
dart-lang/linter
https://api.github.com/repos/dart-lang/linter
closed
update `rule.dart` to add a `LintCode` field to rule stubs
type-enhancement type-task P2 process
We might also consider generating a unit test stub (rather than the old `test_data` approach).
1.0
update `rule.dart` to add a `LintCode` field to rule stubs - We might also consider generating a unit test stub (rather than the old `test_data` approach).
process
update rule dart to add a lintcode field to rule stubs we might also consider generating a unit test stub rather than the old test data approach
1
602,888
18,513,290,277
IssuesEvent
2021-10-20 07:14:01
nimblehq/nimble-medium-ios
https://api.github.com/repos/nimblehq/nimble-medium-ios
opened
Apply Danger and SwiftFormat to CI for handling PR review automatically with inline comments support.
type : chore priority : medium
## Why Apply Danger and SwiftFormat to CI for handling PR review automatically with inline comments support. More information about the implementation from ios-templates is here: - https://github.com/nimblehq/ios-templates/pull/200/files ## Acceptance Criteria Danger and SwiftFormat are applied to Github Action CI.
1.0
Apply Danger and SwiftFormat to CI for handling PR review automatically with inline comments support. - ## Why Apply Danger and SwiftFormat to CI for handling PR review automatically with inline comments support. More information about the implementation from ios-templates is here: - https://github.com/nimblehq/ios-templates/pull/200/files ## Acceptance Criteria Danger and SwiftFormat are applied to Github Action CI.
non_process
apply danger and swiftformat to ci for handling pr review automatically with inline comments support why apply danger and swiftformat to ci for handling pr review automatically with inline comments support more information about the implementation from ios templates is here acceptance criteria danger and swiftformat are applied to github action ci
0
514,930
14,947,112,116
IssuesEvent
2021-01-26 08:07:29
PazerOP/tf2_bot_detector
https://api.github.com/repos/PazerOP/tf2_bot_detector
opened
Scoreboard-Only Mode
Priority: Low Type: Enhancement
(cwedit 2 moeb n otherz) **Why?** Although the chat and log panel may be useful to many people, others may consider them as irrelevant or cluttered. Currently, users can shrink the chat panel, while the scoreboard resizes automatically. However, that shrinking also hides important buttons at the top-left. And the log panel may still be visible. ![1](https://user-images.githubusercontent.com/77030765/105802206-b00fe000-5f68-11eb-8245-1dce9f5b204f.png) ![2](https://user-images.githubusercontent.com/77030765/105801903-2cee8a00-5f68-11eb-95c3-5151f5a03fc0.png) **Optional:** - Option to remove only the chat panel, or only the log panel. - Add a `View` menu between `File` and `Settings`, and place a `Scoreboard-Only Mode` checkbox inside of it.
1.0
Scoreboard-Only Mode - (cwedit 2 moeb n otherz) **Why?** Although the chat and log panel may be useful to many people, others may consider them as irrelevant or cluttered. Currently, users can shrink the chat panel, while the scoreboard resizes automatically. However, that shrinking also hides important buttons at the top-left. And the log panel may still be visible. ![1](https://user-images.githubusercontent.com/77030765/105802206-b00fe000-5f68-11eb-8245-1dce9f5b204f.png) ![2](https://user-images.githubusercontent.com/77030765/105801903-2cee8a00-5f68-11eb-95c3-5151f5a03fc0.png) **Optional:** - Option to remove only the chat panel, or only the log panel. - Add a `View` menu between `File` and `Settings`, and place a `Scoreboard-Only Mode` checkbox inside of it.
non_process
scoreboard only mode cwedit moeb n otherz why although the chat and log panel may be useful to many people others may consider them as irrelevant or cluttered currently users can shrink the chat panel while the scoreboard resizes automatically however that shrinking also hides important buttons at the top left and the log panel may still be visible optional option to remove only the chat panel or only the log panel add a view menu between file and settings and place a scoreboard only mode checkbox inside of it
0
15,988
20,188,189,699
IssuesEvent
2022-02-11 01:16:34
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Follow DevOps security guidance and automation for securing applications
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Operational Model & DevOps General
<a href="https://docs.microsoft.com/azure/architecture/framework/security/deploy-code">Follow DevOps security guidance and automation for securing applications</a> <p><b>Why Consider This?</b></p> Organizations should leverage existing guidance and automation when securing applications in the cloud, rather than starting from zero. <p><b>Context</b></p> <p><span>Using resources and lessons learned by external organizations that are early adopters of these models can accelerate the improvement of an organizations security posture with less expenditure of effort and resources.</span></p> <p><b>Suggested Actions</b></p> <p><span>Incorporate Secure Devops on Azure toolkit and the guidance published by the Organization for Web App Security Project (OWASP) or an equivalent guiding organization.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/applications-services#follow-devops-security-guidance" target="_blank"><span>Follow DevOps security guidance</span></a><span /></p>
1.0
Follow DevOps security guidance and automation for securing applications - <a href="https://docs.microsoft.com/azure/architecture/framework/security/deploy-code">Follow DevOps security guidance and automation for securing applications</a> <p><b>Why Consider This?</b></p> Organizations should leverage existing guidance and automation when securing applications in the cloud, rather than starting from zero. <p><b>Context</b></p> <p><span>Using resources and lessons learned by external organizations that are early adopters of these models can accelerate the improvement of an organizations security posture with less expenditure of effort and resources.</span></p> <p><b>Suggested Actions</b></p> <p><span>Incorporate Secure Devops on Azure toolkit and the guidance published by the Organization for Web App Security Project (OWASP) or an equivalent guiding organization.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/applications-services#follow-devops-security-guidance" target="_blank"><span>Follow DevOps security guidance</span></a><span /></p>
process
follow devops security guidance and automation for securing applications why consider this organizations should leverage existing guidance and automation when securing applications in the cloud rather than starting from zero context using resources and lessons learned by external organizations that are early adopters of these models can accelerate the improvement of an organizations security posture with less expenditure of effort and resources suggested actions incorporate secure devops on azure toolkit and the guidance published by the organization for web app security project owasp or an equivalent guiding organization learn more follow devops security guidance
1
6,673
9,789,721,184
IssuesEvent
2019-06-10 10:36:25
linnovate/root
https://api.github.com/repos/linnovate/root
reopened
Documents: can't send a document.
Process bug critical
@abrahamos upload a document. fill the fields. click on send. fill the fields. click on send. nothing happened.
1.0
Documents: can't send a document. - @abrahamos upload a document. fill the fields. click on send. fill the fields. click on send. nothing happened.
process
documents can t send a document abrahamos upload a document fill the fields click on send fill the fields click on send nothing happened
1
9,849
12,838,385,381
IssuesEvent
2020-07-07 17:22:07
Open-EO/openeo-api
https://api.github.com/repos/Open-EO/openeo-api
closed
Processes: pre-defined + user-defined + sharing: namespacing/prefixing?
process discovery
(I remember discussions about this but couldn't find in-depth github ticket about this, feel free to close as duplicate) At VITO, we're working on support for user-defined processes and basic sharing, which raises some concerns about naming collisions, not only between pre-defined and user-defined processes, but also across users when some kind of sharing mechanism is in place. Some data points: - From #256: > Decide whether user-defined processes should be prefixed in process graphs (e.g. user:ndvi) - I'll leave this up to the back-end for now. We may revisit this for 1.0 final. - The description of the `process_id` component: https://github.com/Open-EO/openeo-api/blob/12471721d9c74262ae962f7ad6c746ac77aad382/openapi.yaml#L4661-L4663 I think there is a contradiction/inconsistency here: `process_id` MUST be unique .... but if backend adds same name, it's ok to be not unique anymore (can user still update their UDP afterwards?). I think this is an indication that it "needs more work". - Process ids currently must follow this regex: https://github.com/Open-EO/openeo-api/blob/12471721d9c74262ae962f7ad6c746ac77aad382/openapi.yaml#L4667 meaning Latin characters, digits and underscore. Which means there is very little wiggle room to implement some kind of robust namespace/prefix scheme. Also somewhat relevant here, especially when sharing across users comes in the picture: in context of OpenID Connect, the user id returned by the OIDC provider might not fit `\w+` (dashes, dots, ...) I don't have a concrete suggestion at the moment, but just wanted to start off this discussion
1.0
Processes: pre-defined + user-defined + sharing: namespacing/prefixing? - (I remember discussions about this but couldn't find in-depth github ticket about this, feel free to close as duplicate) At VITO, we're working on support for user-defined processes and basic sharing, which raises some concerns about naming collisions, not only between pre-defined and user-defined processes, but also across users when some kind of sharing mechanism is in place. Some data points: - From #256: > Decide whether user-defined processes should be prefixed in process graphs (e.g. user:ndvi) - I'll leave this up to the back-end for now. We may revisit this for 1.0 final. - The description of the `process_id` component: https://github.com/Open-EO/openeo-api/blob/12471721d9c74262ae962f7ad6c746ac77aad382/openapi.yaml#L4661-L4663 I think there is a contradiction/inconsistency here: `process_id` MUST be unique .... but if backend adds same name, it's ok to be not unique anymore (can user still update their UDP afterwards?). I think this is an indication that it "needs more work". - Process ids currently must follow this regex: https://github.com/Open-EO/openeo-api/blob/12471721d9c74262ae962f7ad6c746ac77aad382/openapi.yaml#L4667 meaning Latin characters, digits and underscore. Which means there is very little wiggle room to implement some kind of robust namespace/prefix scheme. Also somewhat relevant here, especially when sharing across users comes in the picture: in context of OpenID Connect, the user id returned by the OIDC provider might not fit `\w+` (dashes, dots, ...) I don't have a concrete suggestion at the moment, but just wanted to start off this discussion
process
processes pre defined user defined sharing namespacing prefixing i remember discussions about this but couldn t find in depth github ticket about this feel free to close as duplicate at vito we re working on support for user defined processes and basic sharing which raises some concerns about naming collisions not only between pre defined and user defined processes but also across users when some kind of sharing mechanism is in place some data points from decide whether user defined processes should be prefixed in process graphs e g user ndvi i ll leave this up to the back end for now we may revisit this for final the description of the process id component i think there is a contradiction inconsistency here process id must be unique but if backend adds same name it s ok to be not unique anymore can user still update their udp afterwards i think this is an indication that it needs more work process ids currently must follow this regex meaning latin characters digits and underscore which means there is very little wiggle room to implement some kind of robust namespace prefix scheme also somewhat relevant here especially when sharing across users comes in the picture in context of openid connect the user id returned by the oidc provider might not fit w dashes dots i don t have a concrete suggestion at the moment but just wanted to start off this discussion
1
45,362
24,030,436,974
IssuesEvent
2022-09-15 14:39:38
389ds/389-ds-base
https://api.github.com/repos/389ds/389-ds-base
closed
performance search rate: checking if an entry is a referral is expensive
performance
Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/51255 - Created at 2020-08-31 19:47:43 by [tbordaz](https://pagure.io/user/tbordaz) (@tbordaz) - Assigned to nobody --- #### Issue Description During search rate, the server checks that the base search is not a referral (attribute 'ref'). Most of the time this check is useless as the base entry is not a referral. The checking of the entry is expensive. Improving this tests improves searchrate by 5% (gdb) where 0 0x00007f46038ac470 in attrlist_find 1 0x00007f46038c6046 in slapi_entry_attr_find 2 0x00007f45f599f97e in check_entry_for_referral at ldap/servers/slapd/back-ldbm/findentry.c:35 3 0x00007f45f599fe5f in find_entry_internal_dn 4 0x00007f45f599fe5f in find_entry_internal at ldap/servers/slapd/back-ldbm/findentry.c:322 5 0x00007f45f599fe5f in find_entry_internal at ldap/servers/slapd/back-ldbm/findentry.c:302 6 0x00007f45f59a02bf in find_entry 7 0x00007f45f59d05b4 in ldbm_back_search 8 0x00007f46038fec2b in op_shared_search 9 0x000055a1fe0278f3 in do_search 10 0x000055a1fe01525d in connection_dispatch_operation 11 0x000055a1fe01525d in connection_threadmain 12 0x00007f460123e568 in _pt_root 13 0x00007f4600bd8322 in start_thread 14 0x00007f4600367823 in clone #### Package Version and Platform all versions #### Steps to reproduce to be provided #### Actual results #### Expected results
True
performance search rate: checking if an entry is a referral is expensive - Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/51255 - Created at 2020-08-31 19:47:43 by [tbordaz](https://pagure.io/user/tbordaz) (@tbordaz) - Assigned to nobody --- #### Issue Description During search rate, the server checks that the base search is not a referral (attribute 'ref'). Most of the time this check is useless as the base entry is not a referral. The checking of the entry is expensive. Improving this tests improves searchrate by 5% (gdb) where 0 0x00007f46038ac470 in attrlist_find 1 0x00007f46038c6046 in slapi_entry_attr_find 2 0x00007f45f599f97e in check_entry_for_referral at ldap/servers/slapd/back-ldbm/findentry.c:35 3 0x00007f45f599fe5f in find_entry_internal_dn 4 0x00007f45f599fe5f in find_entry_internal at ldap/servers/slapd/back-ldbm/findentry.c:322 5 0x00007f45f599fe5f in find_entry_internal at ldap/servers/slapd/back-ldbm/findentry.c:302 6 0x00007f45f59a02bf in find_entry 7 0x00007f45f59d05b4 in ldbm_back_search 8 0x00007f46038fec2b in op_shared_search 9 0x000055a1fe0278f3 in do_search 10 0x000055a1fe01525d in connection_dispatch_operation 11 0x000055a1fe01525d in connection_threadmain 12 0x00007f460123e568 in _pt_root 13 0x00007f4600bd8322 in start_thread 14 0x00007f4600367823 in clone #### Package Version and Platform all versions #### Steps to reproduce to be provided #### Actual results #### Expected results
non_process
performance search rate checking if an entry is a referral is expensive cloned from pagure issue created at by tbordaz assigned to nobody issue description during search rate the server checks that the base search is not a referral attribute ref most of the time this check is useless as the base entry is not a referral the checking of the entry is expensive improving this tests improves searchrate by gdb where in attrlist find in slapi entry attr find in check entry for referral at ldap servers slapd back ldbm findentry c in find entry internal dn in find entry internal at ldap servers slapd back ldbm findentry c in find entry internal at ldap servers slapd back ldbm findentry c in find entry in ldbm back search in op shared search in do search in connection dispatch operation in connection threadmain in pt root in start thread in clone package version and platform all versions steps to reproduce to be provided actual results expected results
0