Dataset schema (column, dtype, length/class stats):

| column       | dtype         | stats          |
|--------------|---------------|----------------|
| Unnamed: 0   | int64         | 0 – 832k       |
| id           | float64       | 2.49B – 32.1B  |
| type         | stringclasses | 1 value        |
| created_at   | stringlengths | 19 – 19        |
| repo         | stringlengths | 7 – 112        |
| repo_url     | stringlengths | 36 – 141       |
| action       | stringclasses | 3 values       |
| title        | stringlengths | 1 – 744        |
| labels       | stringlengths | 4 – 574        |
| body         | stringlengths | 9 – 211k       |
| index        | stringclasses | 10 values      |
| text_combine | stringlengths | 96 – 211k      |
| label        | stringclasses | 2 values       |
| text         | stringlengths | 96 – 188k      |
| binary_label | int64         | 0 – 1          |
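One row of the dump can be modeled as a single record in column order. A minimal sketch, with field names taken verbatim from the schema above; the Python types are a best-effort mapping of the listed dtypes (`stringlengths`/`stringclasses` both become `str`), and the per-field comments are inferences from the sample rows, not documented facts:

```python
from dataclasses import dataclass

@dataclass
class IssueRow:
    """One row of the dump, in column order. `row_index` stands in
    for the unnamed first column ("Unnamed: 0")."""
    row_index: int     # int64, ranges 0 .. ~832k
    id: float          # float64 GitHub event id
    type: str          # single class: "IssuesEvent"
    created_at: str    # fixed 19-char timestamp, e.g. "2020-10-24 08:11:56"
    repo: str          # "owner/name"
    repo_url: str      # GitHub API URL of the repo
    action: str        # 3 classes, e.g. "opened" / "closed"
    title: str
    labels: str
    body: str
    index: str         # 10 classes; "1.0" in every sampled row
    text_combine: str  # appears to be title + " - " + body
    label: str         # "process" or "non_process"
    text: str          # lowercased, punctuation-stripped text
    binary_label: int  # 1 for "process", 0 for "non_process"
```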
Unnamed: 0: 11,185
id: 13,957,696,741
type: IssuesEvent
created_at: 2020-10-24 08:11:56
repo: alexanderkotsev/geoportal
repo_url: https://api.github.com/repos/alexanderkotsev/geoportal
action: opened
title: PT: Harvesting
labels: Geoportal Harvesting process PT - Portugal
body: Geoportal team, We kindly request that you start a harvesting to the Portuguese catalogue. We have made some updatings and we would like to see if they are some results of our work. Thank you! Best regards, Vanda Marcos
index: 1.0
text_combine: PT: Harvesting - Geoportal team, We kindly request that you start a harvesting to the Portuguese catalogue. We have made some updatings and we would like to see if they are some results of our work. Thank you! Best regards, Vanda Marcos
label: process
text: pt harvesting geoportal team we kindly request that you start a harvesting to the portuguese catalogue we have made some updatings and we would like to see if they are some results of our work thank you best regards vanda marcos
binary_label: 1
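Comparing text_combine with text in the row above suggests the text column is a cleaned copy: lowercased, with URLs, punctuation, and digits dropped and whitespace collapsed. A best-guess reconstruction of that cleanup step (not the dataset's actual preprocessing script):

```python
import re
import string

def normalize(raw: str) -> str:
    """Approximate the text-field cleanup seen in the rows:
    lowercase, drop URLs, punctuation, and digits, collapse whitespace.
    This is an inferred reconstruction, not the original pipeline."""
    out = raw.lower()
    out = re.sub(r"https?://\S+", " ", out)  # drop full URLs
    out = re.sub(rf"[{re.escape(string.punctuation)}\d]", " ", out)
    return re.sub(r"\s+", " ", out).strip()

print(normalize("PT: Harvesting - Geoportal team!"))
```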
Unnamed: 0: 14,132
id: 17,024,050,055
type: IssuesEvent
created_at: 2021-07-03 05:16:07
repo: abbey2017/wind-energy-analytics
repo_url: https://api.github.com/repos/abbey2017/wind-energy-analytics
action: closed
title: Write test code for power curve filtering module
labels: data preprocessing
body: This improves robustness of the code base when there are other contributors to the project.
index: 1.0
text_combine: Write test code for power curve filtering module - This improves robustness of the code base when there are other contributors to the project.
label: process
text: write test code for power curve filtering module this improves robustness of the code base when there are other contributors to the project
binary_label: 1
Unnamed: 0: 21,494
id: 29,659,023,833
type: IssuesEvent
created_at: 2023-06-10 00:35:45
repo: devssa/onde-codar-em-salvador
repo_url: https://api.github.com/repos/devssa/onde-codar-em-salvador
action: closed
title: [Remoto] Requirements Analyst na Coodesh
labels: SALVADOR PJ REST TYPESCRIPT REACT MOBILE REQUISITOS REMOTO GITHUB UMA DOCUMENTAÇÃO APIs GEOPROCESSAMENTO COLETA DE DADOS DASHBOARD Stale
body:
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/analista-de-requisitos-165400437?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A Geobyte está em busca de Requirements Analyst para compor seu time!</p> <p>Quem somos nós:</p> <p>Somos uma empresa de desenvolvimento de software, focada em soluções ambientais utilizando geoprocessamento. Nossos principais clientes são ONG’s e consultorias ambientais. Trabalhamos sempre em conjunto com nossos parceiros, estabelecendo uma comunicação transparente para entregar os melhores resultados, através de muito esforço e dedicação.</p> <p>Sobre a oportunidade:</p> <p>O analista de requisitos deve trabalhar em conjunto com os clientes e a equipe de desenvolvimento para levantar soluções técnicas escaláveis, confiáveis e adequadas aos produtos. Para isso é necessário que o profissional tenha conhecimento técnico, seja responsável, criativo e tenha visão estratégica.</p> <p>Responsabilidades:</p> <ul> <li>Levantar requisitos;</li> <li>Elaborar protótipos;</li> <li>Gerenciar e atualizar os levantamentos de requisitos;</li> <li>Elaborar documentação de uso do sistema;</li> <li>Participar de reuniões com o cliente.</li> </ul> ## Geobyte: <p>A Geobyte é uma empresa especializada em tecnologia, meio ambiente e Geoprocessamento que busca, por meio do seu conhecimento, agregar valor às soluções dos clientes. 
Além de dominar as tecnologias necessárias para desenvolver as soluções, possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento, que auxilia seus clientes a encontrar as melhores alternativas para seu projeto.&nbsp;</p> <p>Possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente, como análise de cobertura e uso do solo, análise e mapeamento social, criação de diversos sistemas webgis, aplicativo mobile para coleta de dados off-line em campo e posterior alimentação do sistema web, geração de relatórios e dashboard personalidades além de análises e filtros espaciais. Trabalhamos com empresas privadas, em consultorias ambientais, mineradores e outros. Setor público, com projetos nas secretarias de meio ambiente de Minas Gerais e Espírito Santo. Terceiro setor, em ONGs e Observatórios ambientais.</p><a href='https://coodesh.com/companies/geobyte'>Veja mais no site</a> ## Habilidades: - React Native - Typescript - REST APIs ## Local: 100% Remoto ## Requisitos: - Experiência em especificação de requisitos; - Forte habilidade de organização, já que precisará lidar com a organização de muitas documentações; - Experiência em conduzir reuniões técnicas e funcionais (interface com usuários e time técnico); - Conhecimento em levantamento de requisitos e documentação de projetos. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Requirements Analyst na Geobyte](https://coodesh.com/jobs/analista-de-requisitos-165400437?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. 
## Labels #### Alocação Remoto #### Regime PJ #### Categoria Gestão em TI
index: 1.0
text_combine:
[Remoto] Requirements Analyst na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/analista-de-requisitos-165400437?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A Geobyte está em busca de Requirements Analyst para compor seu time!</p> <p>Quem somos nós:</p> <p>Somos uma empresa de desenvolvimento de software, focada em soluções ambientais utilizando geoprocessamento. Nossos principais clientes são ONG’s e consultorias ambientais. Trabalhamos sempre em conjunto com nossos parceiros, estabelecendo uma comunicação transparente para entregar os melhores resultados, através de muito esforço e dedicação.</p> <p>Sobre a oportunidade:</p> <p>O analista de requisitos deve trabalhar em conjunto com os clientes e a equipe de desenvolvimento para levantar soluções técnicas escaláveis, confiáveis e adequadas aos produtos. Para isso é necessário que o profissional tenha conhecimento técnico, seja responsável, criativo e tenha visão estratégica.</p> <p>Responsabilidades:</p> <ul> <li>Levantar requisitos;</li> <li>Elaborar protótipos;</li> <li>Gerenciar e atualizar os levantamentos de requisitos;</li> <li>Elaborar documentação de uso do sistema;</li> <li>Participar de reuniões com o cliente.</li> </ul> ## Geobyte: <p>A Geobyte é uma empresa especializada em tecnologia, meio ambiente e Geoprocessamento que busca, por meio do seu conhecimento, agregar valor às soluções dos clientes. 
Além de dominar as tecnologias necessárias para desenvolver as soluções, possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento, que auxilia seus clientes a encontrar as melhores alternativas para seu projeto.&nbsp;</p> <p>Possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente, como análise de cobertura e uso do solo, análise e mapeamento social, criação de diversos sistemas webgis, aplicativo mobile para coleta de dados off-line em campo e posterior alimentação do sistema web, geração de relatórios e dashboard personalidades além de análises e filtros espaciais. Trabalhamos com empresas privadas, em consultorias ambientais, mineradores e outros. Setor público, com projetos nas secretarias de meio ambiente de Minas Gerais e Espírito Santo. Terceiro setor, em ONGs e Observatórios ambientais.</p><a href='https://coodesh.com/companies/geobyte'>Veja mais no site</a> ## Habilidades: - React Native - Typescript - REST APIs ## Local: 100% Remoto ## Requisitos: - Experiência em especificação de requisitos; - Forte habilidade de organização, já que precisará lidar com a organização de muitas documentações; - Experiência em conduzir reuniões técnicas e funcionais (interface com usuários e time técnico); - Conhecimento em levantamento de requisitos e documentação de projetos. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Requirements Analyst na Geobyte](https://coodesh.com/jobs/analista-de-requisitos-165400437?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. 
## Labels #### Alocação Remoto #### Regime PJ #### Categoria Gestão em TI
label: process
text:
requirements analyst na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a geobyte está em busca de requirements analyst para compor seu time quem somos nós somos uma empresa de desenvolvimento de software focada em soluções ambientais utilizando geoprocessamento nossos principais clientes são ong’s e consultorias ambientais trabalhamos sempre em conjunto com nossos parceiros estabelecendo uma comunicação transparente para entregar os melhores resultados através de muito esforço e dedicação sobre a oportunidade o analista de requisitos deve trabalhar em conjunto com os clientes e a equipe de desenvolvimento para levantar soluções técnicas escaláveis confiáveis e adequadas aos produtos para isso é necessário que o profissional tenha conhecimento técnico seja responsável criativo e tenha visão estratégica responsabilidades levantar requisitos elaborar protótipos gerenciar e atualizar os levantamentos de requisitos elaborar documentação de uso do sistema participar de reuniões com o cliente geobyte a geobyte é uma empresa especializada em tecnologia meio ambiente e geoprocessamento que busca por meio do seu conhecimento agregar valor às soluções dos clientes além de dominar as tecnologias necessárias para desenvolver as soluções possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento que auxilia seus clientes a encontrar as melhores alternativas para seu projeto nbsp possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente como análise de cobertura e uso do solo análise e mapeamento social criação de diversos sistemas webgis aplicativo mobile para coleta de dados off line em campo e posterior alimentação do sistema web geração de relatórios e dashboard personalidades além de análises e filtros 
espaciais trabalhamos com empresas privadas em consultorias ambientais mineradores e outros setor público com projetos nas secretarias de meio ambiente de minas gerais e espírito santo terceiro setor em ongs e observatórios ambientais habilidades react native typescript rest apis local remoto requisitos experiência em especificação de requisitos forte habilidade de organização já que precisará lidar com a organização de muitas documentações experiência em conduzir reuniões técnicas e funcionais interface com usuários e time técnico conhecimento em levantamento de requisitos e documentação de projetos como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria gestão em ti
binary_label: 1
Unnamed: 0: 2,650
id: 8,102,838,058
type: IssuesEvent
created_at: 2018-08-13 04:48:56
repo: openshiftio/openshift.io
repo_url: https://api.github.com/repos/openshiftio/openshift.io
action: closed
title: Jenkins is becoming Idle for pipeline build in OSIO launcher flow.
labels: SEV2-high area/architecture/build priority/P4 sprint/next team/build-cd type/bug
body: Due to this Jenkins issue, No build could not able to see the finish line. This is a critical issue from the build pipeline endpoint. Please check the below screenshot. ![jekins_idle](https://user-images.githubusercontent.com/11207106/38484498-754ac61e-3bf4-11e8-9dfd-f82601e17d9b.png)
index: 1.0
text_combine: Jenkins is becoming Idle for pipeline build in OSIO launcher flow. - Due to this Jenkins issue, No build could not able to see the finish line. This is a critical issue from the build pipeline endpoint. Please check the below screenshot. ![jekins_idle](https://user-images.githubusercontent.com/11207106/38484498-754ac61e-3bf4-11e8-9dfd-f82601e17d9b.png)
label: non_process
text: jenkins is becoming idle for pipeline build in osio launcher flow due to this jenkins issue no build could not able to see the finish line this is a critical issue from the build pipeline endpoint please check the below screenshot
binary_label: 0
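The rows pair label with binary_label consistently: "process" maps to 1 in the earlier rows and "non_process" maps to 0 here. The inferred mapping, as a sketch:

```python
def to_binary_label(label: str) -> int:
    """Map the 2-class `label` column onto `binary_label`
    ("process" -> 1, "non_process" -> 0), as inferred from the
    sampled rows above."""
    mapping = {"process": 1, "non_process": 0}
    return mapping[label]
```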
Unnamed: 0: 117,905
id: 11,958,644,318
type: IssuesEvent
created_at: 2020-04-04 18:51:20
repo: hocyadav/flipzon-backend
repo_url: https://api.github.com/repos/hocyadav/flipzon-backend
action: opened
title: Create git feature branch and complete feature and push code
labels: documentation
body: CASE 1. Create from github console (local machine) CASE 2. Create from github.com
index: 1.0
text_combine: Create git feature branch and complete feature and push code - CASE 1. Create from github console (local machine) CASE 2. Create from github.com
label: non_process
text: create git feature branch and complete feature and push code case create from github console local machine case create from github com
binary_label: 0
Unnamed: 0: 537,239
id: 15,725,695,528
type: IssuesEvent
created_at: 2021-03-29 10:18:30
repo: wso2/product-apim
repo_url: https://api.github.com/repos/wso2/product-apim
action: opened
title: Errors in Traffic Manager in distributed Setup
labels: Priority/Highest Type/Bug
body:
### Description: When starting TM in a distributed setup encountered the following error **StackTrace** : ``` TID: [-1234] [internal/data/v1] [2021-03-29 08:42:48,436] ERROR {org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor} - Error occurred while authenticating user: wso2apim java.lang.NullPointerException at org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor.authenticate(BasicAuthenticationInterceptor.java:137) at org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor.handleMessage(BasicAuthenticationInterceptor.java:105) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:296) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:220) at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:271) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:107) at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:102) at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690) at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) at 
org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) ``` **Deployment.toml** : https://notepad.pw/zzaff79 ### Affected Product Version: APIM 4.0.0
index: 1.0
text_combine:
Errors in Traffic Manager in distributed Setup - ### Description: When starting TM in a distributed setup encountered the following error **StackTrace** : ``` TID: [-1234] [internal/data/v1] [2021-03-29 08:42:48,436] ERROR {org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor} - Error occurred while authenticating user: wso2apim java.lang.NullPointerException at org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor.authenticate(BasicAuthenticationInterceptor.java:137) at org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor.handleMessage(BasicAuthenticationInterceptor.java:105) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:296) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:220) at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:271) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:107) at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:102) at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690) at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) at 
org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) ``` **Deployment.toml** : https://notepad.pw/zzaff79 ### Affected Product Version: APIM 4.0.0
label: non_process
text:
errors in traffic manager in distributed setup description when starting tm in a distributed setup encountered the following error stacktrace tid error org carbon apimgt rest api util interceptors auth basicauthenticationinterceptor error occurred while authenticating user java lang nullpointerexception at org carbon apimgt rest api util interceptors auth basicauthenticationinterceptor authenticate basicauthenticationinterceptor java at org carbon apimgt rest api util interceptors auth basicauthenticationinterceptor handlemessage basicauthenticationinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet doget abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core 
standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads 
taskthread wrappingrunnable run taskthread java at java lang thread run thread java deployment toml affected product version apim
binary_label: 0
Unnamed: 0: 15,809
id: 20,011,027,305
type: IssuesEvent
created_at: 2022-02-01 06:26:52
repo: dotnet/runtime
repo_url: https://api.github.com/repos/dotnet/runtime
action: opened
title: System.Diagnostics.Process work planned for .NET 7
labels: Epic area-System.Diagnostics.Process Priority:3 Team:Libraries
body: **This issue captures the planned work for .NET 7. This list is expected to change throughout the release cycle according to ongoing planning and discussions, with possible additions and subtractions to the scope.** ## Summary We are not planning any notable investments into the System.Diagnostics.Process area in .NET 7. We will address high-impact issues and resolve test issues. We will consider small community contributions that improve cross-platform compatibility. ## Planned for .NET 7 - [ ] #58492 - [ ] #44453 - [ ] #45017 - [ ] #63937 - [ ] #53095 - [ ] #49107 - [ ] #45685
index: 1.0
text_combine: System.Diagnostics.Process work planned for .NET 7 - **This issue captures the planned work for .NET 7. This list is expected to change throughout the release cycle according to ongoing planning and discussions, with possible additions and subtractions to the scope.** ## Summary We are not planning any notable investments into the System.Diagnostics.Process area in .NET 7. We will address high-impact issues and resolve test issues. We will consider small community contributions that improve cross-platform compatibility. ## Planned for .NET 7 - [ ] #58492 - [ ] #44453 - [ ] #45017 - [ ] #63937 - [ ] #53095 - [ ] #49107 - [ ] #45685
label: process
text: system diagnostics process work planned for net this issue captures the planned work for net this list is expected to change throughout the release cycle according to ongoing planning and discussions with possible additions and subtractions to the scope summary we are not planning any notable investments into the system diagnostics process area in net we will address high impact issues and resolve test issues we will consider small community contributions that improve cross platform compatibility planned for net
binary_label: 1
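In every sampled row, text_combine is the title, a literal " - " separator, and the unmodified body. The column can therefore be reproduced as below; this is an inference from the samples, not something documented in the dump:

```python
def combine(title: str, body: str) -> str:
    """Rebuild the text_combine column as observed in the rows:
    "<title> - <body>", body left untouched (inferred format)."""
    return f"{title} - {body}"
```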
Unnamed: 0: 3,349
id: 6,486,693,586
type: IssuesEvent
created_at: 2017-08-19 22:18:33
repo: Great-Hill-Corporation/quickBlocks
repo_url: https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
action: closed
title: One-Hit Wonder detection
labels: monitors-all status-inprocess type-enhancement
body: Akamai's web server used Bloom filters to remove one-hit wonders for their caching mechanism. They found that 75% of the entries in the cache were one-hit wonders. Suggestion: a multi-level pre-filter that only allows three or four hit wonders to enter the bloom filter cache. First, find how many addresses use 1,2,3,4 transactions and then quit. Second, only add addresses to the 'growing bloom' is added when it has hit more than four times.
index: 1.0
text_combine: One-Hit Wonder detection - Akamai's web server used Bloom filters to remove one-hit wonders for their caching mechanism. They found that 75% of the entries in the cache were one-hit wonders. Suggestion: a multi-level pre-filter that only allows three or four hit wonders to enter the bloom filter cache. First, find how many addresses use 1,2,3,4 transactions and then quit. Second, only add addresses to the 'growing bloom' is added when it has hit more than four times.
label: process
text: one hit wonder detection akamai s web server used bloom filters to remove one hit wonders for their caching mechanism they found that of the entries in the cache were one hit wonders suggestion a multi level pre filter that only allows three or four hit wonders to enter the bloom filter cache first find how many addresses use transactions and then quit second only add addresses to the growing bloom is added when it has hit more than four times
binary_label: 1
Unnamed: 0: 235,557
id: 18,051,950,368
type: IssuesEvent
created_at: 2021-09-19 22:18:07
repo: lcpulzone/tea_time
repo_url: https://api.github.com/repos/lcpulzone/tea_time
action: closed
title: Create Project Board
labels: documentation
body: - [x] Add issues for minimum requirements - [x] Create issues for step by step process - [x] Add any foreseeable 'sticky points'
index: 1.0
text_combine: Create Project Board - - [x] Add issues for minimum requirements - [x] Create issues for step by step process - [x] Add any foreseeable 'sticky points'
label: non_process
text: create project board add issues for minimum requirements create issues for step by step process add any foreseeable sticky points
binary_label: 0
314,558
27,010,704,232
IssuesEvent
2023-02-10 15:11:10
daisy/pipeline-ui
https://api.github.com/repos/daisy/pipeline-ui
closed
DAISY Pipeline has unusual behavior in Windows compared to other applications
tester-feedback accessibility
When I use an application, I normally launch it and then will close it with ALT + F4. However, when you use ALT + F4, the DAISY Pipeline will be minimized to the task bar. It no longer will appear in the ALT + Tab order to cycle between running applications. When you go to the task bar and select DAISY Pipeline, it says that DAISY Pipeline is running. If you press enter on it, it does not move to the foreground. One must go to the task bar, select DAISY Pipeline and then use the context menu to bring up the selections, and then you can get to settings or create a job. This will move the DAISY Pipeline to the foreground and put it back into the ALT + Tab order. I am not sure about this, but I normally use the NVDA key + B to go to the task bar. I am finding it difficult to get to the task bar after launching the DAISY Pipeline. I don't know where this behavior is coming from. It could be my configuration and I am using a beta of NVDA. I am using: DAISY Pipeline releases/tag/0.2.0-alpha Windows 11 2022 H2 NVDA Version: 2022.4rc1 (2022.4.0.27372)
1.0
DAISY Pipeline has unusual behavior in Windows compared to other applications - When I use an application, I normally launch it and then will close it with ALT + F4. However, when you use ALT + F4, the DAISY Pipeline will be minimized to the task bar. It no longer will appear in the ALT + Tab order to cycle between running applications. When you go to the task bar and select DAISY Pipeline, it says that DAISY Pipeline is running. If you press enter on it, it does not move to the foreground. One must go to the task bar, select DAISY Pipeline and then use the context menu to bring up the selections, and then you can get to settings or create a job. This will move the DAISY Pipeline to the foreground and put it back into the ALT + Tab order. I am not sure about this, but I normally use the NVDA key + B to go to the task bar. I am finding it difficult to get to the task bar after launching the DAISY Pipeline. I don't know where this behavior is coming from. It could be my configuration and I am using a beta of NVDA. I am using: DAISY Pipeline releases/tag/0.2.0-alpha Windows 11 2022 H2 NVDA Version: 2022.4rc1 (2022.4.0.27372)
non_process
daisy pipeline has unusual behavior in windows compared to other applications when i use an application i normally launch it and then will close it with alt however when you use alt the daisy pipeline will be minimized to the task bar it no longer will appear in the alt tab order to cycle between running applications when you go to the task bar and select daisy pipeline it says that daisy pipeline is running if you press enter on it it does not move to the foreground one must go to the task bar select daisy pipeline and then use the context menu to bring up the selections and then you can get to settings or create a job this will move the daisy pipeline to the foreground and put it back into the alt tab order i am not sure about this but i normally use the nvda key b to go to the task bar i am finding it difficult to get to the task bar after launching the daisy pipeline i don t know where this behavior is coming from it could be my configuration and i am using a beta of nvda i am using daisy pipeline releases tag alpha windows nvda version
0
318,595
23,728,090,911
IssuesEvent
2022-08-30 21:46:29
chiefonboarding/ChiefOnboarding
https://api.github.com/repos/chiefonboarding/ChiefOnboarding
closed
Dropbox integration
documentation enhancement
Some teams might want to add a new hire to their Dropbox teams. This integration will allow them to do so. An OAuth2 token is required, so OAuth2 needs to be set up for this. OAuth info: https://www.dropbox.com/developers/reference/auth-types#team Adding people to teams: https://www.dropbox.com/developers/documentation/http/teams#team-groups-members-add This also needs some extra info in the docs on how to set this up.
1.0
Dropbox integration - Some teams might want to add a new hire to their Dropbox teams. This integration will allow them to do so. An OAuth2 token is required, so OAuth2 needs to be set up for this. OAuth info: https://www.dropbox.com/developers/reference/auth-types#team Adding people to teams: https://www.dropbox.com/developers/documentation/http/teams#team-groups-members-add This also needs some extra info in the docs on how to set this up.
non_process
dropbox integration some teams might want to add a new hire to their dropbox teams this integration will allow them to do so must need token needs to be set up for this oauth info adding people to teams this also needs some extra info in the docs on how to set this up
0
58,179
8,233,593,158
IssuesEvent
2018-09-08 02:54:39
thoughtbot/factory_bot
https://api.github.com/repos/thoughtbot/factory_bot
opened
Document using string for class when defining a factory
documentation
If factory_bot definitions get loaded before the relevant model gets defined (see https://github.com/thoughtbot/factory_bot_rails/pull/264) the factory definition will raise a `NameError: uninitialized constant`: ``` factory :access_token, class: Doorkeeper::AccessToken ``` But using a string or symbol works fine, since we don't try to constantize it until later, when actually running the factory (https://github.com/thoughtbot/factory_bot/blob/master/lib/factory_bot/factory.rb#L22): ``` factory :access_token, class: "Doorkeeper::AccessToken" ``` I don't believe we have clear documentation about using a string or symbol for the class.
1.0
Document using string for class when defining a factory - If factory_bot definitions get loaded before the relevant model gets defined (see https://github.com/thoughtbot/factory_bot_rails/pull/264) the factory definition will raise a `NameError: uninitialized constant`: ``` factory :access_token, class: Doorkeeper::AccessToken ``` But using a string or symbol works fine, since we don't try to constantize it until later, when actually running the factory (https://github.com/thoughtbot/factory_bot/blob/master/lib/factory_bot/factory.rb#L22): ``` factory :access_token, class: "Doorkeeper::AccessToken" ``` I don't believe we have clear documentation about using a string or symbol for the class.
non_process
document using string for class when defining a factory if factory bot definitions get loaded before the relevant model gets defined see the factory definition will raise a nameerror uninitialized constant factory access token class doorkeeper accesstoken but using a string or symbol works fine since we don t try to constantize it until later when actually running the factory factory access token class doorkeeper accesstoken i don t believe we have clear documentation about using a string or symbol for the class
0
2,061
23,129,772,830
IssuesEvent
2022-07-28 09:19:41
celo-org/react-celo
https://api.github.com/repos/celo-org/react-celo
closed
Simplify Callback / setting event listeners
v4.1 focus:reliability
Between WC-Wallet in the @celo/wallet-wallet-connect package and the walletconnect connector, the way event listeners for walletconnect events get set is confusing. In some cases functions on the class get overwritten outside of the class (or so it seems?). Make this easier to follow: no overwriting or setting up callbacks post-init.
True
Simplify Callback / setting event listeners - Between WC-Wallet in the @celo/wallet-wallet-connect package and the walletconnect connector, the way event listeners for walletconnect events get set is confusing. In some cases functions on the class get overwritten outside of the class (or so it seems?). Make this easier to follow: no overwriting or setting up callbacks post-init.
non_process
simplify callback setting event listeners between wc wallet in the celo wallet wallet connect vs package and the walletconnect connector there way event listners for walletconnect events get set is confusing in some cases functions on the class get overwritten outside of the class or so it seems make this easier to follow no overwriting or setting up callbacks post init
0
2,730
5,619,667,312
IssuesEvent
2017-04-04 02:48:46
codefordenver/org
https://api.github.com/repos/codefordenver/org
closed
Clean up Google Drive folders
Process
Refer to David's document for cleaning up the folders: https://docs.google.com/document/d/1qNMiOaFh0AC0D0YvEt4vPbS-Px48tjGyM5cTf1U3kLY/edit?usp=sharing 1) Mock up how we want to organize the drive folders (link above) 2) reorganize the ORG folder only 3) Come back and finalize how we want it organized 4) Disseminate information to the project teams
1.0
Clean up Google Drive folders - Refer to David's document for cleaning up the folders: https://docs.google.com/document/d/1qNMiOaFh0AC0D0YvEt4vPbS-Px48tjGyM5cTf1U3kLY/edit?usp=sharing 1) Mock up how we want to organize the drive folders (link above) 2) reorganize the ORG folder only 3) Come back and finalize how we want it organized 4) Disseminate information to the project teams
process
clean up google drive folders refer to david s document for cleaning up the folders mock up how we want to organize the drive folders link above reorganize the org folder only come back and finalize how we want it organized disseminate information to the project teams
1
134,043
5,219,391,621
IssuesEvent
2017-01-26 18:58:08
modxcms/revolution
https://api.github.com/repos/modxcms/revolution
opened
Improve reporting of bad links so they are easier to find
area-core enhancement priority-2-high
### Summary Currently, invalid link tags or calls to makeUrl with non-existent resources log a generic error that makes it impossible to find the source. This should be improved so finding broken links is easier. ### Step to reproduce Create a link tag, e.g. `[[~0]]`, or call the `$modx->makeUrl(0)` function with a resource id that does not exist. ### Observed behavior No information is logged for link tags and makeUrl calls only provide a generic error message that is not useful for finding the broken link. ### Expected behavior Log information about the location of the broken link. ### Environment ALL environments.
1.0
Improve reporting of bad links so they are easier to find - ### Summary Currently, invalid link tags or calls to makeUrl with non-existent resources log a generic error that makes it impossible to find the source. This should be improved so finding broken links is easier. ### Step to reproduce Create a link tag, e.g. `[[~0]]`, or call the `$modx->makeUrl(0)` function with a resource id that does not exist. ### Observed behavior No information is logged for link tags and makeUrl calls only provide a generic error message that is not useful for finding the broken link. ### Expected behavior Log information about the location of the broken link. ### Environment ALL environments.
non_process
improve reporting of bad links so they are easier to find summary currently invalid link tags or calls to makeurl with non existent resources log a generic error that makes it impossible to find the source this should be improved so finding broken links is easier step to reproduce create a link tag e g or call the modx makeurl function with a resource id that does not exist observed behavior no information is logged for link tags and makeurl calls only provide a generic error message that is not useful for finding the broken link expected behavior log information about the location of the broken link environment all environments
0
179,871
21,582,958,034
IssuesEvent
2022-05-02 20:52:22
timf-app-demo/SecurityShepherd
https://api.github.com/repos/timf-app-demo/SecurityShepherd
opened
json-simple-1.1.1.jar: 1 vulnerabilities (highest severity is: 5.5)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-simple-1.1.1.jar</b></p></summary> <p></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.10/junit-4.10.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/SecurityShepherd/commit/fc868a9e9701052c214a6e65c10a56599beb9f7e">fc868a9e9701052c214a6e65c10a56599beb9f7e</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2020-15250](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | junit-4.10.jar | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-15250</summary> ### Vulnerable Library - <b>junit-4.10.jar</b></p> <p>JUnit is a regression testing framework written by Erich Gamma and Kent Beck. 
It is used by the developer who implements unit tests in Java.</p> <p>Library home page: <a href="http://junit.org">http://junit.org</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.10/junit-4.10.jar</p> <p> Dependency Hierarchy: - json-simple-1.1.1.jar (Root Library) - :x: **junit-4.10.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/SecurityShepherd/commit/fc868a9e9701052c214a6e65c10a56599beb9f7e">fc868a9e9701052c214a6e65c10a56599beb9f7e</a></p> <p>Found in base branch: <b>dev</b></p> </p> <p></p> ### Vulnerability Details <p> In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. 
For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory. <p>Publish Date: 2020-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250>CVE-2020-15250</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp">https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp</a></p> <p>Release Date: 2020-10-12</p> <p>Fix Resolution: junit:junit:4.13.1</p> </p> <p></p> </details> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"junit","packageName":"junit","packageVersion":"4.10","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.googlecode.json-simple:json-simple:1.1.1;junit:junit:4.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"junit:junit:4.13.1","isBinary":false}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2020-15250","vulnerabilityDetails":"In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix like systems, the system\u0027s temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. 
This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> -->
True
json-simple-1.1.1.jar: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-simple-1.1.1.jar</b></p></summary> <p></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.10/junit-4.10.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/SecurityShepherd/commit/fc868a9e9701052c214a6e65c10a56599beb9f7e">fc868a9e9701052c214a6e65c10a56599beb9f7e</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2020-15250](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | junit-4.10.jar | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-15250</summary> ### Vulnerable Library - <b>junit-4.10.jar</b></p> <p>JUnit is a regression testing framework written by Erich Gamma and Kent Beck. 
It is used by the developer who implements unit tests in Java.</p> <p>Library home page: <a href="http://junit.org">http://junit.org</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.10/junit-4.10.jar</p> <p> Dependency Hierarchy: - json-simple-1.1.1.jar (Root Library) - :x: **junit-4.10.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/SecurityShepherd/commit/fc868a9e9701052c214a6e65c10a56599beb9f7e">fc868a9e9701052c214a6e65c10a56599beb9f7e</a></p> <p>Found in base branch: <b>dev</b></p> </p> <p></p> ### Vulnerability Details <p> In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. 
For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory. <p>Publish Date: 2020-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250>CVE-2020-15250</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp">https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp</a></p> <p>Release Date: 2020-10-12</p> <p>Fix Resolution: junit:junit:4.13.1</p> </p> <p></p> </details> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"junit","packageName":"junit","packageVersion":"4.10","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.googlecode.json-simple:json-simple:1.1.1;junit:junit:4.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"junit:junit:4.13.1","isBinary":false}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2020-15250","vulnerabilityDetails":"In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix like systems, the system\u0027s temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. 
This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> -->
non_process
json simple jar vulnerabilities highest severity is vulnerable library json simple jar path to dependency file pom xml path to vulnerable library home wss scanner repository junit junit junit jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium junit jar transitive n a details cve vulnerable library junit jar junit is a regression testing framework written by erich gamma and kent beck it is used by the developer who implements unit tests in java library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository junit junit junit jar dependency hierarchy json simple jar root library x junit jar vulnerable library found in head commit a href found in base branch dev vulnerability details in from version and before the test rule temporaryfolder contains a local information disclosure vulnerability on unix like systems the system s temporary directory is shared between all users on that system because of this when files and directories are written into this directory they are by default readable by other users on that same system this vulnerability does not allow other users to overwrite the contents of these directories or files this is purely an information disclosure vulnerability this vulnerability impacts you if the junit tests write sensitive information like api keys or passwords into the temporary folder and the junit tests execute in an environment where the os has other untrusted users because certain jdk file system apis were only added in jdk this this fix is dependent upon the version of the jdk you are using for java and higher users this vulnerability is fixed in for java and lower users no patch is available you must use the workaround below if you are unable to patch or are stuck running on java specifying the java io tmpdir system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability 
for more information including an example of vulnerable code see the referenced github security advisory publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution junit junit istransitivedependency true dependencytree com googlecode json simple json simple junit junit isminimumfixversionavailable true minimumfixversion junit junit isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in from version and before the test rule temporaryfolder contains a local information disclosure vulnerability on unix like systems the system temporary directory is shared between all users on that system because of this when files and directories are written into this directory they are by default readable by other users on that same system this vulnerability does not allow other users to overwrite the contents of these directories or files this is purely an information disclosure vulnerability this vulnerability impacts you if the junit tests write sensitive information like api keys or passwords into the temporary folder and the junit tests execute in an environment where the os has other untrusted users because certain jdk file system apis were only added in jdk this this fix is dependent upon the version of the jdk you are using for java and higher users this vulnerability is fixed in for java and lower users no patch is available you must use the workaround below if you are unable to patch or are stuck running on java specifying the java io tmpdir system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability for more information including an example of 
vulnerable code see the referenced github security advisory vulnerabilityurl
0
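The `java.io.tmpdir` workaround quoted in the CVE text above amounts to pointing the JVM at a directory owned exclusively by the executing user. A minimal sketch under stated assumptions (the jar name is a placeholder, and the actual `java` invocation is left commented out since it requires a JVM):

```python
import os
import stat
import tempfile

# mkdtemp creates a directory readable, writable, and searchable only
# by the creating user (mode 0o700), which is exactly what the JUnit
# advisory's workaround needs.
private_tmp = tempfile.mkdtemp()
mode = stat.S_IMODE(os.stat(private_tmp).st_mode)

# Illustrative JVM invocation; "your-tests.jar" is a placeholder name.
cmd = ["java", f"-Djava.io.tmpdir={private_tmp}", "-jar", "your-tests.jar"]
# subprocess.run(cmd, check=True)  # commented out: requires a JVM on PATH
```

With the temp dir private to the executing user, TemporaryFolder's files are no longer readable by other local users, which is the whole point of the workaround for users stuck below JUnit 4.13.1.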
21,589
29,975,251,478
IssuesEvent
2023-06-24 00:42:15
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Remoto] DevOps Engineer na Coodesh
SALVADOR PJ INFRAESTRUTURA PYTHON JIRA STARTUP DOCKER KUBERNETES DEVOPS AWS REQUISITOS REMOTO PROCESSOS INOVAÇÃO GITLAB GITHUB SHELL CI CD AZURE SEGURANÇA UMA C R LIDERANÇA ECS VIRTUALIZAÇÃO TERRAFORM MANUTENÇÃO CONFLUENCE GRPC INTELIGÊNCIA ARTIFICIAL PIPELINE BITBUCKET SUPORTE Stale
## Job description: This is a position from a partner of the Coodesh platform; by applying you will have access to complete information about the company and its benefits. Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/devops-engineer-205714626?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋 <p>Kor is looking for a Senior DevOps Engineer to join its team!</p> <p>We are a LawTech startup growing at a very fast pace, working on the resolution of judicial and extrajudicial disputes between companies and their consumers, as well as on credit recovery! We always think about client retention and preserving the company's image, and we promote fast automated negotiation between the parties via Artificial Intelligence, that perfect combination of technological innovation, services and integration with the business.</p> <p>Your giant mission:</p> <ul> <li>Set up and manage AWS infrastructures for the applications;</li> <li>Create and maintain infrastructure as code with Terraform;</li> <li>Monitor the performance and availability of the applications in the AWS cloud;</li> <li>Create labs, POV (proof of value) and POC (proof of concept);</li> <li>Manage the infrastructure and monitor the applications, creating continuous integration processes within KOR, being responsible for guarding and caring for KOR's AWS cloud architecture;</li> <li>Work in close collaboration with the development team to guarantee the integration of the infrastructure and the applications;</li> <li>Offer technical support to the IT team on matters related to the AWS platform; and</li> <li>Survey the current scenario (as is).</li> </ul> <p>Here we have:</p> <ul> <li>The opportunity to build your story at a GIANT company;</li> <li>A sensational environment with no bureaucracy;</li> <li>HUMANIZED leadership;</li> <li>Meritocratic and humanized growth;</li>
<li>100% remote format;</li> <li>Working hours from 09:00 to 18:00.</li> </ul> <p>All job applications at KOR are considered without distinction of gender, sexual orientation, ethnicity, culture, origin, religion, disability, age, etc.&nbsp;</p> ## KOR Solutions: <p>KOR helps companies in every field negotiate countless lawsuits and disputes with their clients, doing so in a convenient, secure and legally valid way. Once the settlement proposal has been accepted by the opposing party that sued the company, the system automatically generates the draft agreement and the contract is executed. Both parties' signatures can be physical or digital, facilitating the agreement as is convenient. All of this can be concluded on the same day! Beyond automated negotiation, we offer several other automation, intelligence and analysis solutions for the other stages and processes of companies' legal departments.</p></p> ## Skills: - CI/CD - AWS - Docker - Kubernetes ## Location: 100% Remote ## Requirements: - Completed degree in Information Systems or Computer Engineering; - Solid knowledge of deploying and managing AWS infrastructures; - Advanced knowledge of automation, virtualization and information security; - Strong communication and team-collaboration skills; - Experience with integrations (gateways and tools, APIs, queues, gRPC); - Experience with CI/CD (Github Action, Azure Devops, CircleCI, Gitlab, Bitbucket Pipeline, etc.); - Experience with microservices architecture; - Experience with containers (Docker, ECS, Kubernetes); - Experience with code versioning; - Experience with agile tools (Jira, Confluence, Azure Devops); and - Experience with FaaS (AWS Lambda). ## Nice to have: - Experience developing scripts (Shell, Python, etc.). ## Benefits: - Gympass.
## How to apply: Apply exclusively through the Coodesh platform at the following link: [DevOps Engineer na KOR Solutions](https://coodesh.com/jobs/devops-engineer-205714626?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the position you applied for. This will notify the **Recruiter** responsible for the process at the company. ## Labels #### Allocation Remote #### Regime PJ #### Category DevOps
1.0
[Remoto] DevOps Engineer na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/devops-engineer-205714626?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A Kor está em busca de DevOps Engineer Sênior para compor seu time!</p> <p>Somos uma startup LawTech em crescimento super acelerado, que atua nas resoluções de conflitos judiciais e extrajudiciais entre as empresas e seus consumidores, além da recuperação de crédito! Pensamos sempre na manutenção dos clientes e na preservação da imagem da empresa e promovemos uma rápida negociação automatizada entre as partes, via Inteligência Artificial, aquela combinação perfeita entre inovação tecnológica, serviços e integração com o negócio.</p> <p>Sua gigante missão:</p> <ul> <li>Configurar e gerenciar as infraestruturas AWS para as aplicações;</li> <li>Criar e manter infraestrutura como código Terraform;</li> <li>Monitorar o desempenho e a disponibilidade das aplicações na nuvem AWS;</li> <li>Criar laboratórios, POV (prova de valor) e POC (prova de conceito);</li> <li>Gerenciar a infraestrutura e monitorar as aplicações, criando processos de integração contínua dentro da KOR, responsável por guardar e cuidar da arquitetura cloud AWS da KOR;</li> <li>Trabalhar em estreita colaboração com o time de desenvolvimento para garantir a integração da infraestrutura e das aplicações;</li> <li>Oferecer suporte técnico à equipe de TI em questões relacionadas à plataforma AWS e;</li> <li>Realizar levantamento do cenário atual (as is).</li> </ul> <p>Por aqui temos:</p> <ul> <li>Oportunidade de construir a sua história em uma empresa GIGANTE;</li> <li>Ambiente sensacional e sem burocras;</li> <li>Liderança HUMANIZADA;</li> <li>Crescimento 
meritocrático e humanizado;</li> <li>Formato 100% remoto;</li> <li>Horário das 09h00 às 18h00.</li> </ul> <p>Todas as aplicações de vagas na KOR são consideradas sem distinção de gênero, orientação sexual, etnia, cultura, origem, religião, deficiência, idade etc.&nbsp;</p> ## KOR Solutions: <p>A KOR auxilia empresas de todos os ramos a negociar incontáveis processos e disputas de seus clientes, fazendo isso de forma conveniente, segura e com validade jurídica. Uma vez que a proposta de acordo tenha sido aceita pelo parte contrária que processou a empresa, o sistema gera automaticamente a minuta do acordo e o contrato é executado. A assinatura de ambas as partes podem ser físicas ou digitais, facilitando o acordo conforme conveniência. Tudo isso pode ser concluído no mesmo dia! Além da negociação automatizada, oferecemos diversas outras soluções de automação, inteligência e análise nas demais etapas e processos do setor jurídico das empresas.</p></p> ## Habilidades: - CI/CD - AWS - Docker - Kubernetes ## Local: 100% Remoto ## Requisitos: - Graduação Completa em Sistemas da Informação ou Engenharia da Computação; - Conhecimentos sólidos na implantação e gerenciamento de infraestruturas AWS; - Conhecimentos avançados em automação, virtualização e segurança da informação; - Forte habilidade de comunicação e colaboração em equipe; - Experiência em Integrações (Gateways e ferramentas, API, Filas, gRPC); - Experiência em CI/CD (Github Action, Azure Devops, CircleCI, Gitlab, Bitbucket Pipeline etc.); - Experiência em Arquitetura de micro serviços; - Experiência em Containers (Docker, ECS, Kubernetes); - Experiência em Versionamento de código; - Experiência em Ferramentas Ágeis (Jira, Confluence, Azure Devops) e; - Experiência em FaaS (AWS Lambda). ## Diferenciais: - Experiência em desenvolvimento de scripts (Shell, Python etc.). ## Benefícios: - Gympass. 
## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [DevOps Engineer na KOR Solutions](https://coodesh.com/jobs/devops-engineer-205714626?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Regime PJ #### Categoria DevOps
process
devops engineer na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a kor está em busca de devops engineer sênior para compor seu time somos uma startup lawtech em crescimento super acelerado que atua nas resoluções de conflitos judiciais e extrajudiciais entre as empresas e seus consumidores além da recuperação de crédito pensamos sempre na manutenção dos clientes e na preservação da imagem da empresa e promovemos uma rápida negociação automatizada entre as partes via inteligência artificial aquela combinação perfeita entre inovação tecnológica serviços e integração com o negócio sua gigante missão configurar e gerenciar as infraestruturas aws para as aplicações criar e manter infraestrutura como código terraform monitorar o desempenho e a disponibilidade das aplicações na nuvem aws criar laboratórios pov prova de valor e poc prova de conceito gerenciar a infraestrutura e monitorar as aplicações criando processos de integração contínua dentro da kor responsável por guardar e cuidar da arquitetura cloud aws da kor trabalhar em estreita colaboração com o time de desenvolvimento para garantir a integração da infraestrutura e das aplicações oferecer suporte técnico à equipe de ti em questões relacionadas à plataforma aws e realizar levantamento do cenário atual as is por aqui temos oportunidade de construir a sua história em uma empresa gigante ambiente sensacional e sem burocras liderança humanizada crescimento meritocrático e humanizado formato remoto horário das às todas as aplicações de vagas na kor são consideradas sem distinção de gênero orientação sexual etnia cultura origem religião deficiência idade etc nbsp kor solutions a kor auxilia empresas de todos os ramos a negociar incontáveis processos e disputas de seus clientes fazendo isso de 
forma conveniente segura e com validade jurídica uma vez que a proposta de acordo tenha sido aceita pelo parte contrária que processou a empresa o sistema gera automaticamente a minuta do acordo e o contrato é executado a assinatura de ambas as partes podem ser físicas ou digitais facilitando o acordo conforme conveniência tudo isso pode ser concluído no mesmo dia além da negociação automatizada oferecemos diversas outras soluções de automação inteligência e análise nas demais etapas e processos do setor jurídico das empresas habilidades ci cd aws docker kubernetes local remoto requisitos graduação completa em sistemas da informação ou engenharia da computação conhecimentos sólidos na implantação e gerenciamento de infraestruturas aws conhecimentos avançados em automação virtualização e segurança da informação forte habilidade de comunicação e colaboração em equipe experiência em integrações gateways e ferramentas api filas grpc experiência em ci cd github action azure devops circleci gitlab bitbucket pipeline etc experiência em arquitetura de micro serviços experiência em containers docker ecs kubernetes experiência em versionamento de código experiência em ferramentas ágeis jira confluence azure devops e experiência em faas aws lambda diferenciais experiência em desenvolvimento de scripts shell python etc benefícios gympass como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria devops
1
9,007
12,121,654,420
IssuesEvent
2020-04-22 09:39:13
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Support for group boxes
Feature Request Processing
Author Name: **Magnus Nilsson** (Magnus Nilsson) Original Redmine Issue: [5455](https://issues.qgis.org/issues/5455) Redmine category:processing/modeller Assignee: Victor Olaya --- Add support for adding boxes around a set of tools that perform a task together, i.e. creating groups of tools in the modeler. That would help organize the model better. --- Related issue(s): #26886 (duplicates) Redmine related issue(s): [19056](https://issues.qgis.org/issues/19056) ---
1.0
Support for group boxes - Author Name: **Magnus Nilsson** (Magnus Nilsson) Original Redmine Issue: [5455](https://issues.qgis.org/issues/5455) Redmine category:processing/modeller Assignee: Victor Olaya --- Add support for adding boxes around a set of tools that perform a task together, i.e. creating groups of tools in the modeler. That would help organize the model better. --- Related issue(s): #26886 (duplicates) Redmine related issue(s): [19056](https://issues.qgis.org/issues/19056) ---
process
support for group boxes author name magnus nilsson magnus nilsson original redmine issue redmine category processing modeller assignee victor olaya add support for adding boxes around a set of tools that perform a task together i e creating groups of tools in the modeler that would help organize the model better related issue s duplicates redmine related issue s
1
1,681
4,323,688,241
IssuesEvent
2016-07-25 17:47:24
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
reopened
Depth issue
bug preprocessor
Looking into #1175 , by reverting commit 3c7e60b744567f6f39a9c611bce6dcaadcd52bc6, I obtained the following error from matlab when trying to run Christiano-Motto-Rostagno model (the one in subfolder ```figure4``` of the archive available [here](http://faculty.wcas.northwestern.edu/~lchrist/research/ECB/risk_shocks/20100922_data.zip) ``` Error: File: cmr_static.m Line: 1292 Column: 16331 Nesting of {, [, and ( cannot exceed a depth of 32. ``` @MichelJuillard May this be a consequence of your patch about on auxiliary variables in steady state and static files (see #1133)?
1.0
Depth issue - Looking into #1175 , by reverting commit 3c7e60b744567f6f39a9c611bce6dcaadcd52bc6, I obtained the following error from matlab when trying to run Christiano-Motto-Rostagno model (the one in subfolder ```figure4``` of the archive available [here](http://faculty.wcas.northwestern.edu/~lchrist/research/ECB/risk_shocks/20100922_data.zip) ``` Error: File: cmr_static.m Line: 1292 Column: 16331 Nesting of {, [, and ( cannot exceed a depth of 32. ``` @MichelJuillard May this be a consequence of your patch about on auxiliary variables in steady state and static files (see #1133)?
process
depth issue looking into by reverting commit i obtained the following error from matlab when trying to run christiano motto rostagno model the one in subfolder of the archive available error file cmr static m line column nesting of and cannot exceed a depth of micheljuillard may this be a consequence of your patch about on auxiliary variables in steady state and static files see
1
16,328
20,983,784,015
IssuesEvent
2022-03-28 23:18:50
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
closed
The latest 10.0 bugfix produces different output at seneca
type: bug priority: medium alert: NEED ACCOUNT KEY component: CI/CD requestor: METplus Team MET: PreProcessing Tools (Point)
*Replace italics below with details for this issue.* ## Describe the Problem ## *Provide a clear and concise description of the bug here.* ### Expected Behavior ### *Provide a clear and concise description of what you expected to happen here.* The latest build should produces the same outputs with the reference build 1. test_output/ioda2nc/[odb_sonde_16019_all.nc](http://odb_sonde_16019_all.nc/) main_v10.0-ref: hdr_typ_table = "\001" ; main_v10.0: hdr_typ_table = "pNj.\374\177" ; 2. test_output/pb2nc/[nam.20210311.t00z.prepbufr.tm00.pbl.nc](http://nam.20210311.t00z.prepbufr.tm00.pbl.nc/) main_v10.0-ref: nobs = 293 ; main_v10.0: nobs = 245 ; this difference was detected 03-17-2022 (did not happen before) ### Environment ### Describe your runtime environment: *1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop): senaca* *2. OS: (e.g. RedHat Linux, MacOS): linux* *3. Software version number(s) 10.0 bugfix* ### To Reproduce ### Describe the steps to reproduce the behavior: *1. Compare unit test outputs (by nightly build)* *2. 
See error: /d1/projects/MET/MET_regression/main_v10.0/NB2022031?/test_regression_2022031?.log* *Post relevant sample data following these instructions:* *https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required: None - [x] Select **scientist(s)** or **no scientist** required: None ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Organization** level **Project** for support of the current coordinated release - [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. 
- [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
1.0
The latest 10.0 bugfix produces different output at seneca - *Replace italics below with details for this issue.* ## Describe the Problem ## *Provide a clear and concise description of the bug here.* ### Expected Behavior ### *Provide a clear and concise description of what you expected to happen here.* The latest build should produces the same outputs with the reference build 1. test_output/ioda2nc/[odb_sonde_16019_all.nc](http://odb_sonde_16019_all.nc/) main_v10.0-ref: hdr_typ_table = "\001" ; main_v10.0: hdr_typ_table = "pNj.\374\177" ; 2. test_output/pb2nc/[nam.20210311.t00z.prepbufr.tm00.pbl.nc](http://nam.20210311.t00z.prepbufr.tm00.pbl.nc/) main_v10.0-ref: nobs = 293 ; main_v10.0: nobs = 245 ; this difference was detected 03-17-2022 (did not happen before) ### Environment ### Describe your runtime environment: *1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop): senaca* *2. OS: (e.g. RedHat Linux, MacOS): linux* *3. Software version number(s) 10.0 bugfix* ### To Reproduce ### Describe the steps to reproduce the behavior: *1. Compare unit test outputs (by nightly build)* *2. 
See error: /d1/projects/MET/MET_regression/main_v10.0/NB2022031?/test_regression_2022031?.log* *Post relevant sample data following these instructions:* *https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required: None - [x] Select **scientist(s)** or **no scientist** required: None ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Organization** level **Project** for support of the current coordinated release - [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. 
- [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
process
the latest bugfix produces different output at seneca replace italics below with details for this issue describe the problem provide a clear and concise description of the bug here expected behavior provide a clear and concise description of what you expected to happen here the latest build should produces the same outputs with the reference build test output main ref hdr typ table main hdr typ table pnj test output main ref nobs main nobs this difference was detected did not happen before environment describe your runtime environment machine e g hpc name linux workstation mac laptop senaca os e g redhat linux macos linux software version number s bugfix to reproduce describe the steps to reproduce the behavior compare unit test outputs by nightly build see error projects met met regression main test regression log post relevant sample data following these instructions relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required none select scientist s or no scientist required none labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into main pull request bugfix main define the pull 
request metadata as permissions allow select reviewer s and linked issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version close this issue
1
20,257
6,018,477,085
IssuesEvent
2017-06-07 12:26:57
mozilla-mobile/focus-android
https://api.github.com/repos/mozilla-mobile/focus-android
opened
Extract locale switcher into standalone library
code
This is hard to get right and to integrate. By moving this code into a standalone library with a clean interface integrating this should get easier.
1.0
Extract locale switcher into standalone library - This is hard to get right and to integrate. By moving this code into a standalone library with a clean interface integrating this should get easier.
non_process
extract locale switcher into standalone library this is hard to get right and to integrate by moving this code into a standalone library with a clean interface integrating this should get easier
0
17,907
23,889,798,986
IssuesEvent
2022-09-08 10:32:20
galasa-dev/projectmanagement
https://api.github.com/repos/galasa-dev/projectmanagement
closed
Test dataset load library not added to DFHRPL of SEM complex JCL
Manager: zOS Manager: SEM Conversion Process
When a @ZosProgram is compiled it goes to a load library of USERID.GALASA.RUNID.LOAD This is not concatenated in the DFHRPL of the SEM complex JCL
1.0
Test dataset load library not added to DFHRPL of SEM complex JCL - When a @ZosProgram is compiled it goes to a load library of USERID.GALASA.RUNID.LOAD This is not concatenated in the DFHRPL of the SEM complex JCL
process
test dataset load library not added to dfhrpl of sem complex jcl when a zosprogram is compiled it goes to a load library of userid galasa runid load this is not concatenated in the dfhrpl of the sem complex jcl
1
14,150
17,035,883,170
IssuesEvent
2021-07-05 07:02:39
osstotalsoft/nbb
https://api.github.com/repos/osstotalsoft/nbb
closed
NBB.ProcessManager documentation
process manager
Move documentation from NBB.docs to package readme.md, if any. The documentation should also include samples in order to help anyone in the getting started process.
1.0
NBB.ProcessManager documentation - Move documentation from NBB.docs to package readme.md, if any. The documentation should also include samples in order to help anyone in the getting started process.
process
nbb processmanager documentation move documentation from nbb docs to package readme md if any the documentation should also include samples in order to help anyone in the getting started process
1
120,180
17,644,026,890
IssuesEvent
2021-08-20 01:29:54
AkshayMukkavilli/Tensorflow
https://api.github.com/repos/AkshayMukkavilli/Tensorflow
opened
CVE-2021-37643 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
security vulnerability
## CVE-2021-37643 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /Tensorflow/src/requirements.txt</p> <p>Path to vulnerable library: teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. If a user does not provide a valid padding value to `tf.raw_ops.MatrixDiagPartOp`, then the code triggers a null pointer dereference (if input is empty) or produces invalid behavior, ignoring all values after the first. The [implementation](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/kernels/linalg/matrix_diag_op.cc#L89) reads the first value from a tensor buffer without first checking that the tensor has values to read from. We have patched the issue in GitHub commit 482da92095c4d48f8784b1f00dda4f81c28d2988. The fix will be included in TensorFlow 2.6.0. 
We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range. <p>Publish Date: 2021-08-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37643>CVE-2021-37643</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fcwc-p4fc-c5cc">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fcwc-p4fc-c5cc</a></p> <p>Release Date: 2021-08-12</p> <p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-37643 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-37643 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /Tensorflow/src/requirements.txt</p> <p>Path to vulnerable library: teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. If a user does not provide a valid padding value to `tf.raw_ops.MatrixDiagPartOp`, then the code triggers a null pointer dereference (if input is empty) or produces invalid behavior, ignoring all values after the first. The [implementation](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/kernels/linalg/matrix_diag_op.cc#L89) reads the first value from a tensor buffer without first checking that the tensor has values to read from. 
We have patched the issue in GitHub commit 482da92095c4d48f8784b1f00dda4f81c28d2988. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range. <p>Publish Date: 2021-08-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37643>CVE-2021-37643</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fcwc-p4fc-c5cc">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fcwc-p4fc-c5cc</a></p> <p>Release Date: 2021-08-12</p> <p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file tensorflow src requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning if a user does not provide a valid padding value to tf raw ops matrixdiagpartop then the code triggers a null pointer dereference if input is empty or produces invalid behavior ignoring all values after the first the reads the first value from a tensor buffer without first checking that the tensor has values to read from we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
338,714
24,596,463,692
IssuesEvent
2022-10-14 08:45:29
ycherkes/ObjectDumper
https://api.github.com/repos/ycherkes/ObjectDumper
opened
Need Documentation Help
documentation help wanted
I'm not a native English speaker, so it would be nice if someone can write simple and useful documentation for this tool.
1.0
Need Documentation Help - I'm not a native English speaker, so it would be nice if someone can write simple and useful documentation for this tool.
non_process
need documentation help i m not a native english speaker so it would be nice if someone can write simple and useful documentation for this tool
0
18,836
24,742,063,967
IssuesEvent
2022-10-21 06:27:58
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Watershed delineation Error
Feedback Processing Bug
### Feature description While doing Watershed Delineation using Qgis we are able to do upto Strahler Order only. After Strahler Order while processing Upslope Area we are getting an error note as shown in the uploaded image. Kindly advise or resolve this issue ![Upslope Area - Error note](https://user-images.githubusercontent.com/116272460/196930902-f551a726-6831-4abd-ac7e-d62618f41cc2.jpg) ### Additional context _No response_
1.0
Watershed delineation Error - ### Feature description While doing Watershed Delineation using Qgis we are able to do upto Strahler Order only. After Strahler Order while processing Upslope Area we are getting an error note as shown in the uploaded image. Kindly advise or resolve this issue ![Upslope Area - Error note](https://user-images.githubusercontent.com/116272460/196930902-f551a726-6831-4abd-ac7e-d62618f41cc2.jpg) ### Additional context _No response_
process
watershed delineation error feature description while doing watershed delineation using qgis we are able to do upto strahler order only after strahler order while processing upslope area we are getting an error note as shown in the uploaded image kindly advise or resolve this issue additional context no response
1
198,163
15,702,171,849
IssuesEvent
2021-03-26 12:15:28
ropensci/rtweet
https://api.github.com/repos/ropensci/rtweet
closed
Mentions of deleted accounts cause confusion in tweets data
documentation
This is relatively minor. I found that mentions of accounts that have been deleted cause some discrepancies in the tweets dataframes. The "text" variable will still display the deleted account, but it's not displayed in the remaining variables where it normally would be (e.g., "reply_to_user_id", "mentions_user_id", etc.). I assume this discrepancy comes directly from the API, but if there were a good place to just mention it in the documentation that could be helpful. Took me a little bit to figure out what was going on. The discrepancy can be found easily in some of my recent tweets: get_timeline("llsigerson").
1.0
Mentions of deleted accounts cause confusion in tweets data - This is relatively minor. I found that mentions of accounts that have been deleted cause some discrepancies in the tweets dataframes. The "text" variable will still display the deleted account, but it's not displayed in the remaining variables where it normally would be (e.g., "reply_to_user_id", "mentions_user_id", etc.). I assume this discrepancy comes directly from the API, but if there were a good place to just mention it in the documentation that could be helpful. Took me a little bit to figure out what was going on. The discrepancy can be found easily in some of my recent tweets: get_timeline("llsigerson").
non_process
mentions of deleted accounts cause confusion in tweets data this is relatively minor i found that mentions of accounts that have been deleted cause some discrepancies in the tweets dataframes the text variable will still display the deleted account but it s not displayed in the remaining variables where it normally would be e g reply to user id mentions user id etc i assume this discrepancy comes directly from the api but if there were a good place to just mention it in the documentation that could be helpful took me a little bit to figure out what was going on the discrepancy can be found easily in some of my recent tweets get timeline llsigerson
0
133,555
10,840,783,818
IssuesEvent
2019-11-12 09:07:06
naver/pinpoint
https://api.github.com/repos/naver/pinpoint
closed
Add unit tests for MapUtils and ListUtils
module:project-common test
#### description related issue #5753 Thank you very much for you're contribution @Braavos96
1.0
Add unit tests for MapUtils and ListUtils - #### description related issue #5753 Thank you very much for you're contribution @Braavos96
non_process
add unit tests for maputils and listutils description related issue thank you very much for you re contribution
0
753,003
26,337,355,690
IssuesEvent
2023-01-10 15:14:44
hyperledger/aries-cloudagent-python
https://api.github.com/repos/hyperledger/aries-cloudagent-python
closed
Help Wanted: Update CI/CD Pipeline to eliminate Circle/CI
help wanted High Priority
Circle/CI is becoming problematic and we need to move off of it for this repo and over to (I assume) a pure GHActions approach. Looking for someone in the community that has some experience in this area to grab this issue. Suggested approach: - Offer to take this task - Evaluate what is done with Circle/CI now and design a replacement - Post an approach for a quick review from others that have expertise in this area - Assuming a thumbs up -- move forward with the switch. Thanks!
1.0
Help Wanted: Update CI/CD Pipeline to eliminate Circle/CI - Circle/CI is becoming problematic and we need to move off of it for this repo and over to (I assume) a pure GHActions approach. Looking for someone in the community that has some experience in this area to grab this issue. Suggested approach: - Offer to take this task - Evaluate what is done with Circle/CI now and design a replacement - Post an approach for a quick review from others that have expertise in this area - Assuming a thumbs up -- move forward with the switch. Thanks!
non_process
help wanted update ci cd pipeline to eliminate circle ci circle ci is becoming problematic and we need to move off of it for this repo and over to i assume a pure ghactions approach looking for someone in the community that has some experience in this area to grab this issue suggested approach offer to take this task evaluate what is done with circle ci now and design a replacement post an approach for a quick review from others that have expertise in this area assuming a thumbs up move forward with the switch thanks
0
10,854
13,629,590,658
IssuesEvent
2020-09-24 15:18:40
googleapis/python-bigquery-storage
https://api.github.com/repos/googleapis/python-bigquery-storage
closed
Transition the library to microgenerator
api: bigquerystorage type: process
Microgenerator is ready and we can use it to regenerate the code here. Doing that is a breaking change (e.g. drops Python 2.7, 3.5), meaning that it needs to be released in a new major version.
1.0
Transition the library to microgenerator - Microgenerator is ready and we can use it to regenerate the code here. Doing that is a breaking change (e.g. drops Python 2.7, 3.5), meaning that it needs to be released in a new major version.
process
transition the library to microgenerator microgenerator is ready and we can use it to regenerate the code here doing that is a breaking change e g drops python meaning that it needs to be released in a new major version
1
118,974
10,021,065,480
IssuesEvent
2019-07-16 13:56:24
dojot/dojot
https://api.github.com/repos/dojot/dojot
closed
[GUI] Image mgmt always enabled
Status:ToTest Team:Frontend Type:Bug
Even when you disable the image management for a template, it still contains some image attributes.
1.0
[GUI] Image mgmt always enabled - Even when you disable the image management for a template, it still contains some image attributes.
non_process
image mgmt always enabled even when you disable the image management for a template it still contains some image attributes
0
17,075
22,575,026,000
IssuesEvent
2022-06-28 06:25:21
weiquany/KTVAnywhere
https://api.github.com/repos/weiquany/KTVAnywhere
closed
Vocal separation
feature: song preprocessing
Separate vocals and music from any song. ## User story As a user, * I only have songs with vocals but I want to only hear the background music of a song when singing. * I want to be able to toggle the vocals on if I want to. ### Acceptance criteria The application should: - [x] Process the song file provided to get a track with only background music - [x] Process the song file provided to get a track with only vocals - [x] Have a folder to store the processed song files - [x] Have a toggle for the user to turn vocals on or off ## Complexities * Uncertainty in the machine learning model to be used * Difficulty in testing the effectiveness of the separation
1.0
Vocal separation - Separate vocals and music from any song. ## User story As a user, * I only have songs with vocals but I want to only hear the background music of a song when singing. * I want to be able to toggle the vocals on if I want to. ### Acceptance criteria The application should: - [x] Process the song file provided to get a track with only background music - [x] Process the song file provided to get a track with only vocals - [x] Have a folder to store the processed song files - [x] Have a toggle for the user to turn vocals on or off ## Complexities * Uncertainty in the machine learning model to be used * Difficulty in testing the effectiveness of the separation
process
vocal separation separate vocals and music from any song user story as a user i only have songs with vocals but i want to only hear the background music of a song when singing i want to be able to toggle the vocals on if i want to acceptance criteria the application should process the song file provided to get a track with only background music process the song file provided to get a track with only vocals have a folder to store the processed song files have a toggle for the user to turn vocals on or off complexities uncertainty in the machine learning model to be used difficulty in testing the effectiveness of the separation
1
184,657
14,289,809,622
IssuesEvent
2020-11-23 19:51:46
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
pi-pi-miao/eye: vendor/go.etcd.io/etcd/integration/v3_watch_test.go; 91 LoC
fresh medium test
Found a possible issue in [pi-pi-miao/eye](https://www.github.com/pi-pi-miao/eye) at [vendor/go.etcd.io/etcd/integration/v3_watch_test.go](https://github.com/pi-pi-miao/eye/blob/6183d469de4cf8d7fcc95e2781827db079601911/vendor/go.etcd.io/etcd/integration/v3_watch_test.go#L206-L296) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/pi-pi-miao/eye/blob/6183d469de4cf8d7fcc95e2781827db079601911/vendor/go.etcd.io/etcd/integration/v3_watch_test.go#L206-L296) <details> <summary>Click here to show the 91 line(s) of Go which triggered the analyzer.</summary> ```go for i, tt := range tests { clus := NewClusterV3(t, &ClusterConfig{Size: 3}) wAPI := toGRPC(clus.RandClient()).Watch ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel() wStream, err := wAPI.Watch(ctx) if err != nil { t.Fatalf("#%d: wAPI.Watch error: %v", i, err) } err = wStream.Send(tt.watchRequest) if err != nil { t.Fatalf("#%d: wStream.Send error: %v", i, err) } // ensure watcher request created a new watcher cresp, err := wStream.Recv() if err != nil { t.Errorf("#%d: wStream.Recv error: %v", i, err) clus.Terminate(t) continue } if !cresp.Created { t.Errorf("#%d: did not create watchid, got %+v", i, cresp) clus.Terminate(t) continue } if cresp.Canceled { t.Errorf("#%d: canceled watcher on create %+v", i, cresp) clus.Terminate(t) continue } createdWatchId := cresp.WatchId if cresp.Header == nil || cresp.Header.Revision != 1 { t.Errorf("#%d: header revision got +%v, wanted revison 1", i, cresp) clus.Terminate(t) continue } // asynchronously create keys ch := make(chan struct{}, 1) go func() { for _, k := range tt.putKeys { kvc := toGRPC(clus.RandClient()).KV req := &pb.PutRequest{Key: []byte(k), Value: []byte("bar")} if _, err := kvc.Put(context.TODO(), req); err != nil { t.Errorf("#%d: couldn't put key (%v)", i, err) } } 
ch <- struct{}{} }() // check stream results for j, wresp := range tt.wresps { resp, err := wStream.Recv() if err != nil { t.Errorf("#%d.%d: wStream.Recv error: %v", i, j, err) } if resp.Header == nil { t.Fatalf("#%d.%d: unexpected nil resp.Header", i, j) } if resp.Header.Revision != wresp.Header.Revision { t.Errorf("#%d.%d: resp.Header.Revision got = %d, want = %d", i, j, resp.Header.Revision, wresp.Header.Revision) } if wresp.Created != resp.Created { t.Errorf("#%d.%d: resp.Created got = %v, want = %v", i, j, resp.Created, wresp.Created) } if resp.WatchId != createdWatchId { t.Errorf("#%d.%d: resp.WatchId got = %d, want = %d", i, j, resp.WatchId, createdWatchId) } if !reflect.DeepEqual(resp.Events, wresp.Events) { t.Errorf("#%d.%d: resp.Events got = %+v, want = %+v", i, j, resp.Events, wresp.Events) } } rok, nr := waitResponse(wStream, 1*time.Second) if !rok { t.Errorf("unexpected pb.WatchResponse is received %+v", nr) } // wait for the client to finish sending the keys before terminating the cluster <-ch // can't defer because tcp ports will be in use clus.Terminate(t) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 6183d469de4cf8d7fcc95e2781827db079601911
1.0
pi-pi-miao/eye: vendor/go.etcd.io/etcd/integration/v3_watch_test.go; 91 LoC - Found a possible issue in [pi-pi-miao/eye](https://www.github.com/pi-pi-miao/eye) at [vendor/go.etcd.io/etcd/integration/v3_watch_test.go](https://github.com/pi-pi-miao/eye/blob/6183d469de4cf8d7fcc95e2781827db079601911/vendor/go.etcd.io/etcd/integration/v3_watch_test.go#L206-L296) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/pi-pi-miao/eye/blob/6183d469de4cf8d7fcc95e2781827db079601911/vendor/go.etcd.io/etcd/integration/v3_watch_test.go#L206-L296) <details> <summary>Click here to show the 91 line(s) of Go which triggered the analyzer.</summary> ```go for i, tt := range tests { clus := NewClusterV3(t, &ClusterConfig{Size: 3}) wAPI := toGRPC(clus.RandClient()).Watch ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel() wStream, err := wAPI.Watch(ctx) if err != nil { t.Fatalf("#%d: wAPI.Watch error: %v", i, err) } err = wStream.Send(tt.watchRequest) if err != nil { t.Fatalf("#%d: wStream.Send error: %v", i, err) } // ensure watcher request created a new watcher cresp, err := wStream.Recv() if err != nil { t.Errorf("#%d: wStream.Recv error: %v", i, err) clus.Terminate(t) continue } if !cresp.Created { t.Errorf("#%d: did not create watchid, got %+v", i, cresp) clus.Terminate(t) continue } if cresp.Canceled { t.Errorf("#%d: canceled watcher on create %+v", i, cresp) clus.Terminate(t) continue } createdWatchId := cresp.WatchId if cresp.Header == nil || cresp.Header.Revision != 1 { t.Errorf("#%d: header revision got +%v, wanted revison 1", i, cresp) clus.Terminate(t) continue } // asynchronously create keys ch := make(chan struct{}, 1) go func() { for _, k := range tt.putKeys { kvc := toGRPC(clus.RandClient()).KV req := &pb.PutRequest{Key: []byte(k), Value: []byte("bar")} if _, err := 
kvc.Put(context.TODO(), req); err != nil { t.Errorf("#%d: couldn't put key (%v)", i, err) } } ch <- struct{}{} }() // check stream results for j, wresp := range tt.wresps { resp, err := wStream.Recv() if err != nil { t.Errorf("#%d.%d: wStream.Recv error: %v", i, j, err) } if resp.Header == nil { t.Fatalf("#%d.%d: unexpected nil resp.Header", i, j) } if resp.Header.Revision != wresp.Header.Revision { t.Errorf("#%d.%d: resp.Header.Revision got = %d, want = %d", i, j, resp.Header.Revision, wresp.Header.Revision) } if wresp.Created != resp.Created { t.Errorf("#%d.%d: resp.Created got = %v, want = %v", i, j, resp.Created, wresp.Created) } if resp.WatchId != createdWatchId { t.Errorf("#%d.%d: resp.WatchId got = %d, want = %d", i, j, resp.WatchId, createdWatchId) } if !reflect.DeepEqual(resp.Events, wresp.Events) { t.Errorf("#%d.%d: resp.Events got = %+v, want = %+v", i, j, resp.Events, wresp.Events) } } rok, nr := waitResponse(wStream, 1*time.Second) if !rok { t.Errorf("unexpected pb.WatchResponse is received %+v", nr) } // wait for the client to finish sending the keys before terminating the cluster <-ch // can't defer because tcp ports will be in use clus.Terminate(t) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 6183d469de4cf8d7fcc95e2781827db079601911
non_process
pi pi miao eye vendor go etcd io etcd integration watch test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for i tt range tests clus t clusterconfig size wapi togrpc clus randclient watch ctx cancel context withtimeout context background time second defer cancel wstream err wapi watch ctx if err nil t fatalf d wapi watch error v i err err wstream send tt watchrequest if err nil t fatalf d wstream send error v i err ensure watcher request created a new watcher cresp err wstream recv if err nil t errorf d wstream recv error v i err clus terminate t continue if cresp created t errorf d did not create watchid got v i cresp clus terminate t continue if cresp canceled t errorf d canceled watcher on create v i cresp clus terminate t continue createdwatchid cresp watchid if cresp header nil cresp header revision t errorf d header revision got v wanted revison i cresp clus terminate t continue asynchronously create keys ch make chan struct go func for k range tt putkeys kvc togrpc clus randclient kv req pb putrequest key byte k value byte bar if err kvc put context todo req err nil t errorf d couldn t put key v i err ch struct check stream results for j wresp range tt wresps resp err wstream recv if err nil t errorf d d wstream recv error v i j err if resp header nil t fatalf d d unexpected nil resp header i j if resp header revision wresp header revision t errorf d d resp header revision got d want d i j resp header revision wresp header revision if wresp created resp created t errorf d d resp created got v want v i j resp created wresp created if resp watchid createdwatchid t errorf d d resp watchid got d want d i j resp watchid createdwatchid if reflect deepequal resp events wresp events t errorf d d resp events got v want v i j resp events wresp events rok nr waitresponse 
wstream time second if rok t errorf unexpected pb watchresponse is received v nr wait for the client to finish sending the keys before terminating the cluster ch can t defer because tcp ports will be in use clus terminate t leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
284,699
24,616,973,632
IssuesEvent
2022-10-15 12:48:29
RachelAlcraft/LeuciWeb
https://api.github.com/repos/RachelAlcraft/LeuciWeb
opened
Acceptance testing - planes
RSE Standards testing
Acceptance testing of planes is needed - how do I know they are correct? Create a simple demonstration of the matrix numbers and visualation in python. Add a tool to the interface to download the raw numbers and coordinates in format eg 23 45 56 1.22 3.12 0 0 etc and a python tool that takes that and makes a plane.
1.0
Acceptance testing - planes - Acceptance testing of planes is needed - how do I know they are correct? Create a simple demonstration of the matrix numbers and visualation in python. Add a tool to the interface to download the raw numbers and coordinates in format eg 23 45 56 1.22 3.12 0 0 etc and a python tool that takes that and makes a plane.
non_process
acceptance testing planes acceptance testing of planes is needed how do i know they are correct create a simple demonstration of the matrix numbers and visualation in python add a tool to the interface to download the raw numbers and coordinates in format eg etc and a python tool that takes that and makes a plane
0
2,843
5,806,973,152
IssuesEvent
2017-05-04 05:50:54
RennurApps/AwareIM-resources
https://api.github.com/repos/RennurApps/AwareIM-resources
opened
Active processes not timing out.
CT Context CT Process v6.0
There are processes that appear in the _active processes_ table that are either not timed out or killed by the system. These processes are stored in _execution_context_ database table. As reported by some developers, this may be the cause various system issues from application performance to complete server halt.
1.0
Active processes not timing out. - There are processes that appear in the _active processes_ table that are either not timed out or killed by the system. These processes are stored in _execution_context_ database table. As reported by some developers, this may be the cause various system issues from application performance to complete server halt.
process
active processes not timing out there are processes that appear in the active processes table that are either not timed out or killed by the system these processes are stored in execution context database table as reported by some developers this may be the cause various system issues from application performance to complete server halt
1
19,634
25,996,238,131
IssuesEvent
2022-12-20 11:49:35
ni/grpc-labview
https://api.github.com/repos/ni/grpc-labview
opened
Bump VIPB Verison workflow fails on autocommit because of branch protection
type: process improvement
VIPB version bump workflow fails due to restricted branch.
1.0
Bump VIPB Verison workflow fails on autocommit because of branch protection - VIPB version bump workflow fails due to restricted branch.
process
bump vipb verison workflow fails on autocommit because of branch protection vipb version bump workflow fails due to restricted branch
1
5,804
8,643,540,687
IssuesEvent
2018-11-25 18:55:10
gfrebello/qs-trip-planning-procedure
https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
closed
Solve flight price issues
Priority:High Process:Implement Requirement
The flight base prices are currently stored in the Flight entities. However, the price to be paid by the passenger will depend if they are in the Economy or Executive class, and also on the number of passengers. Functions need to be made to calculate (and store) the prices according to the user choices.
1.0
Solve flight price issues - The flight base prices are currently stored in the Flight entities. However, the price to be paid by the passenger will depend if they are in the Economy or Executive class, and also on the number of passengers. Functions need to be made to calculate (and store) the prices according to the user choices.
process
solve flight price issues the flight base prices are currently stored in the flight entities however the price to be paid by the passenger will depend if they are in the economy or executive class and also on the number of passengers functions need to be made to calculate and store the prices according to the user choices
1
7,894
11,083,030,904
IssuesEvent
2019-12-13 13:35:44
Open-EO/openeo-processes
https://api.github.com/repos/Open-EO/openeo-processes
opened
import_*
interoperability new process
This originates from #83: There are probably several processes, which could be useful to import data from "non-API" sources, similarly to `load_uploaded_files`. Some ideas: * `import_s3`: Load data from Amazon S3 (see [example in #83](https://github.com/Open-EO/openeo-processes/issues/83#issuecomment-540573097)) * `import_nfs`: Import from a network file storage system attached to the back-end. Example from VITO: [load_disk_data](http://openeo.vgt.vito.be/openeo/0.4.0/processes/load_disk_data) * `import_stac`: Import from a STAC catalog/API * `import_external_result`: Import a openEO job result from another back-end * ...
1.0
import_* - This originates from #83: There are probably several processes, which could be useful to import data from "non-API" sources, similarly to `load_uploaded_files`. Some ideas: * `import_s3`: Load data from Amazon S3 (see [example in #83](https://github.com/Open-EO/openeo-processes/issues/83#issuecomment-540573097)) * `import_nfs`: Import from a network file storage system attached to the back-end. Example from VITO: [load_disk_data](http://openeo.vgt.vito.be/openeo/0.4.0/processes/load_disk_data) * `import_stac`: Import from a STAC catalog/API * `import_external_result`: Import a openEO job result from another back-end * ...
process
import this originates from there are probably several processes which could be useful to import data from non api sources similarly to load uploaded files some ideas import load data from amazon see import nfs import from a network file storage system attached to the back end example from vito import stac import from a stac catalog api import external result import a openeo job result from another back end
1
21,092
28,045,020,869
IssuesEvent
2023-03-28 21:54:03
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Implement functions for available fields for a given stage of the query
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
This is for powering various bits of the UI, e.g. <img width="214" alt="image" src="https://user-images.githubusercontent.com/1455846/221763022-b6a284f0-dfe6-45c0-bb4e-012b8cc358ba.png"> Function might look something like ```clj (visible-fields query stage-number) ``` and ```clj (visible-fields query) ; default to stage -1 (last stage) ``` Not sure about the name. This can probably just take the `:lib/stage-metadata` for a stage of a query (this is added by #28717) and return that (the JS version would need to do `clj->js` on the maps before returning them). We might need to add some extra info not already in the metadata -- TBD. Figuring out what we need to change if anything is part of this task
1.0
[MLv2] Implement functions for available fields for a given stage of the query - This is for powering various bits of the UI, e.g. <img width="214" alt="image" src="https://user-images.githubusercontent.com/1455846/221763022-b6a284f0-dfe6-45c0-bb4e-012b8cc358ba.png"> Function might look something like ```clj (visible-fields query stage-number) ``` and ```clj (visible-fields query) ; default to stage -1 (last stage) ``` Not sure about the name. This can probably just take the `:lib/stage-metadata` for a stage of a query (this is added by #28717) and return that (the JS version would need to do `clj->js` on the maps before returning them). We might need to add some extra info not already in the metadata -- TBD. Figuring out what we need to change if anything is part of this task
process
implement functions for available fields for a given stage of the query this is for powering various bits of the ui e g img width alt image src function might look something like clj visible fields query stage number and clj visible fields query default to stage last stage not sure about the name this can probably just take the lib stage metadata for a stage of a query this is added by and return that the js version would need to do clj js on the maps before returning them we might need to add some extra info not already in the metadata tbd figuring out what we need to change if anything is part of this task
1
1,053
3,520,776,903
IssuesEvent
2016-01-12 22:16:37
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
opened
Create a global variable to disable multiplexing
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
ProxySQL is so efficient at performing multiplexing that there is currently no way to disable multiplexing other than ProxySQL detecting it isn't safe to perform it. Although, after hitting this bug https://mariadb.atlassian.net/browse/MDEV-8338 (Select NOW() is stuck and shows same time) it seems that disabling multiplexing could be a way to work around this bug. The implementation of disabling multiplexing needs some special attention regarding the re-connection and failover
1.0
Create a global variable to disable multiplexing - ProxySQL is so efficient at performing multiplexing that there is currently no way to disable multiplexing other than ProxySQL detecting it isn't safe to perform it. Although, after hitting this bug https://mariadb.atlassian.net/browse/MDEV-8338 (Select NOW() is stuck and shows same time) it seems that disabling multiplexing could be a way to work around this bug. The implementation of disabling multiplexing needs some special attention regarding the re-connection and failover
process
create a global variable to disable multiplexing proxysql is so efficient at performing multiplexing that there is currently no way to disable multiplexing other than proxysql detecting it isn t safe to perform it although after hitting this bug select now is stuck and shows same time it seems that disabling multiplexing could be a way to work around this bug the implementation of disabling multiplexing needs some special attention regarding the re connection and failover
1
163,994
12,753,083,953
IssuesEvent
2020-06-27 20:03:10
coq/coq
https://api.github.com/repos/coq/coq
closed
running `make` in `test-suite` always rebuilds the modules files
part: test-suite
``` $ make -j10 [53/1899] TEST modules/PO.v TEST modules/ind.v TEST modules/objects.v TEST modules/modeq.v (-top modeq) TEST modules/mod_decl.v TEST modules/grammar.v CHECK modules/objects.v CHECK modules/ind.v TEST modules/sig.v CHECK modules/modeq.v TEST modules/nested_mod_types.v TEST modules/cumpoly.v CHECK modules/mod_decl.v CHECK modules/grammar.v TEST modules/errors.v (-impredicative-set) CHECK modules/sig.v CHECK modules/PO.v CHECK modules/nested_mod_types.v TEST modules/injection_discriminate_inversion.v TEST modules/objects2.v CHECK modules/cumpoly.v TEST modules/resolver.v TEST modules/Nat.v TEST modules/modul.v (-top modul) CHECK modules/injection_discriminate_inversion.v CHECK modules/objects2.v TEST modules/SeveralWith.v CHECK modules/errors.v TEST coq-makefile/local-late-extension CHECK modules/resolver.v CHECK modules/modul.v CHECK modules/Nat.v CHECK modules/SeveralWith.v make report make[1]: Entering directory '/home/jgross/Documents/repos/coq/test-suite' BUILDING SUMMARY FILE NO FAILURES make[1]: Leaving directory '/home/jgross/Documents/repos/coq/test-suite' jgross@jgross-Leopard-WS:~/Documents/repos/coq/test-suite$ make -j10 TEST modules/PO.v TEST modules/ind.v TEST modules/objects.v TEST modules/modeq.v (-top modeq) TEST modules/mod_decl.v TEST modules/grammar.v CHECK modules/ind.v TEST modules/sig.v TEST modules/nested_mod_types.v CHECK modules/objects.v CHECK modules/modeq.v CHECK modules/mod_decl.v TEST modules/cumpoly.v CHECK modules/grammar.v CHECK modules/PO.v TEST modules/errors.v (-impredicative-set) CHECK modules/sig.v CHECK modules/nested_mod_types.v TEST modules/injection_discriminate_inversion.v CHECK modules/cumpoly.v TEST modules/objects2.v TEST modules/resolver.v TEST modules/modul.v (-top modul) CHECK modules/injection_discriminate_inversion.v TEST modules/plik.v CHECK modules/objects2.v CHECK modules/errors.v TEST modules/Demo.v CHECK modules/resolver.v CHECK modules/modul.v TEST modules/polymorphism.v TEST 
modules/WithDefUBinders.v CHECK modules/plik.v TEST modules/subtyping.v TEST modules/Tescik.v TEST modules/pseudo_circular_with.v CHECK modules/Demo.v TEST modules/polymorphism2.v CHECK modules/WithDefUBinders.v TEST modules/Przyklad.v CHECK modules/polymorphism.v CHECK modules/subtyping.v CHECK modules/Tescik.v CHECK modules/pseudo_circular_with.v TEST modules/fun_objects.v (-impredicative-set) TEST modules/pliczek.v TEST modules/sub_objects.v CHECK modules/polymorphism2.v TEST modules/obj.v CHECK modules/Przyklad.v CHECK modules/fun_objects.v CHECK modules/pliczek.v CHECK modules/sub_objects.v CHECK modules/obj.v make report make[1]: Entering directory '/home/jgross/Documents/repos/coq/test-suite' BUILDING SUMMARY FILE ``` This is because we have https://github.com/coq/coq/blob/82485e9f2a36a7a52a56622a553817436636b00b/test-suite/Makefile#L631-L634 which says that `Nat.v.log` (and therefore `Nat.vo`) should be rebuilt anytime `Nat.v.log` is older than `plik.v.vo`, and also says that `plik.v.log` (and therefore `plik.vo`) should be rebuilt anytime `plik.v.log` is older than `Nat.vo`. This is circular, and results in infinite rebuilds.
1.0
running `make` in `test-suite` always rebuilds the modules files - ``` $ make -j10 [53/1899] TEST modules/PO.v TEST modules/ind.v TEST modules/objects.v TEST modules/modeq.v (-top modeq) TEST modules/mod_decl.v TEST modules/grammar.v CHECK modules/objects.v CHECK modules/ind.v TEST modules/sig.v CHECK modules/modeq.v TEST modules/nested_mod_types.v TEST modules/cumpoly.v CHECK modules/mod_decl.v CHECK modules/grammar.v TEST modules/errors.v (-impredicative-set) CHECK modules/sig.v CHECK modules/PO.v CHECK modules/nested_mod_types.v TEST modules/injection_discriminate_inversion.v TEST modules/objects2.v CHECK modules/cumpoly.v TEST modules/resolver.v TEST modules/Nat.v TEST modules/modul.v (-top modul) CHECK modules/injection_discriminate_inversion.v CHECK modules/objects2.v TEST modules/SeveralWith.v CHECK modules/errors.v TEST coq-makefile/local-late-extension CHECK modules/resolver.v CHECK modules/modul.v CHECK modules/Nat.v CHECK modules/SeveralWith.v make report make[1]: Entering directory '/home/jgross/Documents/repos/coq/test-suite' BUILDING SUMMARY FILE NO FAILURES make[1]: Leaving directory '/home/jgross/Documents/repos/coq/test-suite' jgross@jgross-Leopard-WS:~/Documents/repos/coq/test-suite$ make -j10 TEST modules/PO.v TEST modules/ind.v TEST modules/objects.v TEST modules/modeq.v (-top modeq) TEST modules/mod_decl.v TEST modules/grammar.v CHECK modules/ind.v TEST modules/sig.v TEST modules/nested_mod_types.v CHECK modules/objects.v CHECK modules/modeq.v CHECK modules/mod_decl.v TEST modules/cumpoly.v CHECK modules/grammar.v CHECK modules/PO.v TEST modules/errors.v (-impredicative-set) CHECK modules/sig.v CHECK modules/nested_mod_types.v TEST modules/injection_discriminate_inversion.v CHECK modules/cumpoly.v TEST modules/objects2.v TEST modules/resolver.v TEST modules/modul.v (-top modul) CHECK modules/injection_discriminate_inversion.v TEST modules/plik.v CHECK modules/objects2.v CHECK modules/errors.v TEST modules/Demo.v CHECK modules/resolver.v CHECK 
modules/modul.v TEST modules/polymorphism.v TEST modules/WithDefUBinders.v CHECK modules/plik.v TEST modules/subtyping.v TEST modules/Tescik.v TEST modules/pseudo_circular_with.v CHECK modules/Demo.v TEST modules/polymorphism2.v CHECK modules/WithDefUBinders.v TEST modules/Przyklad.v CHECK modules/polymorphism.v CHECK modules/subtyping.v CHECK modules/Tescik.v CHECK modules/pseudo_circular_with.v TEST modules/fun_objects.v (-impredicative-set) TEST modules/pliczek.v TEST modules/sub_objects.v CHECK modules/polymorphism2.v TEST modules/obj.v CHECK modules/Przyklad.v CHECK modules/fun_objects.v CHECK modules/pliczek.v CHECK modules/sub_objects.v CHECK modules/obj.v make report make[1]: Entering directory '/home/jgross/Documents/repos/coq/test-suite' BUILDING SUMMARY FILE ``` This is because we have https://github.com/coq/coq/blob/82485e9f2a36a7a52a56622a553817436636b00b/test-suite/Makefile#L631-L634 which says that `Nat.v.log` (and therefore `Nat.vo`) should be rebuilt anytime `Nat.v.log` is older than `plik.v.vo`, and also says that `plik.v.log` (and therefore `plik.vo`) should be rebuilt anytime `plik.v.log` is older than `Nat.vo`. This is circular, and results in infinite rebuilds.
non_process
running make in test suite always rebuilds the modules files make test modules po v test modules ind v test modules objects v test modules modeq v top modeq test modules mod decl v test modules grammar v check modules objects v check modules ind v test modules sig v check modules modeq v test modules nested mod types v test modules cumpoly v check modules mod decl v check modules grammar v test modules errors v impredicative set check modules sig v check modules po v check modules nested mod types v test modules injection discriminate inversion v test modules v check modules cumpoly v test modules resolver v test modules nat v test modules modul v top modul check modules injection discriminate inversion v check modules v test modules severalwith v check modules errors v test coq makefile local late extension check modules resolver v check modules modul v check modules nat v check modules severalwith v make report make entering directory home jgross documents repos coq test suite building summary file no failures make leaving directory home jgross documents repos coq test suite jgross jgross leopard ws documents repos coq test suite make test modules po v test modules ind v test modules objects v test modules modeq v top modeq test modules mod decl v test modules grammar v check modules ind v test modules sig v test modules nested mod types v check modules objects v check modules modeq v check modules mod decl v test modules cumpoly v check modules grammar v check modules po v test modules errors v impredicative set check modules sig v check modules nested mod types v test modules injection discriminate inversion v check modules cumpoly v test modules v test modules resolver v test modules modul v top modul check modules injection discriminate inversion v test modules plik v check modules v check modules errors v test modules demo v check modules resolver v check modules modul v test modules polymorphism v test modules withdefubinders v check modules plik v test 
modules subtyping v test modules tescik v test modules pseudo circular with v check modules demo v test modules v check modules withdefubinders v test modules przyklad v check modules polymorphism v check modules subtyping v check modules tescik v check modules pseudo circular with v test modules fun objects v impredicative set test modules pliczek v test modules sub objects v check modules v test modules obj v check modules przyklad v check modules fun objects v check modules pliczek v check modules sub objects v check modules obj v make report make entering directory home jgross documents repos coq test suite building summary file this is because we have which says that nat v log and therefore nat vo should be rebuilt anytime nat v log is older than plik v vo and also says that plik v log and therefore plik vo should be rebuilt anytime plik v log is older than nat vo this is circular and results in infinite rebuilds
0
15,096
18,820,686,572
IssuesEvent
2021-11-10 07:55:32
streamnative/pulsar-flink
https://api.github.com/repos/streamnative/pulsar-flink
closed
[BUG] java.lang.RuntimeException: start message id beyond the last commit
type/bug platform/data-processing
**Describe the bug** java.lang.RuntimeException: start message id beyond the last commit at org.apache.flink.streaming.connectors.pulsar.internal.ReaderThread.handleTooLargeCursor(ReaderThread.java:159) at org.apache.flink.streaming.connectors.pulsar.internal.ReaderThread.run(ReaderThread.java:103) **To Reproduce** Flink 1.13.2 Pulsar 2.8.1 pulsar-flink-connector 1.13.1.4 **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Additional context** Add any other context about the problem here.
1.0
[BUG] java.lang.RuntimeException: start message id beyond the last commit - **Describe the bug** java.lang.RuntimeException: start message id beyond the last commit at org.apache.flink.streaming.connectors.pulsar.internal.ReaderThread.handleTooLargeCursor(ReaderThread.java:159) at org.apache.flink.streaming.connectors.pulsar.internal.ReaderThread.run(ReaderThread.java:103) **To Reproduce** Flink 1.13.2 Pulsar 2.8.1 pulsar-flink-connector 1.13.1.4 **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Additional context** Add any other context about the problem here.
process
java lang runtimeexception start message id beyond the last commit describe the bug java lang runtimeexception start message id beyond the last commit at org apache flink streaming connectors pulsar internal readerthread handletoolargecursor readerthread java at org apache flink streaming connectors pulsar internal readerthread run readerthread java to reproduce flink pulsar pulsar flink connector expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here
1
11,005
3,155,683,972
IssuesEvent
2015-09-17 10:16:59
medic/medic-webapp
https://api.github.com/repos/medic/medic-webapp
closed
Restricted user cannot submit enketo form
3 - Acceptance testing Bug
1. Restrict a user to a district 2. Log in as that user and allow everything to sync to pouchdb 3. Go to Reports tab 4. Click New Report button ### Expected All forms are available for selection. ### Actual No forms are available
1.0
Restricted user cannot submit enketo form - 1. Restrict a user to a district 2. Log in as that user and allow everything to sync to pouchdb 3. Go to Reports tab 4. Click New Report button ### Expected All forms are available for selection. ### Actual No forms are available
non_process
restricted user cannot submit enketo form restrict a user to a district log in as that user and allow everything to sync to pouchdb go to reports tab click new report button expected all forms are available for selection actual no forms are available
0
11,775
14,611,205,903
IssuesEvent
2020-12-22 02:37:43
didi/mpx
https://api.github.com/repos/didi/mpx
closed
无法引入 vant-dialog 组件
processing
**问题描述** 1. 问题触发的条件 ``` <script type="application/json"> { "usingComponents": { // 引入 vant-dialog "van-dialog": "vant-weapp/dist/dialog/index" } } </script> ``` 2. 期望的表现 正常引入 van-dialog 组件。 3. 实际的表现 微信小程序开发工具报错:Component is not found in path "components/vant-weapp3df1e56b/dialog/index"。 **环境信息描述** 至少包含以下部分: 1. 系统类型(Mac或者Windows) macOS 2. Mpx依赖版本(@mpxjs/core、@mpxjs/webpack-plugin和@mpxjs/api-proxy的具体版本,可以通过package-lock.json或者实际去node_modules当中查看) "@mpxjs/core": "^2.6.12", "@mpxjs/webpack-plugin": "2.6.11", "@mpxjs/api-proxy": "^2.5.33", 3. 小程序开发者工具信息(小程序平台、开发者工具版本、基础库版本) 小程序平台:微信 开发者工具版本:1.03.2008270 基础库版本:2.12.2 **最简复现demo** 已上传。 [van-dialog-demo.zip](https://github.com/didi/mpx/files/5179587/van-dialog-demo.zip)
1.0
无法引入 vant-dialog 组件 - **问题描述** 1. 问题触发的条件 ``` <script type="application/json"> { "usingComponents": { // 引入 vant-dialog "van-dialog": "vant-weapp/dist/dialog/index" } } </script> ``` 2. 期望的表现 正常引入 van-dialog 组件。 3. 实际的表现 微信小程序开发工具报错:Component is not found in path "components/vant-weapp3df1e56b/dialog/index"。 **环境信息描述** 至少包含以下部分: 1. 系统类型(Mac或者Windows) macOS 2. Mpx依赖版本(@mpxjs/core、@mpxjs/webpack-plugin和@mpxjs/api-proxy的具体版本,可以通过package-lock.json或者实际去node_modules当中查看) "@mpxjs/core": "^2.6.12", "@mpxjs/webpack-plugin": "2.6.11", "@mpxjs/api-proxy": "^2.5.33", 3. 小程序开发者工具信息(小程序平台、开发者工具版本、基础库版本) 小程序平台:微信 开发者工具版本:1.03.2008270 基础库版本:2.12.2 **最简复现demo** 已上传。 [van-dialog-demo.zip](https://github.com/didi/mpx/files/5179587/van-dialog-demo.zip)
process
无法引入 vant dialog 组件 问题描述 问题触发的条件 usingcomponents 引入 vant dialog van dialog vant weapp dist dialog index 期望的表现 正常引入 van dialog 组件。 实际的表现 微信小程序开发工具报错:component is not found in path components vant dialog index 。 环境信息描述 至少包含以下部分: 系统类型 mac或者windows macos mpx依赖版本 mpxjs core、 mpxjs webpack plugin和 mpxjs api proxy的具体版本,可以通过package lock json或者实际去node modules当中查看 mpxjs core mpxjs webpack plugin mpxjs api proxy 小程序开发者工具信息 小程序平台、开发者工具版本、基础库版本) 小程序平台:微信 开发者工具版本: 基础库版本: 最简复现demo 已上传。
1
234,061
7,715,773,406
IssuesEvent
2018-05-23 08:43:00
internship2016/sovolo
https://api.github.com/repos/internship2016/sovolo
closed
「sovol.moeに戻る」をクリックするとメールアドレス登録画面に移動する。
Twitter bug priority: high
Twitter登録画面で「CLEAR」をクリックして、次に「sovol.moeに戻る」をクリックするとメールアドレス登録画面に移動する。 ![snapcrab_noname_2018-5-22_16-11-39_no-00](https://user-images.githubusercontent.com/34121151/40348222-f4b0e7b6-5ddd-11e8-9204-be9c9624bff7.png) ![snapcrab_noname_2018-5-22_16-11-13_no-00](https://user-images.githubusercontent.com/34121151/40348224-f6dc7df2-5ddd-11e8-81a8-a023d94892d3.png)
1.0
「sovol.moeに戻る」をクリックするとメールアドレス登録画面に移動する。 - Twitter登録画面で「CLEAR」をクリックして、次に「sovol.moeに戻る」をクリックするとメールアドレス登録画面に移動する。 ![snapcrab_noname_2018-5-22_16-11-39_no-00](https://user-images.githubusercontent.com/34121151/40348222-f4b0e7b6-5ddd-11e8-9204-be9c9624bff7.png) ![snapcrab_noname_2018-5-22_16-11-13_no-00](https://user-images.githubusercontent.com/34121151/40348224-f6dc7df2-5ddd-11e8-81a8-a023d94892d3.png)
non_process
「sovol moeに戻る」をクリックするとメールアドレス登録画面に移動する。 twitter登録画面で「clear」をクリックして、次に「sovol moeに戻る」をクリックするとメールアドレス登録画面に移動する。
0
358,344
25,188,117,345
IssuesEvent
2022-11-11 20:22:39
monetr/monetr
https://api.github.com/repos/monetr/monetr
closed
docs: Document potential link statuses and their meaning.
links documentation
Right now the link staus is just a circle that changes color and a tooltip to go with it. Provide some documentation on this link status
1.0
docs: Document potential link statuses and their meaning. - Right now the link staus is just a circle that changes color and a tooltip to go with it. Provide some documentation on this link status
non_process
docs document potential link statuses and their meaning right now the link staus is just a circle that changes color and a tooltip to go with it provide some documentation on this link status
0
13,747
2,781,380,224
IssuesEvent
2015-05-06 13:03:47
FutureProcessing/TestCaseViewer
https://api.github.com/repos/FutureProcessing/TestCaseViewer
opened
Unable to scroll to the page top after expanding shared steps
defect
Steps to reproduce: 1. Go to the Test Case Viewer URL. Open any test case that contains shared steps, so that when expanded it took more than one page. 2. Expand all the shared steps in selected test case. 3. Collapse all the shared steps in selected test case. Expected result: Scrollbar is displayed when the page height exceeds browser window's height. When shared steps are collapsed, scrollbar disappears and the test case page is viewed correctly. Actual Result: Scrollbar disappears giving user no possibility to scroll back to the top of the page.
1.0
Unable to scroll to the page top after expanding shared steps - Steps to reproduce: 1. Go to the Test Case Viewer URL. Open any test case that contains shared steps, so that when expanded it took more than one page. 2. Expand all the shared steps in selected test case. 3. Collapse all the shared steps in selected test case. Expected result: Scrollbar is displayed when the page height exceeds browser window's height. When shared steps are collapsed, scrollbar disappears and the test case page is viewed correctly. Actual Result: Scrollbar disappears giving user no possibility to scroll back to the top of the page.
non_process
unable to scroll to the page top after expanding shared steps steps to reproduce go to the test case viewer url open any test case that contains shared steps so that when expanded it took more than one page expand all the shared steps in selected test case collapse all the shared steps in selected test case expected result scrollbar is displayed when the page height exceeds browser window s height when shared steps are collapsed scrollbar disappears and the test case page is viewed correctly actual result scrollbar disappears giving user no possibility to scroll back to the top of the page
0
10,028
13,044,161,482
IssuesEvent
2020-07-29 03:47:23
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `FormatWithLocale` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `FormatWithLocale` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `FormatWithLocale` from TiDB - ## Description Port the scalar function `FormatWithLocale` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function formatwithlocale from tidb description port the scalar function formatwithlocale from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
16,742
21,900,937,659
IssuesEvent
2022-05-20 13:21:48
oasis-tcs/csaf
https://api.github.com/repos/oasis-tcs/csaf
opened
Comment Resolution Log CSDPR02 to CS02
csaf 2.0 email oasis_tc_process non_material CS02 CSDPR02_feedback
# Comment Resolution Log The table summarizes the comments that were received during the 15-day public review of the committee specification draft "[Common Security Advisory Framework Version 2.0](https://docs.oasis-open.org/csaf/csaf/v2.0/csd02/csaf-v2.0-csd02.html)" and their resolution. Comments came to the csaf-comment and the CSAF TC mailing lists. [The public review did start 15 April 2022 at 00:00 UTC and did end 29 April 2022 at 23:59 UTC](https://www.oasis-open.org/2022/04/14/invitation-to-comment-on-common-security-advisory-framework-v2-0-2/). A status of "Completed" in the Disposition column indicates that the editors have implemented the changes on which the TC decided, which are outlined in the Resolution column. It also is a hyperlink to the GitHub commit notice. The item number is a hyperlink to the e-mail on the csaf-comment or CSAF TC mailing lists. | Item # | Date | Commenter | Description | Date acknowledged | Resolution | Disposition | |--------|------|-----------|-------------|-------------------|------------|-------------| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ## Evaluation of Feedback The editors consider above public comments as well as other more editorial feedback documented in issue(s) ... and classified/considered per pull request ... as **Non-Material** per OASIS TC process. A motion has been issued during the TC meeting by Stefan Hagen on 2022-05-18 LINK_TO_MINUTES to promote the resulting revised work products to CS02 including non-material changes only. To ease verification by anyone and to support the administration a separate release candidate archive containing the 4 standards track work products will be created and linked to this issue as well as noted in the motion as annotation in the minutes of meeting.
1.0
Comment Resolution Log CSDPR02 to CS02 - # Comment Resolution Log The table summarizes the comments that were received during the 15-day public review of the committee specification draft "[Common Security Advisory Framework Version 2.0](https://docs.oasis-open.org/csaf/csaf/v2.0/csd02/csaf-v2.0-csd02.html)" and their resolution. Comments came to the csaf-comment and the CSAF TC mailing lists. [The public review did start 15 April 2022 at 00:00 UTC and did end 29 April 2022 at 23:59 UTC](https://www.oasis-open.org/2022/04/14/invitation-to-comment-on-common-security-advisory-framework-v2-0-2/). A status of "Completed" in the Disposition column indicates that the editors have implemented the changes on which the TC decided, which are outlined in the Resolution column. It also is a hyperlink to the GitHub commit notice. The item number is a hyperlink to the e-mail on the csaf-comment or CSAF TC mailing lists. | Item # | Date | Commenter | Description | Date acknowledged | Resolution | Disposition | |--------|------|-----------|-------------|-------------------|------------|-------------| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ## Evaluation of Feedback The editors consider above public comments as well as other more editorial feedback documented in issue(s) ... and classified/considered per pull request ... as **Non-Material** per OASIS TC process. A motion has been issued during the TC meeting by Stefan Hagen on 2022-05-18 LINK_TO_MINUTES to promote the resulting revised work products to CS02 including non-material changes only. To ease verification by anyone and to support the administration a separate release candidate archive containing the 4 standards track work products will be created and linked to this issue as well as noted in the motion as annotation in the minutes of meeting.
process
comment resolution log to comment resolution log the table summarizes the comments that were received during the day public review of the committee specification draft and their resolution comments came to the csaf comment and the csaf tc mailing lists a status of completed in the disposition column indicates that the editors have implemented the changes on which the tc decided which are outlined in the resolution column it also is a hyperlink to the github commit notice the item number is a hyperlink to the e mail on the csaf comment or csaf tc mailing lists item date commenter description date acknowledged resolution disposition evaluation of feedback the editors consider above public comments as well as other more editorial feedback documented in issue s and classified considered per pull request as non material per oasis tc process a motion has been issued during the tc meeting by stefan hagen on link to minutes to promote the resulting revised work products to including non material changes only to ease verification by anyone and to support the administration a separate release candidate archive containing the standards track work products will be created and linked to this issue as well as noted in the motion as annotation in the minutes of meeting
1
8,575
3,758,257,364
IssuesEvent
2016-03-14 07:52:59
shibdib/EVE-Discord-Bot
https://api.github.com/repos/shibdib/EVE-Discord-Bot
closed
Fix "Design/TooManyFields" issue in plugins/onMessage/auth.php
CodeClimate duplicate
The class auth has 17 fields. Consider to redesign auth to keep the number of fields under 15. https://codeclimate.com/repos/56debad454d931143d00a4f1/plugins/onMessage/auth.php#issue_56e3e6efc805d90001063d50
1.0
Fix "Design/TooManyFields" issue in plugins/onMessage/auth.php - The class auth has 17 fields. Consider to redesign auth to keep the number of fields under 15. https://codeclimate.com/repos/56debad454d931143d00a4f1/plugins/onMessage/auth.php#issue_56e3e6efc805d90001063d50
non_process
fix design toomanyfields issue in plugins onmessage auth php the class auth has fields consider to redesign auth to keep the number of fields under
0
77,157
3,506,266,912
IssuesEvent
2016-01-08 05:07:35
OregonCore/OregonCore
https://api.github.com/repos/OregonCore/OregonCore
closed
resilience (BB #218)
migrated Priority: Medium Type: Bug
This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 15.07.2010 23:45:01 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** invalid **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/218 <hr> obviously does not work as intended, www.arena-tournament.com, their server has resilience working to the fullest! DoT and crit chance works but not the % of crit damage
1.0
resilience (BB #218) - This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 15.07.2010 23:45:01 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** invalid **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/218 <hr> obviously does not work as intended, www.arena-tournament.com, their server has resilience working to the fullest! DoT and crit chance works but not the % of crit damage
non_process
resilience bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link obviously does not work as intended their server has resilience working to the fullest dot and crit chance works but not the of crit damage
0
57,350
11,741,373,863
IssuesEvent
2020-03-11 21:35:53
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[com_fields] wrapping media data in markup by default
Documentation Required J3 Issue No Code Attached Yet
plugins\fields\media\tmpl\media.php line 34 There is no logic that I see to stop this from happening, I want to set a background image with a media field in custom fields, but when I retrieve the data it comes with the <img src= markup around it.
1.0
[com_fields] wrapping media data in markup by default - plugins\fields\media\tmpl\media.php line 34 There is no logic that I see to stop this from happening, I want to set a background image with a media field in custom fields, but when I retrieve the data it comes with the <img src= markup around it.
non_process
wrapping media data in markup by default plugins fields media tmpl media php line there is no logic that i see to stop this from happening i want to set a background image with a media field in custom fields but when i retrieve the data it comes with the img src markup around it
0
8,585
11,755,977,170
IssuesEvent
2020-03-13 10:36:12
MHRA/products
https://api.github.com/repos/MHRA/products
closed
AUTOMATIC BATCH PROCESS - Delete service removes deleted file from index
EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: TASK :rescue_worker_helmet:
### User want As a user I want to see up to date documents on the products website So I can make informed decisions **Customer acceptance criteria** **Technical acceptance criteria** Delete service removes the deleted file from the search index, using a HTTP delete request to the Azure API. **Data acceptance criteria** **Testing acceptance criteria** **Size** S **Value** **Effort** ### Exit Criteria met - [x] Backlog - [x] Discovery - [x] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
1.0
AUTOMATIC BATCH PROCESS - Delete service removes deleted file from index - ### User want As a user I want to see up to date documents on the products website So I can make informed decisions **Customer acceptance criteria** **Technical acceptance criteria** Delete service removes the deleted file from the search index, using a HTTP delete request to the Azure API. **Data acceptance criteria** **Testing acceptance criteria** **Size** S **Value** **Effort** ### Exit Criteria met - [x] Backlog - [x] Discovery - [x] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
process
automatic batch process delete service removes deleted file from index user want as a user i want to see up to date documents on the products website so i can make informed decisions customer acceptance criteria technical acceptance criteria delete service removes the deleted file from the search index using a http delete request to the azure api data acceptance criteria testing acceptance criteria size s value effort exit criteria met backlog discovery duxd development quality assurance release and validate
1
4,359
7,260,513,920
IssuesEvent
2018-02-18 10:53:12
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[FEATURE] New processing algorithm "minimum bounding geometry"
Automatic new feature Processing
Original commit: https://github.com/qgis/QGIS/commit/83affdc7f531a77cdb963fa8285fdc7af9015c76 by nyalldawson This algorithm creates geometries which enclose the features from an input layer. Numerous enclosing geometry types are supported, including bounding boxes (envelopes), oriented rectangles, circles and convex hulls. Optionally, the features can be grouped by a field. If set, this causes the output layer to contain one feature per grouped value with a minimal geometry covering just the features with matching values.
1.0
[FEATURE] New processing algorithm "minimum bounding geometry" - Original commit: https://github.com/qgis/QGIS/commit/83affdc7f531a77cdb963fa8285fdc7af9015c76 by nyalldawson This algorithm creates geometries which enclose the features from an input layer. Numerous enclosing geometry types are supported, including bounding boxes (envelopes), oriented rectangles, circles and convex hulls. Optionally, the features can be grouped by a field. If set, this causes the output layer to contain one feature per grouped value with a minimal geometry covering just the features with matching values.
process
new processing algorithm minimum bounding geometry original commit by nyalldawson this algorithm creates geometries which enclose the features from an input layer numerous enclosing geometry types are supported including bounding boxes envelopes oriented rectangles circles and convex hulls optionally the features can be grouped by a field if set this causes the output layer to contain one feature per grouped value with a minimal geometry covering just the features with matching values
1
63,769
12,374,413,646
IssuesEvent
2020-05-19 01:30:19
toebes/ciphers
https://api.github.com/repos/toebes/ciphers
opened
Baconian word generator needs a UI to show letters chosen
CodeBusters enhancement
When generating a word baconian, it needs to have a field for the HINT characters. With the given Hint characters, it should show in the letter map which letters are covered by the hint. For example with the sample plain text SOMETHING and a HINT of SOME With the text chosen as: BY OUR ERNST ALERT AUDIO --- BE ITS EARTH A BOOK ABBEY On the mapping, the letters **AB DE I L NO RSTU Y** should be bold or highlighted in a color as well as the A/B letter that they map to **AB**C**DE**FGH**I**JK**L**M**NO**PQ**RSTU**VWX**Y**Z Ideally the code should also check the question text to make sure that the hint occurs in the question (like the other generators do). Note that the hint field should only be present and checked for the word baconian.
1.0
Baconian word generator needs a UI to show letters chosen - When generating a word baconian, it needs to have a field for the HINT characters. With the given Hint characters, it should show in the letter map which letters are covered by the hint. For example with the sample plain text SOMETHING and a HINT of SOME With the text chosen as: BY OUR ERNST ALERT AUDIO --- BE ITS EARTH A BOOK ABBEY On the mapping, the letters **AB DE I L NO RSTU Y** should be bold or highlighted in a color as well as the A/B letter that they map to **AB**C**DE**FGH**I**JK**L**M**NO**PQ**RSTU**VWX**Y**Z Ideally the code should also check the question text to make sure that the hint occurs in the question (like the other generators do). Note that the hint field should only be present and checked for the word baconian.
non_process
baconian word generator needs a ui to show letters chosen when generating a word baconian it needs to have a field for the hint characters with the given hint characters it should show in the letter map which letters are covered by the hint for example with the sample plain text something and a hint of some with the text chosen as by our ernst alert audio be its earth a book abbey on the mapping the letters ab de i l no rstu y should be bold or highlighted in a color as well as the a b letter that they map to ab c de fgh i jk l m no pq rstu vwx y z ideally the code should also check the question text to make sure that the hint occurs in the question like the other generators do note that the hint field should only be present and checked for the word baconian
0
86,381
16,984,334,666
IssuesEvent
2021-06-30 12:49:43
bacher09/pwgen-for-bios
https://api.github.com/repos/bacher09/pwgen-for-bios
closed
Dell Latitude 5490 E7A8
code
Hi, will Dell E7A8 be supported soon or can somebody help me with a code. Maybe @qasimtoep01
1.0
Dell Latitude 5490 E7A8 - Hi, will Dell E7A8 be supported soon or can somebody help me with a code. Maybe @qasimtoep01
non_process
dell latitude hi will dell be supported soon or can somebody help me with a code maybe
0
10,645
13,446,207,862
IssuesEvent
2020-09-08 12:38:04
MHRA/products
https://api.github.com/repos/MHRA/products
closed
PARs - Upload success & error messages
EPIC - PARs process
Had a chat with @roughprada and it was decided: - on successful form submission, there shoud be a success screen as per [the design](https://app.zeplin.io/project/5dd51ae21205c944f8c1d35b/screen/5ebbfcbb5f5cec74857f3c70) - if the there's an error submitting the form, an [error summary component](https://design-system.service.gov.uk/components/error-summary/) should be shown above the page heading on the "Check your answers" page (this matches the GOV.UK design system) - no loading screen (for the time being at least), just submit the form when they click "Accept and send" on the "Check your answers" page and then either show them the success screen or add an error summary to the top of the page and scroll the user up to the top so they can see it. - Update the HTML title in the browser tab when an error is thrown (as per GDS guidelines).
1.0
PARs - Upload success & error messages - Had a chat with @roughprada and it was decided: - on successful form submission, there shoud be a success screen as per [the design](https://app.zeplin.io/project/5dd51ae21205c944f8c1d35b/screen/5ebbfcbb5f5cec74857f3c70) - if the there's an error submitting the form, an [error summary component](https://design-system.service.gov.uk/components/error-summary/) should be shown above the page heading on the "Check your answers" page (this matches the GOV.UK design system) - no loading screen (for the time being at least), just submit the form when they click "Accept and send" on the "Check your answers" page and then either show them the success screen or add an error summary to the top of the page and scroll the user up to the top so they can see it. - Update the HTML title in the browser tab when an error is thrown (as per GDS guidelines).
process
pars upload success error messages had a chat with roughprada and it was decided on successful form submission there shoud be a success screen as per if the there s an error submitting the form an should be shown above the page heading on the check your answers page this matches the gov uk design system no loading screen for the time being at least just submit the form when they click accept and send on the check your answers page and then either show them the success screen or add an error summary to the top of the page and scroll the user up to the top so they can see it update the html title in the browser tab when an error is thrown as per gds guidelines
1
13,237
15,706,429,457
IssuesEvent
2021-03-26 17:25:08
pacificclimate/climate-explorer-data-prep
https://api.github.com/repos/pacificclimate/climate-explorer-data-prep
closed
Make minimum and maximum watershed elevation datasets
process new data
The climate explorer backend calculates the Melton Ratio, which is the change in elevation over a watershed divided by the square root of the area of the watershed. We've been calculating this with a dataset containing the average elevation for each grid cell, but this is producing inaccurate results. What we need are datasets containing the minimum and maximum elevations for each grid cell. Hopefully `cdo` has a command to do this easily, it seems like the sort of thing they would support.
1.0
Make minimum and maximum watershed elevation datasets - The climate explorer backend calculates the Melton Ratio, which is the change in elevation over a watershed divided by the square root of the area of the watershed. We've been calculating this with a dataset containing the average elevation for each grid cell, but this is producing inaccurate results. What we need are datasets containing the minimum and maximum elevations for each grid cell. Hopefully `cdo` has a command to do this easily, it seems like the sort of thing they would support.
process
make minimum and maximum watershed elevation datasets the climate explorer backend calculates the melton ratio which is the change in elevation over a watershed divided by the square root of the area of the watershed we ve been calculating this with a dataset containing the average elevation for each grid cell but this is producing inaccurate results what we need are datasets containing the minimum and maximum elevations for each grid cell hopefully cdo has a command to do this easily it seems like the sort of thing they would support
1
406,968
11,904,838,957
IssuesEvent
2020-03-30 17:31:27
osmontrouge/caresteouvert
https://api.github.com/repos/osmontrouge/caresteouvert
closed
Ouvert le mardi : le lundi ça reste fermé !
bug priority: medium
https://www.caresteouvert.fr/@47.790006,-3.487664,21.20/place/n840334545 Ouverture à 09:00. Exact mais mardi 09:00, pas lundi 09:00 or on est lundi. Bon mercredi 1er avril, ça le fera : "on vous a dit que c'était fermé jusqu'à 09:00, on n'a pas dit quand c'était ouvert" ;-)
1.0
Ouvert le mardi : le lundi ça reste fermé ! - https://www.caresteouvert.fr/@47.790006,-3.487664,21.20/place/n840334545 Ouverture à 09:00. Exact mais mardi 09:00, pas lundi 09:00 or on est lundi. Bon mercredi 1er avril, ça le fera : "on vous a dit que c'était fermé jusqu'à 09:00, on n'a pas dit quand c'était ouvert" ;-)
non_process
ouvert le mardi le lundi ça reste fermé ouverture à exact mais mardi pas lundi or on est lundi bon mercredi avril ça le fera on vous a dit que c était fermé jusqu à on n a pas dit quand c était ouvert
0
13,648
16,358,711,072
IssuesEvent
2021-05-14 05:32:26
Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials
https://api.github.com/repos/Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials
closed
Monthly Report Issues
coding data processing help wanted monthly report
- [ ] NSDP Table * DataFrames for the NSDP indicators largely complete. Next step is to calculate whether these indicators are achieving their Targets, to do this we need Baseline data taken from the 2016 monthly report. This is because the NSDP was launched at the start of 2017. Working with Anna to get Raw data to calculate Baseline indicators can then calculate percentage increase or decrease in Trade for month we are reporting on. * Need to take this data and merge it into the formatted tables, not sure how to do this. - [ ] Trade Balance by Major Partner Countries * If using a map, should have Key too * Again need to take this data and visualise it onto the map, not sure how to do this * Should also have a table or chart that highlights the countries and gives their Balance of Trade figures - [ ] Trade Balance of Pacific Islands * The DataFrame produced has positive Trade Balance, Zero Trade and Negative Trade Balance. Need to connect the countries that have positive and negative into a Bar Chart; for those that have Zero Trade maybe add them as a note- again not sure how to visualise this information. - [x] Trade of Balance of New Emerging Markets * Have to create a subset of Trade Balances for all countries that fall into OTHER category. Then within this subset sort in ascending order (highest to lowest). Take the Top 5 (countries export a lot too) and Lost 5 (Countries import a lot from) and visualise this on a bar chart. Again not sure how to do this. - [ ] Trade by Trade Agreement * Is possible to go through code, and how this extracts information from cleaned dataset. This Table is also incomplete in the Latest Monthly Table script (2). - [ ] Principle Exports * Currently working with Anna to get data for major exports by month from 2010 to create a DataFrame to visualise. We will then be able to track trends on monthly basis. 
- [ ] Top 5 New Major Exports * This will be similar code to "Trade of Balance of New Emerging Markets" that not sure how to write - [ ] Principle Imports * Currently working with Anna to get data for major imports by month from 2010 to create a DataFrame to visualise. We will then be able to track trends on monthly basis. - [ ] Top 5 New Major Imports * This will be similar code to "Trade of Balance of New Emerging Markets" that not sure how to write - [ ] Methodology and Meta-Data * Haven't got to this stage yet, have to think through definitions, concepts, rationales and methodologies that want to include
1.0
Monthly Report Issues - - [ ] NSDP Table * DataFrames for the NSDP indicators largely complete. Next step is to calculate whether these indicators are achieving their Targets, to do this we need Baseline data taken from the 2016 monthly report. This is because the NSDP was launched at the start of 2017. Working with Anna to get Raw data to calculate Baseline indicators can then calculate percentage increase or decrease in Trade for month we are reporting on. * Need to take this data and merge it into the formatted tables, not sure how to do this. - [ ] Trade Balance by Major Partner Countries * If using a map, should have Key too * Again need to take this data and visualise it onto the map, not sure how to do this * Should also have a table or chart that highlights the countries and gives their Balance of Trade figures - [ ] Trade Balance of Pacific Islands * The DataFrame produced has positive Trade Balance, Zero Trade and Negative Trade Balance. Need to connect the countries that have positive and negative into a Bar Chart; for those that have Zero Trade maybe add them as a note- again not sure how to visualise this information. - [x] Trade of Balance of New Emerging Markets * Have to create a subset of Trade Balances for all countries that fall into OTHER category. Then within this subset sort in ascending order (highest to lowest). Take the Top 5 (countries export a lot too) and Lost 5 (Countries import a lot from) and visualise this on a bar chart. Again not sure how to do this. - [ ] Trade by Trade Agreement * Is possible to go through code, and how this extracts information from cleaned dataset. This Table is also incomplete in the Latest Monthly Table script (2). - [ ] Principle Exports * Currently working with Anna to get data for major exports by month from 2010 to create a DataFrame to visualise. We will then be able to track trends on monthly basis. 
- [ ] Top 5 New Major Exports * This will be similar code to "Trade of Balance of New Emerging Markets" that not sure how to write - [ ] Principle Imports * Currently working with Anna to get data for major imports by month from 2010 to create a DataFrame to visualise. We will then be able to track trends on monthly basis. - [ ] Top 5 New Major Imports * This will be similar code to "Trade of Balance of New Emerging Markets" that not sure how to write - [ ] Methodology and Meta-Data * Haven't got to this stage yet, have to think through definitions, concepts, rationales and methodologies that want to include
process
monthly report issues nsdp table dataframes for the nsdp indicators largely complete next step is to calculate whether these indicators are achieving their targets to do this we need baseline data taken from the monthly report this is because the nsdp was launched at the start of working with anna to get raw data to calculate baseline indicators can then calculate percentage increase or decrease in trade for month we are reporting on need to take this data and merge it into the formatted tables not sure how to do this trade balance by major partner countries if using a map should have key too again need to take this data and visualise it onto the map not sure how to do this should also have a table or chart that highlights the countries and gives their balance of trade figures trade balance of pacific islands the dataframe produced has positive trade balance zero trade and negative trade balance need to connect the countries that have positive and negative into a bar chart for those that have zero trade maybe add them as a note again not sure how to visualise this information trade of balance of new emerging markets have to create a subset of trade balances for all countries that fall into other category then within this subset sort in ascending order highest to lowest take the top countries export a lot too and lost countries import a lot from and visualise this on a bar chart again not sure how to do this trade by trade agreement is possible to go through code and how this extracts information from cleaned dataset this table is also incomplete in the latest monthly table script principle exports currently working with anna to get data for major exports by month from to create a dataframe to visualise we will then be able to track trends on monthly basis top new major exports this will be similar code to trade of balance of new emerging markets that not sure how to write principle imports currently working with anna to get data for major imports by month from to 
create a dataframe to visualise we will then be able to track trends on monthly basis top new major imports this will be similar code to trade of balance of new emerging markets that not sure how to write methodology and meta data haven t got to this stage yet have to think through definitions concepts rationales and methodologies that want to include
1
119,642
25,552,392,919
IssuesEvent
2022-11-30 01:37:47
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
CodeGen infrastructure work planned for .NET 8
area-CodeGen-coreclr User Story
This issue collects a set of CLR CodeGen (JIT) inner-loop infrastructure improvements we are planning for .NET 8. (Related .NET 7 work was tracked with https://github.com/dotnet/runtime/issues/64832) # Planned for .NET 8 ## New SuperPMI-related features - [ ] https://github.com/dotnet/runtime/issues/79015 - [ ] https://github.com/dotnet/runtime/issues/79017 - [ ] https://github.com/dotnet/runtime/issues/47543 - [ ] https://github.com/dotnet/runtime/issues/47545 ## Bug fix level changes - [ ] https://github.com/dotnet/runtime/issues/75169 - [ ] https://github.com/dotnet/runtime/issues/76421 - [ ] https://github.com/dotnet/runtime/issues/60947
1.0
CodeGen infrastructure work planned for .NET 8 - This issue collects a set of CLR CodeGen (JIT) inner-loop infrastructure improvements we are planning for .NET 8. (Related .NET 7 work was tracked with https://github.com/dotnet/runtime/issues/64832) # Planned for .NET 8 ## New SuperPMI-related features - [ ] https://github.com/dotnet/runtime/issues/79015 - [ ] https://github.com/dotnet/runtime/issues/79017 - [ ] https://github.com/dotnet/runtime/issues/47543 - [ ] https://github.com/dotnet/runtime/issues/47545 ## Bug fix level changes - [ ] https://github.com/dotnet/runtime/issues/75169 - [ ] https://github.com/dotnet/runtime/issues/76421 - [ ] https://github.com/dotnet/runtime/issues/60947
non_process
codegen infrastructure work planned for net this issue collects a set of clr codegen jit inner loop infrastructure improvements we are planning for net related net work was tracked with planned for net new superpmi related features bug fix level changes
0
17,437
24,051,777,438
IssuesEvent
2022-09-16 13:25:55
PluginBugs/Issues-ItemsAdder
https://api.github.com/repos/PluginBugs/Issues-ItemsAdder
closed
plugin does not load on 1.19.2
Bug Other plugin bug Compatibility with other plugin
### Terms - [X] I'm using the very latest version of ItemsAdder and its dependencies. - [X] I already searched on this [Github page](https://github.com/PluginBugs/Issues-ItemsAdder/issues) to check if the same issue was already reported. - [X] I already searched on the [plugin wiki](https://itemsadder.devs.beer/) to know if a solution is already known. - [X] I already asked on the **#💬ia-community-help** channel on **Discord** to know if anyone already has a solution for the issue. ### Discord tag (optional) _No response_ ### What happened? [22:16:17 INFO]: ================ [ExecutableItems] ================ [22:16:17 INFO]: [ItemsAdder] Enabling ItemsAdder v3.2.3m [22:16:18 INFO]: [ItemsAdder] ___ ___ __ __ __ ___ __ ItemsAdder 3.2.3m | | |__ |\/| /__` /\ | \ | \ |__ |__) LoneLibs 1.0.21 | | |___ | | .__/ /--\ |__/ |__/ |___ | \ Paper git-Paper-142 (MC: 1.19.2) [22:16:18 ERROR]: Error occurred while enabling ItemsAdder v3.2.3m (Is it up to date?) java.lang.NoClassDefFoundError: net/sourcewriters/spigot/rwg/legacy/api/compatibility/AddonInitializationException at dev.lone.itemsadder.Main.onEnable(SourceFile:376) ~[ItemsAdder_3.2.3m.jar:?] at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:264) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:370) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:542) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] 
at org.bukkit.craftbukkit.v1_19_R1.CraftServer.enablePlugin(CraftServer.java:565) ~[paper-1.19.2.jar:git-Paper-142] at org.bukkit.craftbukkit.v1_19_R1.CraftServer.enablePlugins(CraftServer.java:479) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.loadWorld0(MinecraftServer.java:636) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.loadLevel(MinecraftServer.java:422) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.dedicated.DedicatedServer.initServer(DedicatedServer.java:306) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.runServer(MinecraftServer.java:1126) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.lambda$spin$0(MinecraftServer.java:305) ~[paper-1.19.2.jar:git-Paper-142] at java.lang.Thread.run(Thread.java:833) ~[?:?] Caused by: java.lang.ClassNotFoundException: net.sourcewriters.spigot.rwg.legacy.api.compatibility.AddonInitializationException at org.bukkit.plugin.java.PluginClassLoader.loadClass0(PluginClassLoader.java:151) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at org.bukkit.plugin.java.PluginClassLoader.loadClass(PluginClassLoader.java:103) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[?:?] ... 12 more ### Steps to reproduce the issue 1 ### Server version _No response_ ### ItemsAdder Version 1 ### ProtocolLib Version 1 ### LoneLibs Version 1 ### LightAPI Version (optional) _No response_ ### LibsDisguises Version (optional) _No response_ ### FULL server log _No response_ ### Error (optional) _No response_ ### ItemsAdder config.yml _No response_ ### Problematic items yml configuration file (optional) _No response_ ### Other files, you can drag and drop them here to upload. (optional) _No response_ ### Screenshots/Videos (you can drag and drop files or paste links) _No response_
True
plugin does not load on 1.19.2 - ### Terms - [X] I'm using the very latest version of ItemsAdder and its dependencies. - [X] I already searched on this [Github page](https://github.com/PluginBugs/Issues-ItemsAdder/issues) to check if the same issue was already reported. - [X] I already searched on the [plugin wiki](https://itemsadder.devs.beer/) to know if a solution is already known. - [X] I already asked on the **#💬ia-community-help** channel on **Discord** to know if anyone already has a solution for the issue. ### Discord tag (optional) _No response_ ### What happened? [22:16:17 INFO]: ================ [ExecutableItems] ================ [22:16:17 INFO]: [ItemsAdder] Enabling ItemsAdder v3.2.3m [22:16:18 INFO]: [ItemsAdder] ___ ___ __ __ __ ___ __ ItemsAdder 3.2.3m | | |__ |\/| /__` /\ | \ | \ |__ |__) LoneLibs 1.0.21 | | |___ | | .__/ /--\ |__/ |__/ |___ | \ Paper git-Paper-142 (MC: 1.19.2) [22:16:18 ERROR]: Error occurred while enabling ItemsAdder v3.2.3m (Is it up to date?) java.lang.NoClassDefFoundError: net/sourcewriters/spigot/rwg/legacy/api/compatibility/AddonInitializationException at dev.lone.itemsadder.Main.onEnable(SourceFile:376) ~[ItemsAdder_3.2.3m.jar:?] at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:264) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:370) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:542) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] 
at org.bukkit.craftbukkit.v1_19_R1.CraftServer.enablePlugin(CraftServer.java:565) ~[paper-1.19.2.jar:git-Paper-142] at org.bukkit.craftbukkit.v1_19_R1.CraftServer.enablePlugins(CraftServer.java:479) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.loadWorld0(MinecraftServer.java:636) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.loadLevel(MinecraftServer.java:422) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.dedicated.DedicatedServer.initServer(DedicatedServer.java:306) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.runServer(MinecraftServer.java:1126) ~[paper-1.19.2.jar:git-Paper-142] at net.minecraft.server.MinecraftServer.lambda$spin$0(MinecraftServer.java:305) ~[paper-1.19.2.jar:git-Paper-142] at java.lang.Thread.run(Thread.java:833) ~[?:?] Caused by: java.lang.ClassNotFoundException: net.sourcewriters.spigot.rwg.legacy.api.compatibility.AddonInitializationException at org.bukkit.plugin.java.PluginClassLoader.loadClass0(PluginClassLoader.java:151) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at org.bukkit.plugin.java.PluginClassLoader.loadClass(PluginClassLoader.java:103) ~[paper-api-1.19.2-R0.1-SNAPSHOT.jar:?] at java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[?:?] ... 12 more ### Steps to reproduce the issue 1 ### Server version _No response_ ### ItemsAdder Version 1 ### ProtocolLib Version 1 ### LoneLibs Version 1 ### LightAPI Version (optional) _No response_ ### LibsDisguises Version (optional) _No response_ ### FULL server log _No response_ ### Error (optional) _No response_ ### ItemsAdder config.yml _No response_ ### Problematic items yml configuration file (optional) _No response_ ### Other files, you can drag and drop them here to upload. (optional) _No response_ ### Screenshots/Videos (you can drag and drop files or paste links) _No response_
non_process
plugin does not load on terms i m using the very latest version of itemsadder and its dependencies i already searched on this to check if the same issue was already reported i already searched on the to know if a solution is already known i already asked on the 💬ia community help channel on discord to know if anyone already has a solution for the issue discord tag optional no response what happened enabling itemsadder itemsadder lonelibs paper git paper mc error occurred while enabling itemsadder is it up to date java lang noclassdeffounderror net sourcewriters spigot rwg legacy api compatibility addoninitializationexception at dev lone itemsadder main onenable sourcefile at org bukkit plugin java javaplugin setenabled javaplugin java at org bukkit plugin java javapluginloader enableplugin javapluginloader java at org bukkit plugin simplepluginmanager enableplugin simplepluginmanager java at org bukkit craftbukkit craftserver enableplugin craftserver java at org bukkit craftbukkit craftserver enableplugins craftserver java at net minecraft server minecraftserver minecraftserver java at net minecraft server minecraftserver loadlevel minecraftserver java at net minecraft server dedicated dedicatedserver initserver dedicatedserver java at net minecraft server minecraftserver runserver minecraftserver java at net minecraft server minecraftserver lambda spin minecraftserver java at java lang thread run thread java caused by java lang classnotfoundexception net sourcewriters spigot rwg legacy api compatibility addoninitializationexception at org bukkit plugin java pluginclassloader pluginclassloader java at org bukkit plugin java pluginclassloader loadclass pluginclassloader java at java lang classloader loadclass classloader java more steps to reproduce the issue server version no response itemsadder version protocollib version lonelibs version lightapi version optional no response libsdisguises version optional no response full server log no response error optional no 
response itemsadder config yml no response problematic items yml configuration file optional no response other files you can drag and drop them here to upload optional no response screenshots videos you can drag and drop files or paste links no response
0
626,086
19,784,773,816
IssuesEvent
2022-01-18 04:33:46
microsoft/PowerToys
https://api.github.com/repos/microsoft/PowerToys
closed
Incorrect resolution displayed when using DPI scaling
Issue-Bug FancyZones-Editor Product-FancyZones Area-User Interface Priority-3
### Microsoft PowerToys version 0.37.2 ### Running as admin - [ ] Yes ### Area(s) with issue? FancyZones, FancyZones Editor ### Steps to reproduce Open the Fancy Zones editor using a system using 125 (for example) display scaling with a monitor at 2560x1440 resolution (for example) Note the displayed monitor resolutions. Resize a zone. Note the displayed monitor resolutions. ### ✔️ Expected Behavior The displayed resolution values do not match the monitor resolutions. For example, at monitor running at 2560x1440 at 125% scaling, the Fancy Zones editor displays zones at 2048x1152. From a user experience perspective, if I want to make a zone to be exactly 900x600 (relative to monitor resolution), the FZ editor shows the incorrect values to achieve this. ### ❌ Actual Behavior The displayed coordinates appear to be scaled (i.e. multiplied by) the scaling factor instead of showing monitor coordinates. ![image](https://user-images.githubusercontent.com/1140984/121896668-2c704900-cce7-11eb-8dad-3f44dddb6ff0.png) ### Other Software _No response_
1.0
Incorrect resolution displayed when using DPI scaling - ### Microsoft PowerToys version 0.37.2 ### Running as admin - [ ] Yes ### Area(s) with issue? FancyZones, FancyZones Editor ### Steps to reproduce Open the Fancy Zones editor using a system using 125 (for example) display scaling with a monitor at 2560x1440 resolution (for example) Note the displayed monitor resolutions. Resize a zone. Note the displayed monitor resolutions. ### ✔️ Expected Behavior The displayed resolution values do not match the monitor resolutions. For example, at monitor running at 2560x1440 at 125% scaling, the Fancy Zones editor displays zones at 2048x1152. From a user experience perspective, if I want to make a zone to be exactly 900x600 (relative to monitor resolution), the FZ editor shows the incorrect values to achieve this. ### ❌ Actual Behavior The displayed coordinates appear to be scaled (i.e. multiplied by) the scaling factor instead of showing monitor coordinates. ![image](https://user-images.githubusercontent.com/1140984/121896668-2c704900-cce7-11eb-8dad-3f44dddb6ff0.png) ### Other Software _No response_
non_process
incorrect resolution displayed when using dpi scaling microsoft powertoys version running as admin yes area s with issue fancyzones fancyzones editor steps to reproduce open the fancy zones editor using a system using for example display scaling with a monitor at resolution for example note the displayed monitor resolutions resize a zone note the displayed monitor resolutions ✔️ expected behavior the displayed resolution values do not match the monitor resolutions for example at monitor running at at scaling the fancy zones editor displays zones at from a user experience perspective if i want to make a zone to be exactly relative to monitor resolution the fz editor shows the incorrect values to achieve this ❌ actual behavior the displayed coordinates appear to be scaled i e multiplied by the scaling factor instead of showing monitor coordinates other software no response
0
20,101
26,636,515,226
IssuesEvent
2023-01-24 22:31:09
pytorch/serve
https://api.github.com/repos/pytorch/serve
opened
support parallel pre and post processing in handler
enhancement preprocessing optimization
### 🚀 The feature Today, handler pre and post processes batch in sequential. This is not efficient for large batch and large payload. To improve performance, handler needs to support pre and post processing in parallel. ### Motivation, pitch This task is to improve performance for batching. ### Alternatives _No response_ ### Additional context _No response_
1.0
support parallel pre and post processing in handler - ### 🚀 The feature Today, handler pre and post processes batch in sequential. This is not efficient for large batch and large payload. To improve performance, handler needs to support pre and post processing in parallel. ### Motivation, pitch This task is to improve performance for batching. ### Alternatives _No response_ ### Additional context _No response_
process
support parallel pre and post processing in handler 🚀 the feature today handler pre and post processes batch in sequential this is not efficient for large batch and large payload to improve performance handler needs to support pre and post processing in parallel motivation pitch this task is to improve performance for batching alternatives no response additional context no response
1
12,513
14,963,803,396
IssuesEvent
2021-01-27 11:04:00
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Audit Logs] "studyVersion" is displayed null for the events
Bug P2 Participant manager datastore Process: Fixed Process: Tested dev
Events in PM: 1. SITE_ADDED_FOR_STUDY 2. PARTICIPANT_EMAIL_ADDED 3. PARTICIPANTS_EMAIL_LIST_IMPORTED 4. PARTICIPANTS_EMAIL_LIST_IMPORT_FAILED 5. PARTICIPANTS_EMAIL_LIST_IMPORT_PARTIAL_FAILED 6. SITE_DECOMMISSIONED_FOR_STUDY 7. SITE_ACTIVATED_FOR_STUDY 8. PARTICIPANT_INVITATION_DISABLED 9. CONSENT_DOCUMENT_DOWNLOADED 10. INVITATION_EMAIL_SENT 11. INVITATION_EMAIL_FAILED 12. PARTICIPANT_INVITATION_ENABLED 13. ENROLLMENT_TARGET_UPDATED 14. SITE_PARTICIPANT_REGISTRY_VIEWED 15. STUDY_PARTICIPANT_REGISTRY_VIEWED Sample snippet for event `SITE_ADDED_FOR_STUDY` event ``` { "insertId": "epzd98g1b9f0co", "jsonPayload": { "correlationId": "2eb996c6-4d84-435b-b438-35caf65ae6b2", "userAccessLevel": null, "eventCode": "SITE_ADDED_FOR_STUDY", "platformVersion": "1.0", "source": "PARTICIPANT MANAGER", "occurred": 1611313124346, "mobilePlatform": "UNKNOWN", "userId": "2c9180897689364401768a08f0060000", "studyId": "2c91808876fa9409017706d12918002a", "destinationApplicationVersion": "1.0", "participantId": null, "appVersion": "v0.1", "studyVersion": null, "siteId": "2c91808977290f8f017729bf13eb0006", "sourceApplicationVersion": "1.0", "destination": "PARTICIPANT USER DATASTORE", "resourceServer": null, "userIp": "117.211.20.33", "description": "Site added to study (site ID- 2c91808977290f8f017729bf13eb0006).", "appId": "2c91808876fa9409017706d11ea10028" } ```
2.0
[PM] [Audit Logs] "studyVersion" is displayed null for the events - Events in PM: 1. SITE_ADDED_FOR_STUDY 2. PARTICIPANT_EMAIL_ADDED 3. PARTICIPANTS_EMAIL_LIST_IMPORTED 4. PARTICIPANTS_EMAIL_LIST_IMPORT_FAILED 5. PARTICIPANTS_EMAIL_LIST_IMPORT_PARTIAL_FAILED 6. SITE_DECOMMISSIONED_FOR_STUDY 7. SITE_ACTIVATED_FOR_STUDY 8. PARTICIPANT_INVITATION_DISABLED 9. CONSENT_DOCUMENT_DOWNLOADED 10. INVITATION_EMAIL_SENT 11. INVITATION_EMAIL_FAILED 12. PARTICIPANT_INVITATION_ENABLED 13. ENROLLMENT_TARGET_UPDATED 14. SITE_PARTICIPANT_REGISTRY_VIEWED 15. STUDY_PARTICIPANT_REGISTRY_VIEWED Sample snippet for event `SITE_ADDED_FOR_STUDY` event ``` { "insertId": "epzd98g1b9f0co", "jsonPayload": { "correlationId": "2eb996c6-4d84-435b-b438-35caf65ae6b2", "userAccessLevel": null, "eventCode": "SITE_ADDED_FOR_STUDY", "platformVersion": "1.0", "source": "PARTICIPANT MANAGER", "occurred": 1611313124346, "mobilePlatform": "UNKNOWN", "userId": "2c9180897689364401768a08f0060000", "studyId": "2c91808876fa9409017706d12918002a", "destinationApplicationVersion": "1.0", "participantId": null, "appVersion": "v0.1", "studyVersion": null, "siteId": "2c91808977290f8f017729bf13eb0006", "sourceApplicationVersion": "1.0", "destination": "PARTICIPANT USER DATASTORE", "resourceServer": null, "userIp": "117.211.20.33", "description": "Site added to study (site ID- 2c91808977290f8f017729bf13eb0006).", "appId": "2c91808876fa9409017706d11ea10028" } ```
process
studyversion is displayed null for the events events in pm site added for study participant email added participants email list imported participants email list import failed participants email list import partial failed site decommissioned for study site activated for study participant invitation disabled consent document downloaded invitation email sent invitation email failed participant invitation enabled enrollment target updated site participant registry viewed study participant registry viewed sample snippet for event site added for study event insertid jsonpayload correlationid useraccesslevel null eventcode site added for study platformversion source participant manager occurred mobileplatform unknown userid studyid destinationapplicationversion participantid null appversion studyversion null siteid sourceapplicationversion destination participant user datastore resourceserver null userip description site added to study site id appid
1
39,165
10,310,359,737
IssuesEvent
2019-08-29 15:01:14
golang/go
https://api.github.com/repos/golang/go
closed
x/exp: tests consistently failing on linux-amd64-nocgo due to broken golang.org/x/mobile/gl dependency
Builders NeedsFix Testing help wanted
The `x/mobile` tests are not run on most builders, but the `x/exp` tests are. The `x/exp` build is consistently failing on `linux-amd64-nocgo` — apparently due to a dependency on the package `golang.org/x/mobile/gl`, which fails to compile:

```
# golang.org/x/mobile/gl
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:20:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:29:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:39:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:53:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:63:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:73:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:83:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:93:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:102:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:114:12: undefined: context
../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:114:12: too many errors
```

We should either fix the `x/mobile` build or remove the dependency on it from `x/exp`.

CC @steeve
1.0
x/exp: tests consistently failing on linux-amd64-nocgo due to broken golang.org/x/mobile/gl dependency - The `x/mobile` tests are not run on most builders, but the `x/exp` tests are. The `x/exp` build is consistently failing on `linux-amd64-nocgo` — apparently due to a dependency on the package `golang.org/x/mobile/gl`, which fails to compile: ``` # golang.org/x/mobile/gl ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:20:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:29:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:39:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:53:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:63:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:73:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:83:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:93:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:102:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:114:12: undefined: context ../../../../pkg/mod/golang.org/x/mobile@v0.0.0-20190312151609-d3739f865fa6/gl/gl.go:114:12: too many errors ``` We should either fix the `x/mobile` build or remove the dependency on it from `x/exp`. CC @steeve
non_process
x exp tests consistently failing on linux nocgo due to broken golang org x mobile gl dependency the x mobile tests are not run on most builders but the x exp tests are the x exp build is consistently failing on linux nocgo — apparently due to a dependency on the package golang org x mobile gl which fails to compile golang org x mobile gl pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go undefined context pkg mod golang org x mobile gl gl go too many errors we should either fix the x mobile build or remove the dependency on it from x exp cc steeve
0
16,167
20,603,885,142
IssuesEvent
2022-03-06 17:35:55
AcademySoftwareFoundation/OpenCue
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
opened
NetGen/NGSolve Pyside2 installation
process
Hi everyone, I have to install the graphical user interface for NGSolve and I'm trying to do it by following the directions below (https://github.com/NGSolve/ngsgui) but I get the following error message:

```
ERROR: Could not find a version that satisfies the requirement pyside2 (from versions: none)
ERROR: No matching distribution found for pyside2
```

I then tried to install pyside2 with the command `pip install pyside2` and I get the same message. How can I proceed?
1.0
NetGen/NGSolve Pyside2 installation - Hi everyone, I have to install the graphical user interface for NGSolve and I'm trying to do it by following the directions below (https://github.com/NGSolve/ngsgui) but I get the following error message: ``` ERROR: Could not find a version that satisfies the requirement pyside2 (from versions: none) ERROR: No matching distribution found for pyside2 ``` I then tried to install pyside2 with the command `pip install pyside2` and I get the same message. How can I proceed?
process
netgen ngsolve installation hi everyone i have to install the graphical user interface for ngsolve and i m trying to do it by following the directions below but i get the following error message error could not find a version that satisfies the requirement from versions none error no matching distribution found for i then tried to install with the command pip install and i get the same message how can i proceed
1
15,296
19,307,878,678
IssuesEvent
2021-12-13 13:29:46
hoprnet/hoprnet
https://api.github.com/repos/hoprnet/hoprnet
opened
Create trifecta email group
processes
<!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->

- [ ] Create a trifecta email group where the trifecta members are part of it
- [ ] Update processes describing that trifecta members become part of that group
- [ ] All tech meetings should be able to edit meetings
1.0
Create trifecta email group - <!--- Please DO NOT remove the automatically added 'new issue' label --> <!--- Provide a general summary of the issue in the Title above --> [ ] Create a trifecta email group where the trifecta members are part of it [ ] Update processes describing that trifecta members become part of that group [ ] All tech meetings should be able to edit meetings
process
create trifecta email group create a trifecta email group where the trifecta members are part of it update processes describing that trifecta members become part of that group all tech meetings should be able to edit meetings
1
262,913
27,989,507,447
IssuesEvent
2023-03-27 01:37:23
pazhanivel07/frameworks_base_Aosp10_r33_CVE-2021-0315
https://api.github.com/repos/pazhanivel07/frameworks_base_Aosp10_r33_CVE-2021-0315
opened
CVE-2023-20963 (Medium) detected in baseandroid-10.0.0_r46
Mend: dependency security vulnerability
## CVE-2023-20963 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r46</b></p></summary> <p> <p>Android framework classes and services</p> <p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p> <p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/frameworks_base_Aosp10_r33_CVE-2021-0315/commit/208fdb481d671eb38f28d914bfbbe6364ee43be2">208fdb481d671eb38f28d914bfbbe6364ee43be2</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/core/java/android/os/WorkSource.java</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In WorkSource, there is a possible parcel mismatch. This could lead to local escalation of privilege with no additional execution privileges needed. 
User interaction is not needed for exploitation.Product: AndroidVersions: Android-11 Android-12 Android-12L Android-13Android ID: A-220302519 <p>Publish Date: 2023-03-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20963>CVE-2023-20963</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://android.googlesource.com/platform/frameworks/base/+/266b3bddcf14d448c0972db64b42950f76c759e3">https://android.googlesource.com/platform/frameworks/base/+/266b3bddcf14d448c0972db64b42950f76c759e3</a></p> <p>Release Date: 2023-03-24</p> <p>Fix Resolution: android-13.0.0_r32</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-20963 (Medium) detected in baseandroid-10.0.0_r46 - ## CVE-2023-20963 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r46</b></p></summary> <p> <p>Android framework classes and services</p> <p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p> <p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/frameworks_base_Aosp10_r33_CVE-2021-0315/commit/208fdb481d671eb38f28d914bfbbe6364ee43be2">208fdb481d671eb38f28d914bfbbe6364ee43be2</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/core/java/android/os/WorkSource.java</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In WorkSource, there is a possible parcel mismatch. This could lead to local escalation of privilege with no additional execution privileges needed. 
User interaction is not needed for exploitation.Product: AndroidVersions: Android-11 Android-12 Android-12L Android-13Android ID: A-220302519 <p>Publish Date: 2023-03-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20963>CVE-2023-20963</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://android.googlesource.com/platform/frameworks/base/+/266b3bddcf14d448c0972db64b42950f76c759e3">https://android.googlesource.com/platform/frameworks/base/+/266b3bddcf14d448c0972db64b42950f76c759e3</a></p> <p>Release Date: 2023-03-24</p> <p>Fix Resolution: android-13.0.0_r32</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in baseandroid cve medium severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in head commit a href found in base branch master vulnerable source files core java android os worksource java vulnerability details in worksource there is a possible parcel mismatch this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend
0
14,289
17,263,158,177
IssuesEvent
2021-07-22 10:23:19
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
FULL JOIN rearrengment causes SQL errors
.Query Language (MBQL) Priority:P1 Querying/GUI Querying/Notebook Querying/Processor Type:Bug
**Describe the bug**

If you add a FULL JOIN, Metabase will rearrange the query and position the FULL JOIN at the end of the JOIN chain. Of course, the position of joins is not arbitrary. This may not only alter the result, but cause SQL errors if the JOIN condition references a table that isn't selected yet.

**Logs**

ERROR: missing FROM-clause entry for table "Foo Bar" Position: 1234

**To Reproduce**

Create the following schema:

```sql
CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (id int PRIMARY KEY);
CREATE TABLE c (id int PRIMARY KEY);
```

Create the following question:

<img width="337" alt="Screenshot 2021-07-21 at 17 03 55" src="https://user-images.githubusercontent.com/1772890/126511823-dc4e7899-2215-4756-bd2a-73aa677c2494.png">

1. Select a table A
2. FULL JOIN another table B (to A)
3. LEFT JOIN another table C (to B).

The SQL query generated by Metabase will be:

```sql
SELECT "public"."a"."id" AS "id", "B"."id" AS "B__id", "C"."id" AS "C__id"
FROM "public"."a"
LEFT JOIN "public"."c" "C" ON "B"."id" = "C"."id"
FULL JOIN "public"."b" "B" ON "public"."a"."id" = "B"."id"
LIMIT 1048575
```

The JOINs have been rearranged, resulting in the following error:

```
ERROR: missing FROM-clause entry for table "B" Position: 240
```

**Expected behavior**

The JOIN order specified in the editor should be retained.

**Information about your Metabase Installation:**

```
{
  "browser-info": {
    "language": "de",
    "platform": "MacIntel",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "1.8.0_292-heroku-b10",
    "java.vendor": "Oracle Corporation",
    "java.vendor.url": "http://java.oracle.com/",
    "java.version": "1.8.0_292-heroku",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "25.292-b10",
    "os.name": "Linux",
    "os.version": "4.4.0-1093-aws",
    "user.language": "en",
    "user.timezone": "Etc/UTC"
  },
  "metabase-info": {
    "databases": [ "postgres" ],
    "hosting-env": "heroku",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "11.12 (Ubuntu 11.12-1.pgdg16.04+1)"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.2.18"
      }
    },
    "run-mode": "prod",
    "version": {
      "tag": "v0.40.1",
      "date": "2021-07-14",
      "branch": "release-x.40.x",
      "hash": "ed8f9c8"
    },
    "settings": {
      "report-timezone": "Europe/Berlin"
    }
  }
}
```

**Severity**

This prevents users who do not have experience or access to SQL from building queries.
1.0
FULL JOIN rearrengment causes SQL errors - **Describe the bug** If you add a FULL JOIN metabase will rearrange the query and position the FULL JOIN at the end of the JOIN chain. Of course, the position of joins is not arbitrary. This may not only alter the result, but cause SQL errors if the JOIN condition references a table that isn't selected yet. **Logs** ERROR: missing FROM-clause entry for table "Foo Bar" Position: 1234 **To Reproduce** Create the following schema: ```sql CREATE TABLE a (id int PRIMARY KEY); CREATE TABLE b (id int PRIMARY KEY); CREATE TABLE c (id int PRIMARY KEY); ``` Create the following question: <img width="337" alt="Screenshot 2021-07-21 at 17 03 55" src="https://user-images.githubusercontent.com/1772890/126511823-dc4e7899-2215-4756-bd2a-73aa677c2494.png"> 1. Select a table A 2. FULL JOIN another table B (to A) 3. LEFT JOIN another table C (to B). The SQL query generated by Metabase will be: ```sql SELECT "public"."a"."id" AS "id", "B"."id" AS "B__id", "C"."id" AS "C__id" FROM "public"."a" LEFT JOIN "public"."c" "C" ON "B"."id" = "C"."id" FULL JOIN "public"."b" "B" ON "public"."a"."id" = "B"."id" LIMIT 1048575 ``` The JOINS have been rearranged, resulting in the following error: ``` ERROR: missing FROM-clause entry for table "B" Position: 240 ``` **Expected behavior** The JOIN order that specified in the editor should be retained. **Information about your Metabase Installation:** ``` { "browser-info": { "language": "de", "platform": "MacIntel", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36", "vendor": "Google Inc." 
}, "system-info": { "file.encoding": "UTF-8", "java.runtime.name": "OpenJDK Runtime Environment", "java.runtime.version": "1.8.0_292-heroku-b10", "java.vendor": "Oracle Corporation", "java.vendor.url": "http://java.oracle.com/", "java.version": "1.8.0_292-heroku", "java.vm.name": "OpenJDK 64-Bit Server VM", "java.vm.version": "25.292-b10", "os.name": "Linux", "os.version": "4.4.0-1093-aws", "user.language": "en", "user.timezone": "Etc/UTC" }, "metabase-info": { "databases": [ "postgres" ], "hosting-env": "heroku", "application-database": "postgres", "application-database-details": { "database": { "name": "PostgreSQL", "version": "11.12 (Ubuntu 11.12-1.pgdg16.04+1)" }, "jdbc-driver": { "name": "PostgreSQL JDBC Driver", "version": "42.2.18" } }, "run-mode": "prod", "version": { "tag": "v0.40.1", "date": "2021-07-14", "branch": "release-x.40.x", "hash": "ed8f9c8" }, "settings": { "report-timezone": "Europe/Berlin" } } } ``` **Severity** This prevents users who do not have experience or access to SQL from building queries.
process
full join rearrengment causes sql errors describe the bug if you add a full join metabase will rearrange the query and position the full join at the end of the join chain of course the position of joins is not arbitrary this may not only alter the result but cause sql errors if the join condition references a table that isn t selected yet logs error missing from clause entry for table foo bar position to reproduce create the following schema sql create table a id int primary key create table b id int primary key create table c id int primary key create the following question img width alt screenshot at src select a table a full join another table b to a left join another table c to b the sql query generated by metabase will be sql select public a id as id b id as b id c id as c id from public a left join public c c on b id c id full join public b b on public a id b id limit the joins have been rearranged resulting in the following error error missing from clause entry for table b position expected behavior the join order that specified in the editor should be retained information about your metabase installation browser info language de platform macintel useragent mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari vendor google inc system info file encoding utf java runtime name openjdk runtime environment java runtime version heroku java vendor oracle corporation java vendor url java version heroku java vm name openjdk bit server vm java vm version os name linux os version aws user language en user timezone etc utc metabase info databases postgres hosting env heroku application database postgres application database details database name postgresql version ubuntu jdbc driver name postgresql jdbc driver version run mode prod version tag date branch release x x hash settings report timezone europe berlin severity this prevents users who do not have experience or access to sql from building queries
1
182,193
14,907,077,217
IssuesEvent
2021-01-22 02:13:37
danaremar/e-eat
https://api.github.com/repos/danaremar/e-eat
closed
1002 - Realizar informes de aceptación de los entregables
documentation
Redactar registros con resultados, por los que el cliente formalizala aceptación de los entregables completados, ya sea en paralelo o con anterioridad al control de calidad
1.0
1002 - Realizar informes de aceptación de los entregables - Redactar registros con resultados, por los que el cliente formalizala aceptación de los entregables completados, ya sea en paralelo o con anterioridad al control de calidad
non_process
realizar informes de aceptación de los entregables redactar registros con resultados por los que el cliente formalizala aceptación de los entregables completados ya sea en paralelo o con anterioridad al control de calidad
0
8,493
11,648,944,483
IssuesEvent
2020-03-01 23:29:34
kubernetes/minikube
https://api.github.com/repos/kubernetes/minikube
closed
"lint" (`golangci-lint`) sometimes misses !
kind/process priority/important-longterm
our lint has some issues, it acts differently on different computers and sometimes misses things like this : https://github.com/kubernetes/minikube/pull/6712
1.0
"lint" (`golangci-lint`) sometimes misses ! - our lint has some issues, it acts differently on different computers and sometimes misses things like this : https://github.com/kubernetes/minikube/pull/6712
process
lint golangci lint sometimes misses our lint has some issues it acts differently on different computers and sometimes misses things like this
1
60,203
25,031,919,401
IssuesEvent
2022-11-04 13:07:11
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
deployment fails, ACR repo doesn't get created
container-service/svc triaged assigned-to-author doc-enhancement Pri1
Hello all, following this document I found the deployment fails with imagepool error. When I checked the ACR, I see the repository is not created. The kubelet error as below:- kubelet Failed to pull image "cicdacrak.azurecr.io/default:3": [rpc error: code = NotFound desc = failed to pull and unpack image "cicdacrak.azurecr.io/default:3": failed to resolve reference "cicdacrak.azurecr.io/default:3": cicdacrak.azurecr.io/default:3: not found, rpc error: code = Unknown desc = failed to pull and unpack image "cicdacrak.azurecr.io/default:3": failed to resolve reference "cicdacrak.azurecr.io/default:3": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized] So, is it something we have to create the repository manually in the ACR? Or do we have to provide any known image name instead of "default" in the section like nginx etc? <html> <body> <!--StartFragment--> Error: error: deployment "default" exceeded its progress deadlineDeploy stage • Deploy • Deploy to Kubernetes cluster |   |   -- | -- | --   | Rollout status check failed.Deploy stage • Deploy • Deploy to Kubernetes cluster <!--EndFragment--> </body> </html> --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 0717d289-c339-ed12-4d24-3d968121c75d * Version Independent ID: c20939c5-96b4-e4c5-6737-e3407d52f3de * Content: [Deploy to Azure Kubernetes Service with Azure Pipelines - Azure Kubernetes Service](https://learn.microsoft.com/en-us/azure/aks/devops-pipeline?pivots=pipelines-yaml) * Content Source: [articles/aks/devops-pipeline.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/devops-pipeline.md) * Service: **container-service** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
deployment fails, ACR repo doesn't get created - Hello all, following this document I found the deployment fails with imagepool error. When I checked the ACR, I see the repository is not created. The kubelet error as below:- kubelet Failed to pull image "cicdacrak.azurecr.io/default:3": [rpc error: code = NotFound desc = failed to pull and unpack image "cicdacrak.azurecr.io/default:3": failed to resolve reference "cicdacrak.azurecr.io/default:3": cicdacrak.azurecr.io/default:3: not found, rpc error: code = Unknown desc = failed to pull and unpack image "cicdacrak.azurecr.io/default:3": failed to resolve reference "cicdacrak.azurecr.io/default:3": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized] So, is it something we have to create the repository manually in the ACR? Or do we have to provide any known image name instead of "default" in the section like nginx etc? <html> <body> <!--StartFragment--> Error: error: deployment "default" exceeded its progress deadlineDeploy stage • Deploy • Deploy to Kubernetes cluster |   |   -- | -- | --   | Rollout status check failed.Deploy stage • Deploy • Deploy to Kubernetes cluster <!--EndFragment--> </body> </html> --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 0717d289-c339-ed12-4d24-3d968121c75d * Version Independent ID: c20939c5-96b4-e4c5-6737-e3407d52f3de * Content: [Deploy to Azure Kubernetes Service with Azure Pipelines - Azure Kubernetes Service](https://learn.microsoft.com/en-us/azure/aks/devops-pipeline?pivots=pipelines-yaml) * Content Source: [articles/aks/devops-pipeline.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/devops-pipeline.md) * Service: **container-service** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
non_process
deployment fails acr repo doesn t get created hello all following this document i found the deployment fails with imagepool error when i checked the acr i see the repository is not created the kubelet error as below kubelet failed to pull image cicdacrak azurecr io default so is it something we have to create the repository manually in the acr or do we have to provide any known image name instead of default in the section like nginx etc error error deployment default exceeded its progress deadlinedeploy stage • deploy • deploy to kubernetes cluster       rollout status check failed deploy stage • deploy • deploy to kubernetes cluster document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service container service github login juliakm microsoft alias jukullam
0
627,674
19,911,948,874
IssuesEvent
2022-01-25 18:03:52
exalearn/EXARL
https://api.github.com/repos/exalearn/EXARL
closed
Performance assessment plan
High Priority
# Background A goal for ExaRL is to design a performance assessment framework that can be used to track performance of the code over time and during its development. This can be a challenging task for ExaRL due to its reliance on TensorFlow, which is a complex framework that in turn relies on other complex GPU accelerated libraries like cuDNN. The standard approach of profiling the code as a 'black box' using a tool like NVIDIA Nsight Compute or Nsight Systems does not yield very useful results when the code relies heavily on TensorFlow, which launches millions or billions of GPU kernels in a typical run, and any of those kernels are highly tuned in libraries like NVIDIA cuDNN or Eigen. An example of this problem, drawn from a [TensorFlow example code for classifying images](https://www.tensorflow.org/tutorials/keras/classification), is shown below: <img width="2032" alt="Screen Shot 2020-10-29 at 11 30 07 AM" src="https://user-images.githubusercontent.com/979133/97616698-1e7cea80-19da-11eb-90ae-893d4081c5d3.png"> The GPU kernel statistics as reported by Nsight Systems are shown below: ``` Time(%) Total Time (ns) Instances Average Minimum Maximum Name ------- --------------- --------- ------- ------- ------- ---------------------------------------------------------------------------------------------------- 10.6 299116632 75000 3988.2 3071 7296 void tensorflow::functor::ApplyAdamKernel<float>(int, float*, float*, float*, float const*, float c? 10.6 298053918 19063 15635.2 13984 22879 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 5.8 163429786 19377 8434.2 7264 172734 void cutlass::Kernel<cutlass_80_tensorop_s1688gemm_64x64_16x6_nn_align1>(cutlass_80_tensorop_s1688g? 5.0 139689636 76252 1831.9 1631 4960 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 
4.8 135778809 19063 7122.6 7039 7712 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 4.7 131099423 19377 6765.7 6368 8255 void cutlass::Kernel<cutlass_80_tensorop_s1688gemm_64x64_16x6_nn_align4>(cutlass_80_tensorop_s1688g? 4.7 130793263 57189 2287.0 2047 3391 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 4.3 119677513 18750 6382.8 6303 6465 void cutlass::Kernel<cutlass_80_tensorop_s1688gemm_64x64_16x6_tn_align1>(cutlass_80_tensorop_s1688g? 3.9 110948802 18750 5917.3 5856 6016 ampere_sgemm_32x32_sliced1x4_nt 3.6 102520661 38754 2645.4 2273 9184 void splitKreduce_kernel<float, float, float, float>(cublasSplitKParams<float>, float const*, float? 3.6 99849301 18750 5325.3 5248 6848 void cutlass::Kernel<cutlass_80_tensorop_s1688gemm_64x64_16x6_nt_align4>(cutlass_80_tensorop_s1688g? 3.2 89521683 38754 2310.0 1791 3328 void tensorflow::BiasNHWCKernel<float>(int, float const*, float const*, float*, int) 2.8 79422299 37500 2117.9 2047 7680 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 2.7 75307469 38126 1975.2 1760 9056 void tensorflow::functor::BlockReduceKernel<float*, float*, 256, tensorflow::functor::Sum<float> >(? 2.4 66359582 38126 1740.5 1536 3232 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 2.3 65891986 37813 1742.6 1599 3520 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 2.3 64491357 38127 1691.5 1535 5536 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 2.1 57875293 19063 3036.0 2847 4320 void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap? 1.8 50012215 18750 2667.3 2560 2720 void tensorflow::functor::ColumnReduceKernel<float const*, float*, cub::Sum>(float const*, float*, ? 
    1.8         49789572      19063   2611.8     2303     2657  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.7         47636597      18750   2540.6     2495     2912  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.7         47416129      19377   2447.0     1856     2656  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.7         47301887      18750   2522.8     2368     9791  void tensorflow::functor::ColumnReduceMax16ColumnsKernel<float const*, float*, cub::Sum>(float cons?
    1.6         45674051      18750   2435.9     2399     3936  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.6         43850922      19063   2300.3     2080     5921  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.5         42487562      19377   2192.7     1888     2400  void tensorflow::functor::RowReduceKernel<float const*, float*, cub::Max>(float const*, float*, int?
    1.3         36163366      19063   1897.0     1727     3839  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.3         35739615      19063   1874.8     1664     2208  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.2         33284107      19063   1746.0     1599     2208  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.2         32991011      19063   1730.6     1567     2944  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.1         32314140      19063   1695.1     1536     8160  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    1.1         32281530      18750   1721.7     1695     3296  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.1          1603467        939   1707.6     1567     1921  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.0           942101        314   3000.3     2368     3552  void tensorflow::(anonymous namespace)::GenerateNormalizedProb<float, float, 4>(float const*, float?
    0.0           800308        314   2548.8     2144     2624  void tensorflow::functor::RowReduceKernel<cub::TransformInputIterator<float, tensorflow::(anonymous?
    0.0            51743         28   1848.0     1664     2592  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.0            27999         14   1999.9     1791     3040  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.0            20833          6   3472.2     3040     4544  void tensorflow::BiasGradNHWC_SharedAtomics<float>(int, float const*, float*, int)
    0.0            18368          1  18368.0    18368    18368  void tensorflow::concat_variable_kernel<float, int, true>(tensorflow::GpuDeviceArrayStruct<float co?
    0.0            11776          2   5888.0     5056     6720  void tensorflow::functor::FillPhiloxRandomKernelLaunch<tensorflow::random::UniformDistribution<tens?
    0.0             4896          2   2448.0     1952     2944  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.0             4703          2   2351.5     1983     2720  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.0             4128          2   2064.0     1920     2208  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
    0.0             1920          1   1920.0     1920     1920  void Eigen::internal::EigenMetaKernel<Eigen::TensorEvaluator<Eigen::TensorAssignOp<Eigen::TensorMap?
```

It is hardly obvious from this output what one should do to improve the performance of the code, so we must adjust this approach in order to collect actionable performance data about ExaRL. Most likely, we will need to use a combination of 'black box' profiling tools and domain-specific tools which have some awareness of what kinds of calculations the code is doing.

# Collecting performance data

We can use a combination of performance analysis tools to understand the overall performance characteristics of the code.
A few components and proposals are described below.

## Timers

These are simple to implement and can summarize strong and weak scaling behavior easily.

## MPI analysis

ExaRL uses MPI for inter-node communication. MPI performance is straightforward to measure, including characteristics like time spent at barriers, load imbalance, etc. Many tools can measure these quantities via sampling of each MPI task; this activity has low overhead, and can typically be used for even high-MPI-concurrency runs. Open source tools like [TAU](https://www.cs.uoregon.edu/research/tau/home.php) and [HPCToolkit](http://hpctoolkit.org) can be used for this, along with several other proprietary tools like [Arm MAP](https://www.arm.com/products/development-tools/server-and-hpc/forge/map).

So far we have been using TAU on the Cori GPU cluster at NERSC, with reasonably good results. The `jumpshot` GUI shows results like the following:

<img width="2032" alt="Screen Shot 2020-10-29 at 8 36 19 AM" src="https://user-images.githubusercontent.com/979133/97596337-d9e55500-19c1-11eb-91be-11a0a4ed16f6.png">

and the `pprof` analysis tool shows quantitative results:

```
FUNCTION SUMMARY (mean):
---------------------------------------------------------------------------------------
%Time    Exclusive    Inclusive       #Call      #Subrs   Inclusive Name
              msec   total msec                           usec/call
---------------------------------------------------------------------------------------
100.0      1564456      1832745           1     24624.8  1832744723 .TAU application
 14.4       263892       263892     3274.75           0       80584 MPI_Probe()
 14.4       263892       263892     3274.75           0       80584 MPI_Probe() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.1         1767         1767           1           0     1766844 MPI_Init()
  0.1         1255         1255           1           1     1255392 MPI_Finalize()
  0.0          412          412           1           0      412275 MPI_Comm_dup()
  0.0          412          412           1           0      412275 MPI_Comm_dup() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.0          364          364     3281.62           0         111 MPI_Send()
  0.0          364          364     3281.62           0         111 MPI_Send() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> <message send path id> = <0> ]
  0.0          295          295     3280.75           0          90 MPI_Recv()
  0.0          295          295     3274.75           0          90 MPI_Recv() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.0          205          205           2           0      102716 MPI_Bcast()
  0.0          205          205           2           0      102716 MPI_Bcast() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.0         73.4         73.4           1           1       73419 MPI_Comm_split()
  0.0         15.2         15.2     8200.62           0           2 MPI_Comm_rank()
  0.0         15.2         15.2     8198.62           0           2 MPI_Comm_rank() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.0         5.70         5.70     3282.75           0           2 MPI_Comm_get_attr()
  0.0         2.56         2.56     3274.75           0           1 MPI_Get_count()
  0.0        0.166        0.166         3.5           0          47 MPI_Recv() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0xffffffff84000002> ]
  0.0        0.165        0.165         2.5           0          66 MPI_Recv() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0xffffffff84000003> ]
  0.0        0.059        0.059           1           0          59 MPI_Barrier()
  0.0        0.059        0.059           1           0          59 MPI_Barrier() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.0        0.020        0.020           3           0           7 MPI_Comm_test_inter()
  0.0        0.020        0.020           3           0           7 MPI_Comm_test_inter() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0x44000000> ]
  0.0        0.007        0.016           1           2          16 MPI_Comm_delete_attr()
  0.0        0.014        0.014         1.5           0          10 MPI_Comm_rank() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0xffffffff84000002> ]
  0.0        0.011        0.011           3           0           4 MPI_Comm_set_attr()
  0.0        0.009        0.009           1           0           9 MPI_Comm_free()
  0.0        0.006        0.006           3           0           2 MPI_Comm_set_errhandler()
  0.0        0.006        0.006         0.5           0          11 MPI_Comm_rank() [ <comm> = <ranks: 0, 1, 2, 3, 4, 5, 6, 7> <addr=0xffffffff84000003> ]
  0.0        0.005        0.005           4           0           1 MPI_Finalized()
  0.0        0.005        0.005         7.5           0           1 MPI_Comm_size()
  0.0        0.003        0.003           2           0           1 MPI_Comm_create_keyval()
  0.0        0.002        0.002           2           0           1 MPI_Comm_free_keyval()
```

## GPU performance

Nsight Systems can be useful for understanding data movement between CPU <-> GPU during the run, and also for gaining insight into any GPU kernels which are not part of TensorFlow.
It also uses sampling to collect data, and thus can be used as a 'black box' profiling tool with relatively low overhead. Since a typical ExaRL calculation is quite long, it is best to disable CPU sampling in Nsight Systems by adding the `-s none` flag; otherwise, the resulting profile will be enormous and will take hours to process.

An example profile using Nsight Systems is shown below. In this case, it looks like the majority of GPU activity is spent executing TDLG kernels, in which case Nsight Compute could possibly be used to improve performance.

<img width="2032" alt="Screen Shot 2020-10-29 at 9 23 03 AM" src="https://user-images.githubusercontent.com/979133/97612954-4ae23800-19d5-11eb-93ba-a0c6ebd61148.png">

Nsight Compute can be useful for tuning hand-written kernels, like LibTDLG, but it is much less useful when the runtime is dominated by TensorFlow activity.

## TensorFlow performance

TensorFlow includes its own [profiling framework](https://www.tensorflow.org/guide/profiler) which, unlike 'black box' profiling tools, has significant domain-specific awareness about what the calculation is doing. It is likely we will need to rely on this to supplement the above tools if the goal is to improve the performance of the TensorFlow portions of ExaRL.

# Representing and tracking performance data

It will be useful to track the performance characteristics of ExaRL as the code base develops. One way to do this is to integrate ExaRL into the ECP GitLab continuous integration infrastructure which is already available at NERSC. We could configure a GitLab runner to launch once per day, check out the latest version of the code, run it, and store the performance characteristics of the code in a file or database that is then visualized on a website. The website can be hosted in [Spin](https://www.nersc.gov/systems/spin/).
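The application-side plumbing for such a nightly runner can be very small: each run appends one machine-readable record to a history file, and the website plots the fields over time. The sketch below is a minimal illustration of that idea; the file name, field names, and `record_run_metrics` helper are assumptions for illustration, not an existing ExaRL interface.

```python
import json
import subprocess
import time


def _current_commit():
    """Short git hash of the checked-out code, or 'unknown' outside a repo."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"


def record_run_metrics(metrics, history_file="perf_history.jsonl"):
    """Append one timestamped record per run to a JSON-lines history file."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "commit": _current_commit(),
        **metrics,  # e.g. total wall time, fraction of time in MPI, ...
    }
    with open(history_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

A plotting page then only needs to read the file line by line with `json.loads` and draw each numeric field against `commit` or `timestamp`.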
Some characteristics like execution time, or fraction of time spent in MPI, can be represented simply with timings, and plotted on a graph as a function of git commit or date, like [here](https://ccse.lbl.gov/pub/RegressionTesting/Castro/Dust-3d-M-timings.png). Other characteristics which may have more complex information, like the output from Nsight Systems or Nsight Compute or TensorFlow Profiler, may require a different approach for visualizing the data as a function of time.
process
flaky test port is already in use link to dashboard or circleci failure all the choose a browser tests failed in this run and link to failing test in github analysis looks like the port is already in use img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
1
280,755
21,315,151,826
IssuesEvent
2022-04-16 06:22:25
MontyPython28/pe
https://api.github.com/repos/MontyPython28/pe
opened
Typo in description of add command
severity.VeryLow type.DocumentationBug
Minor flaw: when I type 'add' to simply find out the functionality of the add command in uMessage, a random 'null' value comes up before the functionality ![Screenshot (92).png](https://raw.githubusercontent.com/MontyPython28/pe/main/files/23c65abb-2404-4415-bd78-a589a1581a0f.png) <!--session: 1650087565899-7f378706-2180-4f99-9d06-7c0e20cf3e13--> <!--Version: Web v3.4.2-->
1.0
Typo in description of add command - Minor flaw: when I type 'add' to simply find out the functionality of the add command in uMessage, a random 'null' value comes up before the functionality ![Screenshot (92).png](https://raw.githubusercontent.com/MontyPython28/pe/main/files/23c65abb-2404-4415-bd78-a589a1581a0f.png) <!--session: 1650087565899-7f378706-2180-4f99-9d06-7c0e20cf3e13--> <!--Version: Web v3.4.2-->
non_process
typo in description of add command minor flaw when i type add to simply find out the functionality of the add command in umessage a random null value comes up before the functionality
0
394,141
27,021,901,182
IssuesEvent
2023-02-11 04:49:47
dotnetcore/BootstrapBlazor
https://api.github.com/repos/dotnetcore/BootstrapBlazor
closed
doc: update the Labels demos
documentation
### Document describing which component update to new code block mode
1.0
doc: update the Labels demos - ### Document describing which component update to new code block mode
non_process
doc update the labels demos document describing which component update to new code block mode
0
9,271
12,301,359,385
IssuesEvent
2020-05-11 15:17:57
jyn514/rcc
https://api.github.com/repos/jyn514/rcc
opened
[ICE] mutually recursive cycles in function macros are not detected
ICE fuzz preprocessor
### Code <!-- The code that caused the panic goes here. This should also include the error message you got. --> ```c #define a(c) b(c) #define b(c) a(c) a(1) thread 'main' has overflowed its stack fatal runtime error: stack overflow Aborted (core dumped) ``` ### Expected behavior <!-- A description of what you expected to happen. If you're not sure (e.g. this is invalid code), paste the output of another compiler (I like `clang -x c - -Wall -Wextra -pedantic`) --> <details><summary>Backtrace</summary> <!-- The output of `RUST_BACKTRACE=1 cargo run` goes here. --> ``` Program received signal SIGSEGV, Segmentation fault. 0x0000555555aea75a in rcc::lex::cpp::PreProcessor::replace_function ( self=0x6f5ab4c845d5be19, name=<error reading variable: Cannot access memory at address 0x7fffff7fee38>, start=<error reading variable: Cannot access memory at address 0x7fffff7fee3c>) at src/lex/cpp.rs:709 709 fn replace_function(&mut self, name: InternedStr, start: u32) -> Option<CppResult<Token>> { (gdb) where #0 0x0000555555aea75a in rcc::lex::cpp::PreProcessor::replace_function ( self=0x6f5ab4c845d5be19, name=<error reading variable: Cannot access memory at address 0x7fffff7fee38>, start=<error reading variable: Cannot access memory at address 0x7fffff7fee3c>) at src/lex/cpp.rs:709 #1 0x0000555555aea549 in rcc::lex::cpp::PreProcessor::replace_id ( self=0x7fffffff9740, name=..., location=...) at src/lex/cpp.rs:707 #2 0x0000555555ae73bc in rcc::lex::cpp::PreProcessor::handle_token ( self=0x7fffffff9740, token=..., location=...) at src/lex/cpp.rs:354 #3 0x0000555555aeb6e8 in rcc::lex::cpp::PreProcessor::replace_function ( self=0x7fffffff9740, name=..., start=41) at src/lex/cpp.rs:789 #4 0x0000555555aea549 in rcc::lex::cpp::PreProcessor::replace_id ( self=0x7fffffff9740, name=..., location=...) at src/lex/cpp.rs:707 ``` </details> Similar to #399.
1.0
[ICE] mutually recursive cycles in function macros are not detected - ### Code <!-- The code that caused the panic goes here. This should also include the error message you got. --> ```c #define a(c) b(c) #define b(c) a(c) a(1) thread 'main' has overflowed its stack fatal runtime error: stack overflow Aborted (core dumped) ``` ### Expected behavior <!-- A description of what you expected to happen. If you're not sure (e.g. this is invalid code), paste the output of another compiler (I like `clang -x c - -Wall -Wextra -pedantic`) --> <details><summary>Backtrace</summary> <!-- The output of `RUST_BACKTRACE=1 cargo run` goes here. --> ``` Program received signal SIGSEGV, Segmentation fault. 0x0000555555aea75a in rcc::lex::cpp::PreProcessor::replace_function ( self=0x6f5ab4c845d5be19, name=<error reading variable: Cannot access memory at address 0x7fffff7fee38>, start=<error reading variable: Cannot access memory at address 0x7fffff7fee3c>) at src/lex/cpp.rs:709 709 fn replace_function(&mut self, name: InternedStr, start: u32) -> Option<CppResult<Token>> { (gdb) where #0 0x0000555555aea75a in rcc::lex::cpp::PreProcessor::replace_function ( self=0x6f5ab4c845d5be19, name=<error reading variable: Cannot access memory at address 0x7fffff7fee38>, start=<error reading variable: Cannot access memory at address 0x7fffff7fee3c>) at src/lex/cpp.rs:709 #1 0x0000555555aea549 in rcc::lex::cpp::PreProcessor::replace_id ( self=0x7fffffff9740, name=..., location=...) at src/lex/cpp.rs:707 #2 0x0000555555ae73bc in rcc::lex::cpp::PreProcessor::handle_token ( self=0x7fffffff9740, token=..., location=...) at src/lex/cpp.rs:354 #3 0x0000555555aeb6e8 in rcc::lex::cpp::PreProcessor::replace_function ( self=0x7fffffff9740, name=..., start=41) at src/lex/cpp.rs:789 #4 0x0000555555aea549 in rcc::lex::cpp::PreProcessor::replace_id ( self=0x7fffffff9740, name=..., location=...) at src/lex/cpp.rs:707 ``` </details> Similar to #399.
process
mutually recursive cycles in function macros are not detected code the code that caused the panic goes here this should also include the error message you got c define a c b c define b c a c a thread main has overflowed its stack fatal runtime error stack overflow aborted core dumped expected behavior a description of what you expected to happen if you re not sure e g this is invalid code paste the output of another compiler i like clang x c wall wextra pedantic backtrace program received signal sigsegv segmentation fault in rcc lex cpp preprocessor replace function self name start at src lex cpp rs fn replace function mut self name internedstr start option gdb where in rcc lex cpp preprocessor replace function self name start at src lex cpp rs in rcc lex cpp preprocessor replace id self name location at src lex cpp rs in rcc lex cpp preprocessor handle token self token location at src lex cpp rs in rcc lex cpp preprocessor replace function self name start at src lex cpp rs in rcc lex cpp preprocessor replace id self name location at src lex cpp rs similar to
1
363,311
10,741,056,179
IssuesEvent
2019-10-29 19:26:15
eriq-augustine/test-issue-copy
https://api.github.com/repos/eriq-augustine/test-issue-copy
opened
[CLOSED] TrainingMap Optimization
Components - Optimization Difficulty - Medium Priority - Normal
<a href="https://github.com/eriq-augustine"><img src="https://avatars0.githubusercontent.com/u/337857?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [eriq-augustine](https://github.com/eriq-augustine)** _Monday Feb 27, 2017 at 21:38 GMT_ _Originally opened as https://github.com/eriq-augustine/psl/issues/52_ ---- Looks like TrainingMap can take advantage of the same optimization that PersistedAtom manager got in f8959e8488d7fa26c19d5de5ece923168cdc5d69. Also take a look for any other places that can use this optimization.
1.0
[CLOSED] TrainingMap Optimization - <a href="https://github.com/eriq-augustine"><img src="https://avatars0.githubusercontent.com/u/337857?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [eriq-augustine](https://github.com/eriq-augustine)** _Monday Feb 27, 2017 at 21:38 GMT_ _Originally opened as https://github.com/eriq-augustine/psl/issues/52_ ---- Looks like TrainingMap can take advantage of the same optimization that PersistedAtom manager got in f8959e8488d7fa26c19d5de5ece923168cdc5d69. Also take a look for any other places that can use this optimization.
non_process
trainingmap optimization issue by monday feb at gmt originally opened as looks like trainingmap can take advantage of the same optimization that persistedatom manager got in also take a look for any other places that can use this optimization
0
146,352
19,402,161,558
IssuesEvent
2021-12-19 11:32:28
sultanabubaker/octopus-master
https://api.github.com/repos/sultanabubaker/octopus-master
opened
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz, minimist-0.0.10.tgz
security vulnerability
## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-0.0.10.tgz</b></p></summary> <p> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /start-npm-tasks/package.json</p> <p>Path to vulnerable library: /start-npm-tasks/node_modules/mocha/node_modules/minimist/package.json,/start-modules-tasks/node_modules/mocha/node_modules/minimist/package.json,/start-preset-depcheck/node_modules/mocha/node_modules/minimist/package.json,/start-preset-dependencies/node_modules/mocha/node_modules/minimist/package.json,/start-preset-prepush/node_modules/mocha/node_modules/minimist/package.json,/start-reporter/node_modules/mocha/node_modules/minimist/package.json,/start-preset-idea/node_modules/mocha/node_modules/minimist/package.json,/modules/node_modules/mocha/node_modules/minimist/package.json,/start-tasks/node_modules/mocha/node_modules/minimist/package.json,/start-git/node_modules/mocha/node_modules/minimist/package.json,/test-utils/node_modules/mocha/node_modules/minimist/package.json,/start-preset-modules/node_modules/mocha/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - mocha-3.4.1.tgz (Root Library) - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-0.0.10.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p> <p>Path to dependency file: /start-preset-idea/package.json</p> <p>Path to vulnerable library: 
/start-preset-idea/node_modules/minimist/package.json,/node_modules/optimist/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - handlebars-4.0.10.tgz (Root Library) - optimist-0.6.1.tgz - :x: **minimist-0.0.10.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/sultanabubaker/octopus-master/commit/fe409ca2bb6102addf56e0caf3b48bb9726d71f3">fe409ca2bb6102addf56e0caf3b48bb9726d71f3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: minimist - 0.2.1,1.2.3</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.8","packageFilePaths":["/start-npm-tasks/package.json"],"isTransitiveDependency":true,"dependencyTree":"mocha:3.4.1;mkdirp:0.5.1;minimist:0.0.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3","isBinary":false},{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.10","packageFilePaths":["/start-preset-idea/package.json"],"isTransitiveDependency":true,"dependencyTree":"handlebars:4.0.10;optimist:0.6.1;minimist:0.0.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7598","vulnerabilityDetails":"minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a \"constructor\" or \"__proto__\" payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz, minimist-0.0.10.tgz - ## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-0.0.10.tgz</b></p></summary> <p> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /start-npm-tasks/package.json</p> <p>Path to vulnerable library: /start-npm-tasks/node_modules/mocha/node_modules/minimist/package.json,/start-modules-tasks/node_modules/mocha/node_modules/minimist/package.json,/start-preset-depcheck/node_modules/mocha/node_modules/minimist/package.json,/start-preset-dependencies/node_modules/mocha/node_modules/minimist/package.json,/start-preset-prepush/node_modules/mocha/node_modules/minimist/package.json,/start-reporter/node_modules/mocha/node_modules/minimist/package.json,/start-preset-idea/node_modules/mocha/node_modules/minimist/package.json,/modules/node_modules/mocha/node_modules/minimist/package.json,/start-tasks/node_modules/mocha/node_modules/minimist/package.json,/start-git/node_modules/mocha/node_modules/minimist/package.json,/test-utils/node_modules/mocha/node_modules/minimist/package.json,/start-preset-modules/node_modules/mocha/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - mocha-3.4.1.tgz (Root Library) - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-0.0.10.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p> <p>Path to dependency file: /start-preset-idea/package.json</p> 
<p>Path to vulnerable library: /start-preset-idea/node_modules/minimist/package.json,/node_modules/optimist/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - handlebars-4.0.10.tgz (Root Library) - optimist-0.6.1.tgz - :x: **minimist-0.0.10.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/sultanabubaker/octopus-master/commit/fe409ca2bb6102addf56e0caf3b48bb9726d71f3">fe409ca2bb6102addf56e0caf3b48bb9726d71f3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: minimist - 0.2.1,1.2.3</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.8","packageFilePaths":["/start-npm-tasks/package.json"],"isTransitiveDependency":true,"dependencyTree":"mocha:3.4.1;mkdirp:0.5.1;minimist:0.0.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3","isBinary":false},{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.10","packageFilePaths":["/start-preset-idea/package.json"],"isTransitiveDependency":true,"dependencyTree":"handlebars:4.0.10;optimist:0.6.1;minimist:0.0.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7598","vulnerabilityDetails":"minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a \"constructor\" or \"__proto__\" payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in minimist tgz minimist tgz cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file start npm tasks package json path to vulnerable library start npm tasks node modules mocha node modules minimist package json start modules tasks node modules mocha node modules minimist package json start preset depcheck node modules mocha node modules minimist package json start preset dependencies node modules mocha node modules minimist package json start preset prepush node modules mocha node modules minimist package json start reporter node modules mocha node modules minimist package json start preset idea node modules mocha node modules minimist package json modules node modules mocha node modules minimist package json start tasks node modules mocha node modules minimist package json start git node modules mocha node modules minimist package json test utils node modules mocha node modules minimist package json start preset modules node modules mocha node modules minimist package json dependency hierarchy mocha tgz root library mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file start preset idea package json path to vulnerable library start preset idea node modules minimist package json node modules optimist node modules minimist package json dependency hierarchy handlebars tgz root library optimist tgz x minimist tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low 
integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree mocha mkdirp minimist isminimumfixversionavailable true minimumfixversion minimist isbinary false packagetype javascript node js packagename minimist packageversion packagefilepaths istransitivedependency true dependencytree handlebars optimist minimist isminimumfixversionavailable true minimumfixversion minimist isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload vulnerabilityurl
0
705
2,502,204,717
IssuesEvent
2015-01-09 05:12:21
AgonyAunt/Sigil-City-of-Doors-PW
https://api.github.com/repos/AgonyAunt/Sigil-City-of-Doors-PW
closed
Error in learning spell from scroll
Bug Low Priority
The Resist Elements scroll is named "Resist Elements", but the associated learned spell is named "Resist Energy".
1.0
Error in learning spell from scroll - The Resist Elements scroll is named "Resist Elements", but the associated learned spell is named "Resist Energy".
non_process
error in learning spell from scroll the resist elements scroll is named resist elements but the associated learned spell is named resist energy
0
3,560
6,598,822,231
IssuesEvent
2017-09-16 11:31:00
PHPSocialNetwork/phpfastcache
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
closed
Missing cache items until page reload.
6.0 :] Not a bug [-_-] In Process
### Configuration: PhpFastCache version: 6.0.4 PHP version: 5.6.31 Operating system: Windows 10 #### Issue description: I'm using Server-Sent Events, to print messages for user. In infinite loop, every 10 seconds I check if there is any new item in cache to broadcast: $messages_to_broadcast = $this->_cache->getItemsByTag('inbox_message'); foreach ($messages_to_broadcast as $key => $_message) { $_message = $_message->get(); if($_message->recipient == $this->_user_id || $_message->recipient == 0){ if(!is_null($html = \CRM\Engine\MessagingService::getMessageToBroadcast($_message))){ echo "event: $_message->type \n"; echo "data:{\n"; echo "data:\"message_html\": \"$html\" \n"; echo "data:}\n\n"; $this->send_keepalive = false; $this->_cache->deleteItem($key); } } } At irregular intervals, there is event, which save message to cache: $_cache_this = self::$_cache->getItem("message_".$_message->id); if(is_null($_cache_this->get())){ $_cache_this->set($_message) ->expiresAfter(600) ->addTag('inbox_message'); self::$_cache->save($_cache_this); } The problem is that while I check in infinite loop for new items in cache, I get empty array. When I reload page, or browser reconnect to Server Side Events stream, item appears in cache. Is there any `flush` method I'm missing here? I'm using `files` as cache method. \phpFastCache\CacheManager::setDefaultConfig(array( "path" => DIR_TMP )); global $cache; $cache = \phpFastCache\CacheManager::getInstance('files');
1.0
Missing cache items until page reload. - ### Configuration: PhpFastCache version: 6.0.4 PHP version: 5.6.31 Operating system: Windows 10 #### Issue description: I'm using Server-Sent Events, to print messages for user. In infinite loop, every 10 seconds I check if there is any new item in cache to broadcast: $messages_to_broadcast = $this->_cache->getItemsByTag('inbox_message'); foreach ($messages_to_broadcast as $key => $_message) { $_message = $_message->get(); if($_message->recipient == $this->_user_id || $_message->recipient == 0){ if(!is_null($html = \CRM\Engine\MessagingService::getMessageToBroadcast($_message))){ echo "event: $_message->type \n"; echo "data:{\n"; echo "data:\"message_html\": \"$html\" \n"; echo "data:}\n\n"; $this->send_keepalive = false; $this->_cache->deleteItem($key); } } } At irregular intervals, there is event, which save message to cache: $_cache_this = self::$_cache->getItem("message_".$_message->id); if(is_null($_cache_this->get())){ $_cache_this->set($_message) ->expiresAfter(600) ->addTag('inbox_message'); self::$_cache->save($_cache_this); } The problem is that while I check in infinite loop for new items in cache, I get empty array. When I reload page, or browser reconnect to Server Side Events stream, item appears in cache. Is there any `flush` method I'm missing here? I'm using `files` as cache method. \phpFastCache\CacheManager::setDefaultConfig(array( "path" => DIR_TMP )); global $cache; $cache = \phpFastCache\CacheManager::getInstance('files');
process
missing cache items until page reload configuration phpfastcache version php version operating system windows issue description i m using server sent events to print messages for user in infinite loop every seconds i check if there is any new item in cache to broadcast messages to broadcast this cache getitemsbytag inbox message foreach messages to broadcast as key message message message get if message recipient this user id message recipient if is null html crm engine messagingservice getmessagetobroadcast message echo event message type n echo data n echo data message html html n echo data n n this send keepalive false this cache deleteitem key at irregular intervals there is event which save message to cache cache this self cache getitem message message id if is null cache this get cache this set message expiresafter addtag inbox message self cache save cache this the problem is that while i check in infinite loop for new items in cache i get empty array when i reload page or browser reconnect to server side events stream item appears in cache is there any flush method i m missing here i m using files as cache method phpfastcache cachemanager setdefaultconfig array path dir tmp global cache cache phpfastcache cachemanager getinstance files
1
2,690
5,540,128,531
IssuesEvent
2017-03-22 09:15:32
Hurence/logisland
https://api.github.com/repos/Hurence/logisland
closed
Support for SMTP/Mailer Processor
feature processor security
Even if records considered as alerts/notification can be stored in a backend and handled by a console, I think it would be nice to have a mailer processor that allows to send emails as notification. For instance, any other processor generating an alert event could post a record to a Kafka 'alert_email' topic on which the MailerProcessor would listen. --------------------------------------------------------------- Simplest processor configuration and record example: Mailer Processor configuration properties example: smtp.server = smtp.mydomain.com smtp.port = 25 smtp.emails = alerts@mydomain.com In order to generate a mail sending, an input record for this processor would be required to have the following expected fields (for instance): record_type = alert alert_msg = "An attempt to crack password for user 'root' on host 'sensible_host' was detected from ip '172.17.0.5'" --------------------------------------------------------------- More complex processor configuration and record example: Mailer Processor configuration properties example: smtp.server = smtp.mydomain.com smtp.port = 25 smtp.emails = alerts@mydomain.com, soc@mydomain.com smtp.emails.cc = cc@mydomain.com smtp.emails.bcc = bcc@mydomain.com smtp.emails.allow_overwrite = true (if true, use emails in record specific fields if present) smtp.conn.timeout = 1s smtp.conn.retry = 2 smtp.subject.default = "Alert from LogIsland security system" smtp.security.username = xxx (authentication towards SMTP server) smtp.security.password = xxx smtp.security.ssl = true smtp.security.keystore = /path/to/keystore/for/authentication/against/smtp/server smtp.security.truststore = /path/to/truststore/for/authentication/of/the/smtp/server .... 
In order to generate a mail sending, an input record for this processor would be required to have the following expected fields (for instance): record_type = alert alert_emails = special@mydomain.com alert_subject = "[LogIsland] SSH bruteforce password cracking attempt" alert_msg = "An attempt to crack password for user 'root' on host 'sensible_host' was detected from ip '172.17.0.5'"
1.0
Support for SMTP/Mailer Processor - Even if records considered as alerts/notification can be stored in a backend and handled by a console, I think it would be nice to have a mailer processor that allows to send emails as notification. For instance, any other processor generating an alert event could post a record to a Kafka 'alert_email' topic on which the MailerProcessor would listen. --------------------------------------------------------------- Simplest processor configuration and record example: Mailer Processor configuration properties example: smtp.server = smtp.mydomain.com smtp.port = 25 smtp.emails = alerts@mydomain.com In order to generate a mail sending, an input record for this processor would be required to have the following expected fields (for instance): record_type = alert alert_msg = "An attempt to crack password for user 'root' on host 'sensible_host' was detected from ip '172.17.0.5'" --------------------------------------------------------------- More complex processor configuration and record example: Mailer Processor configuration properties example: smtp.server = smtp.mydomain.com smtp.port = 25 smtp.emails = alerts@mydomain.com, soc@mydomain.com smtp.emails.cc = cc@mydomain.com smtp.emails.bcc = bcc@mydomain.com smtp.emails.allow_overwrite = true (if true, use emails in record specific fields if present) smtp.conn.timeout = 1s smtp.conn.retry = 2 smtp.subject.default = "Alert from LogIsland security system" smtp.security.username = xxx (authentication towards SMTP server) smtp.security.password = xxx smtp.security.ssl = true smtp.security.keystore = /path/to/keystore/for/authentication/against/smtp/server smtp.security.truststore = /path/to/truststore/for/authentication/of/the/smtp/server ....
In order to generate a mail sending, an input record for this processor would be required to have the following expected fields (for instance): record_type = alert alert_emails = special@mydomain.com alert_subject = "[LogIsland] SSH bruteforce password cracking attempt" alert_msg = "An attempt to crack password for user 'root' on host 'sensible_host' was detected from ip '172.17.0.5'"
process
support for smtp mailer processor even if records considered as alerts notification can be stored in a backend and handled by a console i think it would be nice to have a mailer processor that allows to send emails as notification for instance any other processor generating an alert event could post a record to a kafka alert email topic on which the mailerprocessor would listen simplest processor configuration and record example mailer processor configuration properties example smtp server smtp mydomain com smtp port smtp emails alerts mydomain com in order to generate a mail sending an input record for this processor would be required to have the following expected fields for instance record type alert alert msg an attempt to crack password for user root on host sensible host was detected from ip more complex processor configuration and record example mailer processor configuration properties example smtp server smtp mydomain com smtp port smtp emails alerts mydomain com soc mydomain com smtp emails cc cc mydomain com smtp emails bcc bcc mydomain com smtp emails allow overwrite true if true use emails in record specific fields if present smtp conn timeout smtp conn retry smtp subject default alert from logisland security system smtp security username xxx authentication towards smtp server smtp security password xxx smtp security ssl true smtp security keystore path to keystore for authentication against smtp server smtp security truststore path to truststore for authentication of the smtp server in order to generate a mail sending an input record for this processor would be required to have the following expected fields for instance record type alert alert emails special mydomain com alert subject ssh bruteforce password cracking attempt alert msg an attempt to crack password for user root on host sensible host was detected from ip
1
258,481
27,564,125,796
IssuesEvent
2023-03-08 01:30:06
praneethpanasala/linux
https://api.github.com/repos/praneethpanasala/linux
opened
CVE-2023-1095 (Medium) detected in linuxlinux-4.19.6
Mend: dependency security vulnerability
## CVE-2023-1095 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In nf_tables_updtable, if nf_tables_table_enable returns an error, nft_trans_destroy is called to free the transaction object. nft_trans_destroy() calls list_del(), but the transaction was never placed on a list -- the list head is all zeroes, this results in a NULL pointer dereference.
<p>Publish Date: 2023-02-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1095>CVE-2023-1095</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1095">https://www.linuxkernelcves.com/cves/CVE-2023-1095</a></p> <p>Release Date: 2023-02-28</p> <p>Fix Resolution: v4.9.326,v4.14.291,v4.19.256,v5.4.211,v5.10.137,v5.15.61,v5.18.18</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-1095 (Medium) detected in linuxlinux-4.19.6 - ## CVE-2023-1095 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In nf_tables_updtable, if nf_tables_table_enable returns an error, nft_trans_destroy is called to free the transaction object. nft_trans_destroy() calls list_del(), but the transaction was never placed on a list -- the list head is all zeroes, this results in a NULL pointer dereference.
<p>Publish Date: 2023-02-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1095>CVE-2023-1095</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1095">https://www.linuxkernelcves.com/cves/CVE-2023-1095</a></p> <p>Release Date: 2023-02-28</p> <p>Fix Resolution: v4.9.326,v4.14.291,v4.19.256,v5.4.211,v5.10.137,v5.15.61,v5.18.18</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files net netfilter nf tables api c net netfilter nf tables api c vulnerability details in nf tables updtable if nf tables table enable returns an error nft trans destroy is called to free the transaction object nft trans destroy calls list del but the transaction was never placed on a list the list head is all zeroes this results in a null pointer dereference publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
76,173
14,581,313,809
IssuesEvent
2020-12-18 10:31:44
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
End user opinion—com_categories needs enhancement
New Feature No Code Attached Yet
Joomla! is a fantastic CMS that is quickly becoming a personal to enterprise level publishing and distribution system. The enhancement requested here will not only streamline and simplify joomla management by end users, but will also create a higher market share / customer base. Joomla needs a set of options in com_categories where upon creating a category, joomla will create and associate an images/media folder (possibly both) for that category as named. Having this option/set of options available will help end users in news agencies, newspapers, and other enterprise level operations simplify content management and keep site mapping much more automated and organized. For many end users, it’s too easy to submit articles that are all over the place—causing joomla to process more files and db fields during operations. Having this option available should make things easier to create a well organized and simpler site management.
1.0
End user opinion—com_categories needs enhancement - Joomla! is a fantastic CMS that is quickly becoming a personal to enterprise level publishing and distribution system. The enhancement requested here will not only streamline and simplify joomla management by end users, but will also create a higher market share / customer base. Joomla needs a set of options in com_categories where upon creating a category, joomla will create and associate an images/media folder (possibly both) for that category as named. Having this option/set of options available will help end users in news agencies, newspapers, and other enterprise level operations simplify content management and keep site mapping much more automated and organized. For many end users, it’s too easy to submit articles that are all over the place—causing joomla to process more files and db fields during operations. Having this option available should make things easier to create a well organized and simpler site management.
non_process
end user opinion—com categories needs enhancement joomla is a fantastic cms that is quickly becoming a personal to enterprise level publishing and distribution system the enhancement requested here will not only streamline and simplify joomla management by end users but will also create a higher market share customer base joomla needs a set of options in com categories where upon creating a category joomla will create and associate an images media folder possibly both for that category as named having this option set of options available will help end users in news agencies newspapers and other enterprise level operations simplify content management and keep site mapping much more automated and organized for many end users it’s too easy to submit articles that are all over the place—causing joomla to process more files and db fields during operations having this option available should make things easier to create a well organized and simpler site management
0
67,124
16,821,944,439
IssuesEvent
2021-06-17 14:03:48
microsoft/appcenter
https://api.github.com/repos/microsoft/appcenter
closed
Add an option that the build should fail if any custom build scripts have failed or wasn't called
build feature request
**Describe the solution you'd like** Add an option that the build should fail if any custom build scripts have failed or wasn't called. **Additional context** Recently in a Xamarin project i had an issue that the pre-build script hasn't be called and/or hasn't been detected. Still the build was successful and a release was done with the staging environment instead of production. My issue was resolved now by adding the build scripts in three paths .sln, Android csproj and iOS csproj. Still i think its important that if Appcenter detects those scripts and they do not run for whatever reason or fail then the hole build should fail as well.
1.0
Add an option that the build should fail if any custom build scripts have failed or wasn't called - **Describe the solution you'd like** Add an option that the build should fail if any custom build scripts have failed or wasn't called. **Additional context** Recently in a Xamarin project i had an issue that the pre-build script hasn't be called and/or hasn't been detected. Still the build was successful and a release was done with the staging environment instead of production. My issue was resolved now by adding the build scripts in three paths .sln, Android csproj and iOS csproj. Still i think its important that if Appcenter detects those scripts and they do not run for whatever reason or fail then the hole build should fail as well.
non_process
add an option that the build should fail if any custom build scripts have failed or wasn t called describe the solution you d like add an option that the build should fail if any custom build scripts have failed or wasn t called additional context recently in a xamarin project i had an issue that the pre build script hasn t be called and or hasn t been detected still the build was successful and a release was done with the staging environment instead of production my issue was resolved now by adding the build scripts in three paths sln android csproj and ios csproj still i think its important that if appcenter detects those scripts and they do not run for whatever reason or fail then the hole build should fail as well
0
3,223
6,283,230,262
IssuesEvent
2017-07-19 02:29:17
gaocegege/Processing.R
https://api.github.com/repos/gaocegege/Processing.R
closed
Implement complete print logic
community/processing community/renjin priority/p0 size/no-idea status/WIP type/enhancement
Now we have limited support for print statements, we should do the hack just like built-in static functions.
1.0
Implement complete print logic - Now we have limited support for print statements, we should do the hack just like built-in static functions.
process
implement complete print logic now we have limited support for print statements we should do the hack just like built in static functions
1
64,889
16,062,810,331
IssuesEvent
2021-04-23 14:43:57
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[SB] Notifications > Server level automated notifications from Study Builder are not triggered and not shown in notifications list
Bug P1 Process: Fixed Process: Tested dev Study builder
**Steps:** 1. Trigger server level automated notifications eg. Create new study/ Pause a study/ Resume a study/ Deactivate a study / Create new activity etc. 2. Observe the same in mobile notifications list and observe push notification is not triggered **Actual:** 1. Push notifications are not triggered for Server level automated notifications 2. Notifications are not shown in the notifications screen in mobile **Expected:** 1. Push notifications should trigger for Server level automated notifications 2. Notifications should be shown in the notifications screen in mobile Note: Issue not observed for manually created App level/study level notifications
1.0
[SB] Notifications > Server level automated notifications from Study Builder are not triggered and not shown in notifications list - **Steps:** 1. Trigger server level automated notifications eg. Create new study/ Pause a study/ Resume a study/ Deactivate a study / Create new activity etc. 2. Observe the same in mobile notifications list and observe push notification is not triggered **Actual:** 1. Push notifications are not triggered for Server level automated notifications 2. Notifications are not shown in the notifications screen in mobile **Expected:** 1. Push notifications should trigger for Server level automated notifications 2. Notifications should be shown in the notifications screen in mobile Note: Issue not observed for manually created App level/study level notifications
non_process
notifications server level automated notifications from study builder are not triggered and not shown in notifications list steps trigger server level automated notifications eg create new study pause a study resume a study deactivate a study create new activity etc observe the same in mobile notifications list and observe push notification is not triggered actual push notifications are not triggered for server level automated notifications notifications are not shown in the notifications screen in mobile expected push notifications should trigger for server level automated notifications notifications should be shown in the notifications screen in mobile note issue not observed for manually created app level study level notifications
0
2,299
5,116,676,871
IssuesEvent
2017-01-07 06:51:24
NJDaeger/EssentialCommands
https://api.github.com/repos/NJDaeger/EssentialCommands
closed
Remove BannerManager and sell it as a separate premium plugin.
in process
BannerManager will be large enough to be a plugin on its own, and it is a unique enough plugin that I believe I can get it passed for premium.
1.0
Remove BannerManager and sell it as a separate premium plugin. - BannerManager will be large enough to be a plugin on its own, and it is a unique enough plugin that I believe I can get it passed for premium.
process
remove bannermanager and sell it as a separate premium plugin bannermanager will be large enough to be a plugin on its own and it is a unique enough plugin that i believe i can get it passed for premium
1
12,362
14,890,473,670
IssuesEvent
2021-01-20 23:09:01
nrnb/GoogleSummerOfCode
https://api.github.com/repos/nrnb/GoogleSummerOfCode
reopened
Develop GUI for ImageJ groovy script calling VCell API
Difficulty: 2 Groovy Image processing ImageJ Java VCell
### Background VCell (http://vcell.org) is an open source software platform that can model and simulate reaction-diffusion systems in geometries derived from 3D experimental microscopy images; it can also utilize experimentally derived molecular concentrations and cellular localizations. It is used by scientists around the world to create quantitative hypotheses of cellular functions whose predictions can be directly compared to experimental results. Fiji/ImageJ (https://imagej.net/) is arguably the most widely used software tool for the analysis of microscope images of cells. Both software environments will gain enhanced functionality by creating an architecture to couple these tools. Bridging these software tools will: 1. Enable VCell users to analyze and visualize their dynamic multidimensional, multivariable simulation results. 2. Enable ImageJ users to analyze live cell microscopy experiments to quantitatively assess their fit to mechanistic reaction-diffusion models and to extract quantitative parameters such as rate constants and in vivo concentrations. https://imagej.net/Plugins https://imagej.net/Writing_plugins https://imagej.net/Development ### Goal We have created a service for VCell users that allows Fiji/ImageJ scripting to directly access the VCell client. We have to expand this service into a series of user-friendly plugins for ImageJ that will automate processing and analyzing cell imaging simulation experiments, including 1) entering the appropriate images to create a geometry; 2) setting initial conditions for the simulation, 3) running multiple simulations with varying parameter sets, 4) visualizing and comparing simulation results to the original experimental image set. ### Difficulty Level 2 (1 easiest and 3 hardest). Prior experience with ImageJ, image processing and modeling app could be useful. There is a lot of help on how to create ImageJ plugins, less about interaction with VCell.
### Skills Java, Groovy (essential) ImageJ, modeling (nice to have) ### Public Repository https://github.com/virtualcell/vcell/tree/master/vcell-imagej-helper ### Potential Mentors Michael Blinov Frank Morgan Ann Cowan ### Contact Michael Blinov(mailto:blinov@uchc.edu)
1.0
Develop GUI for ImageJ groovy script calling VCell API - ### Background VCell (http://vcell.org) is an open source software platform that can model and simulate reaction-diffusion systems in geometries derived from 3D experimental microscopy images; it can also utilize experimentally derived molecular concentrations and cellular localizations. It is used by scientists around the world to create quantitative hypotheses of cellular functions whose predictions can be directly compared to experimental results. Fiji/ImageJ (https://imagej.net/) is arguably the most widely used software tool for the analysis of microscope images of cells. Both software environments will gain enhanced functionality by creating an architecture to couple these tools. Bridging these software tools will: 1. Enable VCell users to analyze and visualize their dynamic multidimensional, multivariable simulation results. 2. Enable ImageJ users to analyze live cell microscopy experiments to quantitatively assess their fit to mechanistic reaction-diffusion models and to extract quantitative parameters such as rate constants and in vivo concentrations. https://imagej.net/Plugins https://imagej.net/Writing_plugins https://imagej.net/Development ### Goal We have created a service for VCell users that allows Fiji/ImageJ scripting to directly access the VCell client. We have to expand this service into a series of user-friendly plugins for ImageJ that will automate processing and analyzing cell imaging simulation experiments, including 1) entering the appropriate images to create a geometry; 2) setting initial conditions for the simulation, 3) running multiple simulations with varying parameter sets, 4) visualizing and comparing simulation results to the original experimental image set. ### Difficulty Level 2 (1 easiest and 3 hardest). Prior experience with ImageJ, image processing and modeling app could be useful. There is a lot of help on how to create ImageJ plugins, less about interaction with VCell.
### Skills Java, Groovy (essential) ImageJ, modeling (nice to have) ### Public Repository https://github.com/virtualcell/vcell/tree/master/vcell-imagej-helper ### Potential Mentors Michael Blinov Frank Morgan Ann Cowan ### Contact Michael Blinov(mailto:blinov@uchc.edu)
process
develop gui for imagej groovy script calling vcell api background vcell is an open source software platform that can model and simulate reaction diffusion systems in geometries derived from experimental microscopy images it can also utilize experimentally derived molecular concentrations and cellular localizations it is used by scientists around the world to create quantitative hypotheses of cellular functions whose predictions can be directly compared to experimental results fiji imagej is arguably the most widely used software tool for the analysis of microscope images of cells both software environments will gain enhanced functionality by creating an architecture to couple these tools bridging these software tools will enable vcell users to analyze and visualize their dynamic multidimensional multivariable simulation results enable imagej users to analyze live cell microscopy experiments to quantitatively assess their fit to mechanistic reaction diffusion models and to extract quantitative parameters such as rate constants and in vivo concentrations goal we have created a service for vcell users that allows fiji imagej scripting to directly access the vcell client we have to expand this service into a series of user friendly plugins for imagej that will automate processing and analyzing cell imaging simulation experiments including entering the appropriate images to create a geometry setting initial conditions for the simulation running multiple simulations with varying parameter sets visualizing and comparing simulation results to the original experimental image set difficulty level easiest and hardest prior experience with imagej image processing and modeling app could be useful there is a lot of help on how to create imagej plugins less about interaction with vcell skills java groovy essential imagej modeling nice to have public repository potential mentors michael blinov frank morgan ann cowan contact michael blinov mailto blinov uchc edu
1
46,870
5,832,285,538
IssuesEvent
2017-05-08 21:25:39
bounswe/bounswe2017group11
https://api.github.com/repos/bounswe/bounswe2017group11
opened
UnitTest - Anıl
testing
Implement unit test method for getUsersTweetingMostFrequently() method in API.
1.0
UnitTest - Anıl - Implement unit test method for getUsersTweetingMostFrequently() method in API.
non_process
unittest anıl implement unit test method for getuserstweetingmostfrequently method in api
0
12,250
14,767,694,787
IssuesEvent
2021-01-10 08:16:30
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Fork start method is susceptible to deadlocks
feature module: multiprocessing module: multithreading todo triaged
## Versions ``` Ubuntu 16.04 Python 3.6.2 pytorch 0.1.12_2 ``` ## Issue description ```python import torch import torch.multiprocessing as mp import torch.functional as f import threading import numpy as np from timeit import timeit def build(cuda=False): nn = torch.nn.Sequential( torch.nn.Linear(1024, 1024), torch.nn.Linear(1024, 1) ) return nn.cuda() if cuda else nn def train(nn, X, y, epoch=100): X = torch.autograd.Variable(X) y = torch.autograd.Variable(y) optim = torch.optim.SGD(nn.parameters(), lr=0.1) for i in range(epoch): yhat = nn(X) loss = ((yhat - y) ** 2).mean() loss.backward() optim.step() def data(cuda=False): X = torch.Tensor(np.random.randn(10, 1024)) y = torch.Tensor(np.random.randn(10, 1)) return (X.cuda(), y.cuda()) if cuda else (X, y) def cpu_run(i=None): nn = build(cuda=False) d = data(cuda=False) train(nn, *d) def seq_cpu_run(): for i in range(5): cpu_run() def multiprocess_cpu_run(): pool = torch.multiprocessing.Pool(processes=1) result = pool.map(cpu_run, [() for i in range(1)]) pool.close() pool.join() return result if __name__ == "__main__": print(timeit(seq_cpu_run, number=1)) # 1 print(timeit(multiprocess_cpu_run, number=1)) # 2 ``` #1 run okay alone. #2 run okay alone. #2 then #1 runs okay. #1 then #2 never terminate. where #1 = seq_cpu_run, #2 = multiprocess_cpu_run
1.0
Fork start method is susceptible to deadlocks - ## Versions ``` Ubuntu 16.04 Python 3.6.2 pytorch 0.1.12_2 ``` ## Issue description ```python import torch import torch.multiprocessing as mp import torch.functional as f import threading import numpy as np from timeit import timeit def build(cuda=False): nn = torch.nn.Sequential( torch.nn.Linear(1024, 1024), torch.nn.Linear(1024, 1) ) return nn.cuda() if cuda else nn def train(nn, X, y, epoch=100): X = torch.autograd.Variable(X) y = torch.autograd.Variable(y) optim = torch.optim.SGD(nn.parameters(), lr=0.1) for i in range(epoch): yhat = nn(X) loss = ((yhat - y) ** 2).mean() loss.backward() optim.step() def data(cuda=False): X = torch.Tensor(np.random.randn(10, 1024)) y = torch.Tensor(np.random.randn(10, 1)) return (X.cuda(), y.cuda()) if cuda else (X, y) def cpu_run(i=None): nn = build(cuda=False) d = data(cuda=False) train(nn, *d) def seq_cpu_run(): for i in range(5): cpu_run() def multiprocess_cpu_run(): pool = torch.multiprocessing.Pool(processes=1) result = pool.map(cpu_run, [() for i in range(1)]) pool.close() pool.join() return result if __name__ == "__main__": print(timeit(seq_cpu_run, number=1)) # 1 print(timeit(multiprocess_cpu_run, number=1)) # 2 ``` #1 run okay alone. #2 run okay alone. #2 then #1 runs okay. #1 then #2 never terminate. where #1 = seq_cpu_run, #2 = multiprocess_cpu_run
process
fork start method is susceptible to deadlocks versions ubuntu python pytorch issue description python import torch import torch multiprocessing as mp import torch functional as f import threading import numpy as np from timeit import timeit def build cuda false nn torch nn sequential torch nn linear torch nn linear return nn cuda if cuda else nn def train nn x y epoch x torch autograd variable x y torch autograd variable y optim torch optim sgd nn parameters lr for i in range epoch yhat nn x loss yhat y mean loss backward optim step def data cuda false x torch tensor np random randn y torch tensor np random randn return x cuda y cuda if cuda else x y def cpu run i none nn build cuda false d data cuda false train nn d def seq cpu run for i in range cpu run def multiprocess cpu run pool torch multiprocessing pool processes result pool map cpu run pool close pool join return result if name main print timeit seq cpu run number print timeit multiprocess cpu run number run okay alone run okay alone then runs okay then never terminate where seq cpu run multiprocess cpu run
1
79,356
15,180,856,729
IssuesEvent
2021-02-15 01:31:06
stef-levesque/vscode-3dviewer
https://api.github.com/repos/stef-levesque/vscode-3dviewer
closed
FR: Open the current document in the viewer
vscode limitation
Hi I am wondering if it is also possible to open the currently opened .obj (or other formats) document in the 3d viewer? Here is how the use case would be, the user drags and drops an .obj file from the system's explorer or the file browser for inspection. Then fire "ctrl-shift-p -> open in 3d viewer" At the moment the mesh file has to be in the opened folder inside Vscode. thanks
1.0
FR: Open the current document in the viewer - Hi I am wondering if it is also possible to open the currently opened .obj (or other formats) document in the 3d viewer? Here is how the use case would be, the user drags and drops an .obj file from the system's explorer or the file browser for inspection. Then fire "ctrl-shift-p -> open in 3d viewer" At the moment the mesh file has to be in the opened folder inside Vscode. thanks
non_process
fr open the current document in the viewer hi i am wondering if it is also possible to open the currently opened obj or other formats document in the viewer here is how the use case would be the user drags and drops an obj file from the system s explorer or the file browser for inspection then fire ctrl shift p open in viewer at the moment the mesh file has to be in the opened folder inside vscode thanks
0
523,640
15,186,868,032
IssuesEvent
2021-02-15 13:01:04
percipioglobal/craft
https://api.github.com/repos/percipioglobal/craft
closed
Organisms: FAQ finetune according to the NTP
CMS Templating enhancement priority: low severity: minor
FAQ content block required for content builder matrix - Accordion [boolean] // allows the user to optionally enable a collapsible accordion style for the FAQ's - if disabled, questions are simply laid out `H3` [question] -> `<p>` [answer] with maybe a small `<hr>` separator Then a superTable loop [add new question] - Question [plainText] - Answer [redactor (simple)]
1.0
Organisms: FAQ finetune according to the NTP - FAQ content block required for content builder matrix - Accordion [boolean] // allows the user to optionally enable a collapsible accordion style for the FAQ's - if disabled, questions are simply laid out `H3` [question] -> `<p>` [answer] with maybe a small `<hr>` separator Then a superTable loop [add new question] - Question [plainText] - Answer [redactor (simple)]
non_process
organisms faq finetune according to the ntp faq content block required for content builder matrix accordion allows the user to optionally enable a collapsible accordion style for the faq s if disabled questions are simply laid out with maybe a small separator then a supertable loop question answer
0
123,733
10,281,390,860
IssuesEvent
2019-08-26 08:23:48
DataDog/dd-trace-go
https://api.github.com/repos/DataDog/dd-trace-go
closed
ddtrace/mocktracer: WithSpanID option is ignored
bug dev/testing
I don't know if this is a bug or expected behaviour but it definitely caught me by surprise. Reproduction: https://play.golang.org/p/EAoiusEOkOh Expected behaviour: That the SpanID in the SpanContext of my span should have the id I specified in WithSpanID. Observed behaviour: The span gets a SpanID (and TraceID) of 0 Essentially when creating a new span with `tracer.WithSpanID`, the span's `SpanContext` will return 0 for the SpanID and TraceID which contradicts with the documentation of `tracer.WithSpanID`: > WithSpanID sets the SpanID on the started span, instead of using a random number. If there is no parent Span (eg from ChildOf), then the TraceID will also be set to the value given here. Is this a bug or am I misunderstanding what the SpanContext is?
1.0
ddtrace/mocktracer: WithSpanID option is ignored - I don't know if this is a bug or expected behaviour but it definitely caught me by surprise. Reproduction: https://play.golang.org/p/EAoiusEOkOh Expected behaviour: That the SpanID in the SpanContext of my span should have the id I specified in WithSpanID. Observed behaviour: The span gets a SpanID (and TraceID) of 0 Essentially when creating a new span with `tracer.WithSpanID`, the span's `SpanContext` will return 0 for the SpanID and TraceID which contradicts with the documentation of `tracer.WithSpanID`: > WithSpanID sets the SpanID on the started span, instead of using a random number. If there is no parent Span (eg from ChildOf), then the TraceID will also be set to the value given here. Is this a bug or am I misunderstanding what the SpanContext is?
non_process
ddtrace mocktracer withspanid option is ignored i don t know if this is a bug or expected behaviour but it definitely caught me by surprise reproduction expected behaviour that the spanid in the spancontext of my span should have the id i specified in withspanid observed behaviour the span gets a spanid and traceid of essentially when creating a new span with tracer withspanid the span s spancontext will return for the spanid and traceid which contradicts with the documentation of tracer withspanid withspanid sets the spanid on the started span instead of using a random number if there is no parent span eg from childof then the traceid will also be set to the value given here is this a bug or am i misunderstanding what the spancontext is
0
63,346
7,718,634,066
IssuesEvent
2018-05-23 16:48:58
hypothesis/product-backlog
https://api.github.com/repos/hypothesis/product-backlog
closed
"Only Me" in groups is a poor experience
Design user requested
### Problem you are trying to address The way we currently manage private annotations seems to be consistently at odds with the way users expect private annotations to work. The idea of annotating "privately, in a group" confuses people about what the purpose of a group is, and what "private" means. As a user who primarily annotates for their own benefit I want to quickly and easily see my own annotations (and no-one else's) on any page So that I don't have to trawl through other contributions to find my own. As a user who is new to Hypothesis, and who has been invited to annotate with a group I want the simplest possible route to posting annotations in that group So that there is little risk I get stuck and frustrated. ### Your solution As discussed most recently in https://github.com/hypothesis/product-backlog/issues/191, but also elsewhere, it seems like moving from a model where annotations can be in: ``` Public shared only me Group A shared only me Group B shared only me ``` to one in which annotations are in one of: ``` Only me Public Group A Group B ``` would be a useful simplification of our user interface and would solve this and related issues. ### User requests - https://hypothesis.zendesk.com/agent/tickets/328 - https://hypothesis.zendesk.com/agent/tickets/836 (How to see only my annotations?) - https://hypothesis.zendesk.com/agent/tickets/990 ('I had assumed that once I was logged in, “Private” would be an option in the drop down menu') - https://hypothesis.zendesk.com/agent/tickets/995 ('...even my public annotation are marked with a small padlock that says that only I can see this annotation. What's the difference between these two options?') - https://hypothesis.zendesk.com/agent/tickets/996 (Feature Request: a way to filter out public annotations)
1.0
"Only Me" in groups is a poor experience - ### Problem you are trying to address The way we currently manage private annotations seems to be consistently at odds with the way users expect private annotations to work. The idea of annotating "privately, in a group" confuses people about what the purpose of a group is, and what "private" means. As a user who primarily annotates for their own benefit I want to quickly and easily see my own annotations (and no-one else's) on any page So that I don't have to trawl through other contributions to find my own. As a user who is new to Hypothesis, and who has been invited to annotate with a group I want the simplest possible route to posting annotations in that group So that there is little risk I get stuck and frustrated. ### Your solution As discussed most recently in https://github.com/hypothesis/product-backlog/issues/191, but also elsewhere, it seems like moving from a model where annotations can be in: ``` Public shared only me Group A shared only me Group B shared only me ``` to one in which annotations are in one of: ``` Only me Public Group A Group B ``` would be a useful simplification of our user interface and would solve this and related issues. ### User requests - https://hypothesis.zendesk.com/agent/tickets/328 - https://hypothesis.zendesk.com/agent/tickets/836 (How to see only my annotations?) - https://hypothesis.zendesk.com/agent/tickets/990 ('I had assumed that once I was logged in, “Private” would be an option in the drop down menu') - https://hypothesis.zendesk.com/agent/tickets/995 ('...even my public annotation are marked with a small padlock that says that only I can see this annotation. What's the difference between these two options?') - https://hypothesis.zendesk.com/agent/tickets/996 (Feature Request: a way to filter out public annotations)
non_process
only me in groups is a poor experience problem you are trying to address the way we currently manage private annotations seems to be consistently at odds with the way users expect private annotations to work the idea of annotating privately in a group confuses people about what the purpose of a group is and what private means as a user who primarily annotates for their own benefit i want to quickly and easily see my own annotations and no one else s on any page so that i don t have to trawl through other contributions to find my own as a user who is new to hypothesis and who has been invited to annotate with a group i want the simplest possible route to posting annotations in that group so that there is little risk i get stuck and frustrated your solution as discussed most recently in but also elsewhere it seems like moving from a model where annotations can be in public shared only me group a shared only me group b shared only me to one in which annotations are in one of only me public group a group b would be a useful simplification of our user interface and would solve this and related issues user requests how to see only my annotations i had assumed that once i was logged in “private” would be an option in the drop down menu even my public annotation are marked with a small padlock that says that only i can see this annotation what s the difference between these two options feature request a way to filter out public annotations
0
6,443
9,546,037,361
IssuesEvent
2019-05-01 18:45:55
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Department of State: Select internship opportunities from Saved Search
Apply Process Approved Requirements Ready State Dept.
Who: Student What: Selecting an opportunity from "Saved Searches" Why: As a student, I would like to be able to select my internship opportunities from a list of opportunities I have previously saved. A/C - When the user selects an internship opportunity from the list in the "saved internship opportunities" box it will be displayed in the next available choice (1st, 2nd or 3rd). - There will be a "Go to saved" button on the right bar that will take the user to the user's landing page where they can view all of their saved searches - All of the saved searches that have not been selected as one of the 3 choices in the right bar will say "select" - Once the user has moved a saved search into their application (one of their 3 choices) on the left the saved search will say "Selected" - The total number of saved searches will display at the top of the right rail. InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/319289308/preview Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54 Related ticket 2788
1.0
Department of State: Select internship opportunities from Saved Search - Who: Student What: Selecting an opportunity from "Saved Searches" Why: As a student, I would like to be able to select my internship opportunities from a list of opportunities I have previously saved. A/C - When the user selects an internship opportunity from the list in the "saved internship opportunities" box it will be displayed in the next available choice (1st, 2nd or 3rd). - There will be a "Go to saved" button on the right bar that will take the user to the user's landing page where they can view all of their saved searches - All of the saved searches that have not been selected as one of the 3 choices in the right bar will say "select" - Once the user has moved a saved search into their application (one of their 3 choices) on the left the saved search will say "Selected" - The total number of saved searches will display at the top of the right rail. InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/319289308/preview Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54 Related ticket 2788
process
department of state select internship opportunities from saved search who student what selecting an opportunity from saved searches why as a student i would like to be able to select my internship opportunities from a list of opportunities i have previously saved a c when the user selects an internship opportunity from the list in the saved internship opportunities box it will be displayed in the next available choice or there will be a go to saved button on the right bar that will take the user to the user s landing page where they can view all of their saved searches all of the saved searches that have not been selected as one of the choices in the right bar will say select once the user has moved a saved search into their application one of their choices on the left the saved search will say selected the total number of saved searches will display at the top of the right rail invision mock public link related ticket
1
206,791
16,056,805,383
IssuesEvent
2021-04-23 06:48:58
Luccien/CoronaHotspotAndShoppingHelper
https://api.github.com/repos/Luccien/CoronaHotspotAndShoppingHelper
closed
Android App Publishers
documentation enhancement good first issue help wanted
The app offers the possibility to limit the usage to certain longitude and latitude, so that the app can be offered to users of a specific region. Apps can be adapted to the different needs of a region. With different paralleled apps offering different graphical interfaces and features, it is easier to find out what works.
1.0
Android App Publishers - The app offers the possibility to limit the usage to certain longitude and latitude, so that the app can be offered to users of a specific region. Apps can be adapted to the different needs of a region. With different paralleled apps offering different graphical interfaces and features, it is easier to find out what works.
non_process
android app publishers the app offers the possibility to limit the usage to certain longitude and latitude so that the app can be offered to users of a specific region apps can be adapted to the different needs of a region with different paralleled apps offering different graphical interfaces and features it is easier to find out what works
0
19,873
26,287,671,030
IssuesEvent
2023-01-08 02:00:07
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 6 Jan 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera ### Event Camera Data Pre-training - **Authors:** Yan Yang, Liyuan Pan, Liu Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.01928 - **Pdf link:** https://arxiv.org/pdf/2301.01928 - **Abstract** This paper proposes a pre-trained neural network for handling event camera data. Our model is trained in a self-supervised learning framework, and uses paired event camera data and natural RGB images for training. Our method contains three modules connected in a sequence: i) a family of event data augmentations, generating meaningful event images for self-supervised training; ii) a conditional masking strategy to sample informative event patches from event images, encouraging our model to capture the spatial layout of a scene and fast training; iii) a contrastive learning approach, enforcing the similarity of embeddings between matching event images, and between paired event-RGB images. An embedding projection loss is proposed to avoid the model collapse when enforcing event embedding similarities. A probability distribution alignment loss is proposed to encourage the event data to be consistent with its paired RGB image in feature space. Transfer performance in downstream tasks shows superior performance of our method over state-of-the-art methods. For example, we achieve top-1 accuracy at 64.83\% on the N-ImageNet dataset. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Automatic Classification of Single Tree Decay Stages from Combined ALS Data and Aerial Imagery using Machine Learning - **Authors:** Tsz Chung Wong, Abubakar Sani-Mohammed, Wei Yao, Marco Heurich - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.01841 - **Pdf link:** https://arxiv.org/pdf/2301.01841 - **Abstract** Understanding forest health is of great importance for the conservation of the integrity of forest ecosystems. The monitoring of forest health is, therefore, indispensable for the long-term conservation of forests and their sustainable management. In this regard, evaluating the amount and quality of dead wood is of utmost interest as they are favorable indicators of biodiversity. Apparently, remote sensing-based machine learning techniques have proven to be more efficient and sustainable with unprecedented accuracy in forest inventory. However, the application of these techniques is still in its infancy with respect to dead wood mapping. This study investigates for the first time the automatic classification of individual coniferous trees into five decay stages (live, declining, dead, loose bark, and clean) from combined airborne laser scanning (ALS) point clouds and CIR images using three Machine Learning methods - 3D point cloud-based deep learning (PointNet), Convolutional Neural Network (CNN), and Random Forest (RF). All models achieved promising results, reaching overall accuracy (OA) up to 90.9%, 90.6%, and 80.6% for CNN, RF, and PointNet, respectively. The experimental results reveal that the image-based approach notably outperformed the 3D point cloud-based one, while spectral image texture is of the highest relevance to the success of categorizing tree decay. 
Our models could therefore be used for automatic determination of single tree decay stages and landscape-wide assessment of dead wood amount and quality using modern airborne remote sensing techniques with machine/deep learning. The proposed method can contribute as an important and rigorous tool for monitoring biodiversity in forest ecosystems. ### GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods - **Authors:** Da Yin, Feng Gao, Govind Thattai, Michael Johnston, Kai-Wei Chang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.01893 - **Pdf link:** https://arxiv.org/pdf/2301.01893 - **Abstract** A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities regardless of their geographical region. In fact, a significant proportion of knowledge is locally shared by people from certain regions but may not apply equally in other regions because of cultural differences. If a model is unaware of regional characteristics, it may lead to performance disparity across regions and result in bias against underrepresented groups. We propose GIVL, a Geographically Inclusive Vision-and-Language Pre-trained model. There are two attributes of geo-diverse visual concepts which can help to learn geo-diverse knowledge: 1) concepts under similar categories have unique knowledge and visual characteristics, 2) concepts with similar visual features may fall in completely different categories. Motivated by the attributes, we design new pre-training objectives Image Knowledge Matching (IKM) and Image Edit Checking (IEC) to pre-train GIVL. Compared with similar-size models pre-trained with similar scale of data, GIVL achieves state-of-the-art (SOTA) and more balanced performance on geo-diverse V&L tasks. 
## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### CiT: Curation in Training for Effective Vision-Language Data - **Authors:** Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.02241 - **Pdf link:** https://arxiv.org/pdf/2301.02241 - **Abstract** Large vision-language models are generally applicable to many downstream tasks, but come at an exorbitant training cost that only large institutions can afford. This paper trades generality for efficiency and presents Curation in Training (CiT), a simple and efficient vision-text learning algorithm that couples a data objective into training. CiT automatically yields quality data to speed-up contrastive image-text training and alleviates the need for an offline data filtering pipeline, allowing broad data sources (including raw image-text pairs from the web). CiT contains two loops: an outer loop curating the training data and an inner loop consuming the curated training data. The text encoder connects the two loops. Given metadata for tasks of interest, e.g., class names, and a large pool of image-text pairs, CiT alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata. In our experiments, we observe that CiT can speed up training by over an order of magnitude, especially if the raw data size is large. 
## Keyword: raw image ### CiT: Curation in Training for Effective Vision-Language Data - **Authors:** Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.02241 - **Pdf link:** https://arxiv.org/pdf/2301.02241 - **Abstract** Large vision-language models are generally applicable to many downstream tasks, but come at an exorbitant training cost that only large institutions can afford. This paper trades generality for efficiency and presents Curation in Training (CiT), a simple and efficient vision-text learning algorithm that couples a data objective into training. CiT automatically yields quality data to speed-up contrastive image-text training and alleviates the need for an offline data filtering pipeline, allowing broad data sources (including raw image-text pairs from the web). CiT contains two loops: an outer loop curating the training data and an inner loop consuming the curated training data. The text encoder connects the two loops. Given metadata for tasks of interest, e.g., class names, and a large pool of image-text pairs, CiT alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata. In our experiments, we observe that CiT can speed up training by over an order of magnitude, especially if the raw data size is large.
2.0
New submissions for Fri, 6 Jan 23 - ## Keyword: events There is no result ## Keyword: event camera ### Event Camera Data Pre-training - **Authors:** Yan Yang, Liyuan Pan, Liu Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.01928 - **Pdf link:** https://arxiv.org/pdf/2301.01928 - **Abstract** This paper proposes a pre-trained neural network for handling event camera data. Our model is trained in a self-supervised learning framework, and uses paired event camera data and natural RGB images for training. Our method contains three modules connected in a sequence: i) a family of event data augmentations, generating meaningful event images for self-supervised training; ii) a conditional masking strategy to sample informative event patches from event images, encouraging our model to capture the spatial layout of a scene and fast training; iii) a contrastive learning approach, enforcing the similarity of embeddings between matching event images, and between paired event-RGB images. An embedding projection loss is proposed to avoid the model collapse when enforcing event embedding similarities. A probability distribution alignment loss is proposed to encourage the event data to be consistent with its paired RGB image in feature space. Transfer performance in downstream tasks shows superior performance of our method over state-of-the-art methods. For example, we achieve top-1 accuracy at 64.83\% on the N-ImageNet dataset. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Automatic Classification of Single Tree Decay Stages from Combined ALS Data and Aerial Imagery using Machine Learning - **Authors:** Tsz Chung Wong, Abubakar Sani-Mohammed, Wei Yao, Marco Heurich - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.01841 - **Pdf link:** https://arxiv.org/pdf/2301.01841 - **Abstract** Understanding forest health is of great importance for the conservation of the integrity of forest ecosystems. The monitoring of forest health is, therefore, indispensable for the long-term conservation of forests and their sustainable management. In this regard, evaluating the amount and quality of dead wood is of utmost interest as they are favorable indicators of biodiversity. Apparently, remote sensing-based machine learning techniques have proven to be more efficient and sustainable with unprecedented accuracy in forest inventory. However, the application of these techniques is still in its infancy with respect to dead wood mapping. This study investigates for the first time the automatic classification of individual coniferous trees into five decay stages (live, declining, dead, loose bark, and clean) from combined airborne laser scanning (ALS) point clouds and CIR images using three Machine Learning methods - 3D point cloud-based deep learning (PointNet), Convolutional Neural Network (CNN), and Random Forest (RF). All models achieved promising results, reaching overall accuracy (OA) up to 90.9%, 90.6%, and 80.6% for CNN, RF, and PointNet, respectively. The experimental results reveal that the image-based approach notably outperformed the 3D point cloud-based one, while spectral image texture is of the highest relevance to the success of categorizing tree decay. 
Our models could therefore be used for automatic determination of single tree decay stages and landscape-wide assessment of dead wood amount and quality using modern airborne remote sensing techniques with machine/deep learning. The proposed method can contribute as an important and rigorous tool for monitoring biodiversity in forest ecosystems. ### GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods - **Authors:** Da Yin, Feng Gao, Govind Thattai, Michael Johnston, Kai-Wei Chang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.01893 - **Pdf link:** https://arxiv.org/pdf/2301.01893 - **Abstract** A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities regardless of their geographical region. In fact, a significant proportion of knowledge is locally shared by people from certain regions but may not apply equally in other regions because of cultural differences. If a model is unaware of regional characteristics, it may lead to performance disparity across regions and result in bias against underrepresented groups. We propose GIVL, a Geographically Inclusive Vision-and-Language Pre-trained model. There are two attributes of geo-diverse visual concepts which can help to learn geo-diverse knowledge: 1) concepts under similar categories have unique knowledge and visual characteristics, 2) concepts with similar visual features may fall in completely different categories. Motivated by the attributes, we design new pre-training objectives Image Knowledge Matching (IKM) and Image Edit Checking (IEC) to pre-train GIVL. Compared with similar-size models pre-trained with similar scale of data, GIVL achieves state-of-the-art (SOTA) and more balanced performance on geo-diverse V&L tasks. 
## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### CiT: Curation in Training for Effective Vision-Language Data - **Authors:** Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.02241 - **Pdf link:** https://arxiv.org/pdf/2301.02241 - **Abstract** Large vision-language models are generally applicable to many downstream tasks, but come at an exorbitant training cost that only large institutions can afford. This paper trades generality for efficiency and presents Curation in Training (CiT), a simple and efficient vision-text learning algorithm that couples a data objective into training. CiT automatically yields quality data to speed-up contrastive image-text training and alleviates the need for an offline data filtering pipeline, allowing broad data sources (including raw image-text pairs from the web). CiT contains two loops: an outer loop curating the training data and an inner loop consuming the curated training data. The text encoder connects the two loops. Given metadata for tasks of interest, e.g., class names, and a large pool of image-text pairs, CiT alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata. In our experiments, we observe that CiT can speed up training by over an order of magnitude, especially if the raw data size is large. 
## Keyword: raw image ### CiT: Curation in Training for Effective Vision-Language Data - **Authors:** Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.02241 - **Pdf link:** https://arxiv.org/pdf/2301.02241 - **Abstract** Large vision-language models are generally applicable to many downstream tasks, but come at an exorbitant training cost that only large institutions can afford. This paper trades generality for efficiency and presents Curation in Training (CiT), a simple and efficient vision-text learning algorithm that couples a data objective into training. CiT automatically yields quality data to speed-up contrastive image-text training and alleviates the need for an offline data filtering pipeline, allowing broad data sources (including raw image-text pairs from the web). CiT contains two loops: an outer loop curating the training data and an inner loop consuming the curated training data. The text encoder connects the two loops. Given metadata for tasks of interest, e.g., class names, and a large pool of image-text pairs, CiT alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata. In our experiments, we observe that CiT can speed up training by over an order of magnitude, especially if the raw data size is large.
process
new submissions for fri jan keyword events there is no result keyword event camera event camera data pre training authors yan yang liyuan pan liu liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper proposes a pre trained neural network for handling event camera data our model is trained in a self supervised learning framework and uses paired event camera data and natural rgb images for training our method contains three modules connected in a sequence i a family of event data augmentations generating meaningful event images for self supervised training ii a conditional masking strategy to sample informative event patches from event images encouraging our model to capture the spatial layout of a scene and fast training iii a contrastive learning approach enforcing the similarity of embeddings between matching event images and between paired event rgb images an embedding projection loss is proposed to avoid the model collapse when enforcing event embedding similarities a probability distribution alignment loss is proposed to encourage the event data to be consistent with its paired rgb image in feature space transfer performance in downstream tasks shows superior performance of our method over state of the art methods for example we achieve top accuracy at on the n imagenet dataset keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp automatic classification of single tree decay stages from combined als data and aerial imagery using machine learning authors tsz chung wong abubakar sani mohammed wei yao marco heurich subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract understanding forest health is of great importance for the conservation of the integrity of forest ecosystems the monitoring of forest health is therefore indispensable for the long term conservation of forests and 
their sustainable management in this regard evaluating the amount and quality of dead wood is of utmost interest as they are favorable indicators of biodiversity apparently remote sensing based machine learning techniques have proven to be more efficient and sustainable with unprecedented accuracy in forest inventory however the application of these techniques is still in its infancy with respect to dead wood mapping this study investigates for the first time the automatic classification of individual coniferous trees into five decay stages live declining dead loose bark and clean from combined airborne laser scanning als point clouds and cir images using three machine learning methods point cloud based deep learning pointnet convolutional neural network cnn and random forest rf all models achieved promising results reaching overall accuracy oa up to and for cnn rf and pointnet respectively the experimental results reveal that the image based approach notably outperformed the point cloud based one while spectral image texture is of the highest relevance to the success of categorizing tree decay our models could therefore be used for automatic determination of single tree decay stages and landscape wide assessment of dead wood amount and quality using modern airborne remote sensing techniques with machine deep learning the proposed method can contribute as an important and rigorous tool for monitoring biodiversity in forest ecosystems givl improving geographical inclusivity of vision language models with pre training methods authors da yin feng gao govind thattai michael johnston kai wei chang subjects computer vision and pattern recognition cs cv artificial intelligence cs ai computation and language cs cl arxiv link pdf link abstract a key goal for the advancement of ai is to develop technologies that serve the needs not just of one group but of all communities regardless of their geographical region in fact a significant proportion of knowledge is locally shared 
by people from certain regions but may not apply equally in other regions because of cultural differences if a model is unaware of regional characteristics it may lead to performance disparity across regions and result in bias against underrepresented groups we propose givl a geographically inclusive vision and language pre trained model there are two attributes of geo diverse visual concepts which can help to learn geo diverse knowledge concepts under similar categories have unique knowledge and visual characteristics concepts with similar visual features may fall in completely different categories motivated by the attributes we design new pre training objectives image knowledge matching ikm and image edit checking iec to pre train givl compared with similar size models pre trained with similar scale of data givl achieves state of the art sota and more balanced performance on geo diverse v l tasks keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw cit curation in training for effective vision language data authors hu xu saining xie po yao huang licheng yu russell howes gargi ghosh luke zettlemoyer christoph feichtenhofer subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract large vision language models are generally applicable to many downstream tasks but come at an exorbitant training cost that only large institutions can afford this paper trades generality for efficiency and presents curation in training cit a simple and efficient vision text learning algorithm that couples a data objective into training cit automatically yields quality data to speed up contrastive image text training and alleviates the need for an offline data filtering pipeline allowing broad data sources including raw image text pairs from the web cit contains two loops an outer loop curating the training data and an inner loop 
consuming the curated training data the text encoder connects the two loops given metadata for tasks of interest e g class names and a large pool of image text pairs cit alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata in our experiments we observe that cit can speed up training by over an order of magnitude especially if the raw data size is large keyword raw image cit curation in training for effective vision language data authors hu xu saining xie po yao huang licheng yu russell howes gargi ghosh luke zettlemoyer christoph feichtenhofer subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract large vision language models are generally applicable to many downstream tasks but come at an exorbitant training cost that only large institutions can afford this paper trades generality for efficiency and presents curation in training cit a simple and efficient vision text learning algorithm that couples a data objective into training cit automatically yields quality data to speed up contrastive image text training and alleviates the need for an offline data filtering pipeline allowing broad data sources including raw image text pairs from the web cit contains two loops an outer loop curating the training data and an inner loop consuming the curated training data the text encoder connects the two loops given metadata for tasks of interest e g class names and a large pool of image text pairs cit alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata in our experiments we observe that cit can speed up training by over an order of magnitude especially if the raw data size is large
1
12,873
15,263,992,466
IssuesEvent
2021-02-22 04:17:07
syncfusion/ej2-react-ui-components
https://api.github.com/repos/syncfusion/ej2-react-ui-components
closed
Document Editor no overflow
word-processor
Is there a simple way to make the document editor or the container render at full height, i.e. with no scrollbars? I tried manipulating the CSS/styling, but can't come up with an approach that works for my use-case.
1.0
Document Editor no overflow - Is there a simple way to make the document editor or the container render at full height, i.e. with no scrollbars? I tried manipulating the CSS/styling, but can't come up with an approach that works for my use-case.
process
document editor no overflow is there a simple way to make the document editor or the container render at full height i e no scrollbars i tried manipulating the css styling but can t come up with an approach that works for my use case
1