Dataset schema (per-row columns, with dtypes and ranges as reported by the dataset viewer):

- `Unnamed: 0`: int64, row index (0 to ~832k)
- `id`: float64, GitHub event id (~2.49B to ~32.1B)
- `type`: string, 1 class (`IssuesEvent`)
- `created_at`: string, length 19 (`YYYY-MM-DD HH:MM:SS`)
- `repo`: string, length 5 to 112 (owner/name)
- `repo_url`: string, length 34 to 141 (GitHub API URL)
- `action`: string, 3 classes (e.g. `opened`, `closed`)
- `title`: string, length 1 to 1k (issue title)
- `labels`: string, length 4 to 1.38k (issue labels)
- `body`: string, length 1 to 262k (issue body)
- `index`: string, 16 classes
- `text_combine`: string, length 96 to 262k (title and body concatenated)
- `label`: string, 2 classes (`priority` / `non_priority`)
- `text`: string, length 96 to 252k (normalized lowercase text)
- `binary_label`: int64, 0 or 1 (integer encoding of `label`)
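The `label`/`binary_label` pairing in the schema implies a simple two-class encoding. A minimal pure-Python sketch of that relationship, using values taken from the records below (nothing here is part of the dataset's own tooling):

```python
# Sketch of the label encoding implied by the schema: "label" is a
# two-class string column and "binary_label" its 0/1 integer encoding.
rows = [
    {"repo": "learningequality/ka-lite", "label": "non_priority", "binary_label": 0},
    {"repo": "aayaffe/SailingRaceCourseManager", "label": "priority", "binary_label": 1},
]

for row in rows:
    # binary_label should be 1 exactly when label == "priority"
    assert row["binary_label"] == int(row["label"] == "priority")

priority_rows = [r for r in rows if r["binary_label"] == 1]
print(len(priority_rows))  # 1
```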
Row 2,701 (id 3,593,202,086, IssuesEvent, 2016-02-01 18:49:55)
- repo: learningequality/ka-lite (https://api.github.com/repos/learningequality/ka-lite)
- action: closed
- title: BrowserActionMixins need to be refactored
- labels: bug refactoring and performance release blocker
- body: Branch: develop. In kalite/testing/mixins/browser_mixins.py, the method `browser_register_user` doesn't have the desired effect. Perhaps other methods need to be refactored too?
- index: True
- label: non_priority (binary_label: 0)
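The lowercased `text` field appears to be derived from the title and body by lowercasing and splitting on non-alphanumeric characters; digit-bearing tokens ("17.3.1", "java 18") are also absent in later rows. A hedged guess at that normalization (the function name is invented for illustration, and this is not the dataset's actual preprocessing code):

```python
import re

def normalize(s: str) -> str:
    # Lowercase, split on runs of non-alphanumeric characters, and keep
    # only purely alphabetic tokens (digit-bearing tokens such as
    # version numbers do not appear in the text column).
    tokens = re.split(r"[^0-9a-z]+", s.lower())
    return " ".join(t for t in tokens if t.isalpha())

title = "BrowserActionMixins need to be refactored"
body = ("Branch: develop In kalite/testing/mixins/browser_mixins.py, "
        "the method `browser_register_user` doesn't have the desired "
        "effect. Perhaps other methods need to be refactored too?")

# Reproduces the "text" value stored for this row
print(normalize(title + " " + body))
```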
Row 209,323 (id 7,168,542,786, IssuesEvent, 2018-01-30 01:05:07)
- repo: servicecatalog/development (https://api.github.com/repos/servicecatalog/development)
- action: closed
- title: After upgrading to 17.3.1, the following error message is repeatedly logged in system.log
- labels: bug/status/fixed priority/P1
- body: After upgrading to 17.3.1, the following error message is repeatedly logged in system.log
<pre>01/29/2018_02:40:56.951 FSP_INTS-BSS: ERROR: ThreadID http-listener-2(3): ApplicationBean: 70023: Error formating build date
java.text.ParseException: Unparseable date: "2018012611"
at java.text.DateFormat.parse(DateFormat.java:366)
at org.oscm.ui.beans.ApplicationBean.initBuildIdAndDate(ApplicationBean.java:159)
at org.oscm.ui.beans.ApplicationBean.getBuildId(ApplicationBean.java:423)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.el.BeanELResolver.getValue(BeanELResolver.java:363)
at com.sun.faces.el.DemuxCompositeELResolver._getValue(DemuxCompositeELResolver.java:176)
at com.sun.faces.el.DemuxCompositeELResolver.getValue(DemuxCompositeELResolver.java:203)
at com.sun.el.parser.AstValue.getValue(AstValue.java:140)
at com.sun.el.parser.AstValue.getValue(AstValue.java:204)
at com.sun.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:226)
at org.jboss.weld.el.WeldValueExpression.getValue(WeldValueExpression.java:50)
at com.sun.faces.facelets.el.TagValueExpression.getValue(TagValueExpression.java:109)
at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:194)
at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:182)
at javax.faces.component.UIOutput.getValue(UIOutput.java:174)
at com.sun.faces.renderkit.html_basic.HtmlBasicInputRenderer.getValue(HtmlBasicInputRenderer.java:205)
at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.getCurrentValue(HtmlBasicRenderer.java:355)
at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeEnd(HtmlBasicRenderer.java:164)
at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:920)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1863)
at org.richfaces.renderkit.RendererBase.renderChildren(RendererBase.java:282)
at org.richfaces.renderkit.html.AjaxOutputPanelRenderer.doEncodeChildren(AjaxOutputPanelRenderer.java:57)
at org.richfaces.renderkit.RendererBase.encodeChildren(RendererBase.java:158)
at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:890)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1856)
at org.richfaces.renderkit.RendererBase.renderChildren(RendererBase.java:282)
at org.richfaces.renderkit.html.PopupPanelRenderer.doEncodeEnd(PopupPanelRenderer.java:545)
at org.richfaces.renderkit.RendererBase.encodeEnd(RendererBase.java:180)
at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:920)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1863)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1859)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1859)
at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:458)
at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:134)
at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:337)
at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:337)
at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:337)
at org.apache.myfaces.tomahawk.application.ResourceViewHandlerWrapper.renderView(ResourceViewHandlerWrapper.java:169)
at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:337)
at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:120)
at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:219)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:659)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(ExtensionsFilter.java:357)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.AuthorizationFilter.handleLoggedInUser(AuthorizationFilter.java:752)
at org.oscm.ui.filter.AuthorizationFilter.handleProtectedUrlAndChangePwdCase(AuthorizationFilter.java:290)
at org.oscm.ui.filter.AuthorizationFilter.doFilter(AuthorizationFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.NonexistentConversationFilter.doFilter(NonexistentConversationFilter.java:37)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.MarketplaceContextFilter.doFilter(MarketplaceContextFilter.java:112)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.DisableUrlFilter.doFilter(DisableUrlFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.IdPLogoutFilter.doFilter(IdPLogoutFilter.java:133)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.IllegalRequestParameterFilter.doFilter(IllegalRequestParameterFilter.java:81)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.IdPResponseFilter.doFilter(IdPResponseFilter.java:159)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.HttpMethodFilter.doFilter(HttpMethodFilter.java:69)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.oscm.ui.filter.DisableUrlSessionFilter.doFilter(DisableUrlSessionFilter.java:50)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:413)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:283)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
at java.lang.Thread.run(Thread.java:748)</pre>
- index: 1.0
- label: priority (binary_label: 1)
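The ParseException above is consistent with a build stamp of the form `yyyyMMddHH` ("2018012611") being fed to a stricter date pattern. A small Python sketch reproducing the mismatch; the patterns are assumptions, since the actual format string lives in `ApplicationBean.initBuildIdAndDate`:

```python
from datetime import datetime

raw = "2018012611"  # build stamp from the log: apparently yyyyMMdd plus a 2-digit hour

# A date-only pattern rejects the trailing hour digits, mirroring the
# logged "Unparseable date" exception.
try:
    datetime.strptime(raw, "%Y%m%d")
except ValueError as exc:
    print("parse failed:", exc)

# A pattern that accounts for the trailing hour parses cleanly.
print(datetime.strptime(raw, "%Y%m%d%H"))  # 2018-01-26 11:00:00
```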
Row 82,681 (id 23,851,813,471, IssuesEvent, 2022-09-06 18:41:31)
- repo: bitcoin-s/bitcoin-s (https://api.github.com/repos/bitcoin-s/bitcoin-s)
- action: closed
- title: slick 3.4.0-M1 dependency upgrade breaks jlink build
- labels: bug build dependencies db-commons
- body: This is fixed in java 18, but unfortunately the tool we use to select java versions on CI isn't updated for java 18 (#4275).
https://stackoverflow.com/a/70011064/967713
For some weird reason I can't reproduce this bug locally; it does show up on CI, see #4342
https://github.com/bitcoin-s/bitcoin-s/commit/63df47e002f6cc6c18a095d51257f83d9a7ea1da
- index: 1.0
- label: non_priority (binary_label: 0)
Row 159,074 (id 6,040,290,177, IssuesEvent, 2017-06-10 12:57:30)
- repo: aayaffe/SailingRaceCourseManager (https://api.github.com/repos/aayaffe/SailingRaceCourseManager)
- action: closed
- title: Write to DB If user leaves event. Handle correctly
- labels: Priority: Medium Type: Enhancement
- body: See whether to remove from display/delete from db/remove mark assignments etc.
- index: 1.0
- label: priority (binary_label: 1)
727,450 | 25,035,831,936 | IssuesEvent | 2022-11-04 15:56:50 | bounswe/bounswe2022group1 | https://api.github.com/repos/bounswe/bounswe2022group1 | closed | Defining and Creating New Branches For Android App | Type: Enhancement Priority: Critical Status: Revision Needed Android | `Description:` Defining new branches name and creating new branches to speed up the development process and develop more stable codes.
`Deadline`: 23.10.2022 - 12:00 | 1.0 | Defining and Creating New Branches For Android App - `Description:` Defining new branches name and creating new branches to speed up the development process and develop more stable codes.
`Deadline`: 23.10.2022 - 12:00 | priority | defining and creating new branches for android app description defining new branches name and creating new branches to speed up the development process and develop more stable codes deadline | 1 |
800,419 | 28,365,194,543 | IssuesEvent | 2023-04-12 13:28:56 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | opened | The build failed | priority: p1 type: bug flakybot: issue | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0bf03c78ecde146f71673196a4e2f35180d5ee97
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/bef0e22e-6cad-4875-9688-cec57a81ef32), [Sponge](http://sponge2/bef0e22e-6cad-4875-9688-cec57a81ef32)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/media_cdn/dualtoken_test.py", line 42, in <module>
with open("/tmp/example.key", "wb") as fp:
OSError: [Errno 30] Read-only file system: '/tmp/example.key'</pre></details> | 1.0 | The build failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0bf03c78ecde146f71673196a4e2f35180d5ee97
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/bef0e22e-6cad-4875-9688-cec57a81ef32), [Sponge](http://sponge2/bef0e22e-6cad-4875-9688-cec57a81ef32)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/media_cdn/dualtoken_test.py", line 42, in <module>
with open("/tmp/example.key", "wb") as fp:
OSError: [Errno 30] Read-only file system: '/tmp/example.key'</pre></details> | priority | the build failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace media cdn dualtoken test py line in with open tmp example key wb as fp oserror read only file system tmp example key | 1 |
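The traceback in the record above fails because `/tmp` is mounted read-only in that build sandbox. As an illustrative sketch only (not the project's actual fix), letting `tempfile` pick the location avoids the hard-coded path, since `tempfile.mkstemp` honors the `TMPDIR` environment variable and so can be pointed at a writable directory:

```python
import os
import tempfile

# Sketch: instead of hard-coding /tmp (read-only on some CI sandboxes),
# ask tempfile for a writable location. The helper name and key material
# below are hypothetical, for illustration only.
def write_key(data: bytes) -> str:
    """Write key material to a fresh temp file and return its path."""
    fd, path = tempfile.mkstemp(suffix=".key")
    with os.fdopen(fd, "wb") as fp:
        fp.write(data)
    return path

key_path = write_key(b"example-key-material")
print(os.path.exists(key_path))  # True
os.remove(key_path)
```

On a sandboxed runner one would set `TMPDIR` to a writable scratch directory so the same code works unchanged.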
158,470 | 6,028,196,560 | IssuesEvent | 2017-06-08 15:15:57 | dwyl/hq | https://api.github.com/repos/dwyl/hq | closed | dwyl VAT return | period ending 30th April 2017 | finance priority-1 | # Deadline: 7th June 2017
@markwilliamfirth Please _codify_ the steps for doing this into an `md` file in the _finances_ folder as you go :bush:
+ [ ] Finalise income for the period 1st February - 30th April 2017 (a decent portion of this will be done already through #229 )
+ [ ] Determine VAT on expenses for that period
+ [ ] F&C invoices
+ [ ] Ice Cream Rocketship invoices
+ [x] Receipts/invoices (@iteles needs 24-48 hours to find her wallet 😬
+ [ ] Fill our VAT statement
+ [ ] Pay VAT for the quarter
(assigning priority 2 because the deadline is not until next week, but there is some work to be done here) | 1.0 | dwyl VAT return | period ending 30th April 2017 - # Deadline: 7th June 2017
@markwilliamfirth Please _codify_ the steps for doing this into an `md` file in the _finances_ folder as you go :bush:
+ [ ] Finalise income for the period 1st February - 30th April 2017 (a decent portion of this will be done already through #229 )
+ [ ] Determine VAT on expenses for that period
+ [ ] F&C invoices
+ [ ] Ice Cream Rocketship invoices
+ [x] Receipts/invoices (@iteles needs 24-48 hours to find her wallet 😬
+ [ ] Fill our VAT statement
+ [ ] Pay VAT for the quarter
(assigning priority 2 because the deadline is not until next week, but there is some work to be done here) | priority | dwyl vat return period ending april deadline june markwilliamfirth please codify the steps for doing this into an md file in the finances folder as you go bush finalise income for the period february april a decent portion of this will be done already through determine vat on expenses for that period f c invoices ice cream rocketship invoices receipts invoices iteles needs hours to find her wallet 😬 fill our vat statement pay vat for the quarter assigning priority because the deadline is not until next week but there is some work to be done here | 1 |
73,172 | 15,252,862,449 | IssuesEvent | 2021-02-20 04:59:22 | gate5/struts-2.3.20 | https://api.github.com/repos/gate5/struts-2.3.20 | closed | CVE-2017-9804 (High) detected in xwork-core-2.3.20.jar, struts2-core-2.3.20.jar - autoclosed | security vulnerability | ## CVE-2017-9804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>xwork-core-2.3.20.jar</b>, <b>struts2-core-2.3.20.jar</b></p></summary>
<p>
<details><summary><b>xwork-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/">http://struts.apache.org/</a></p>
<p>Path to dependency file: struts-2.3.20/plugins/junit/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/reposit
ory/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.20.jar (Root Library)
- :x: **xwork-core-2.3.20.jar** (Vulnerable Library)
</details>
<details><summary><b>struts2-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: struts-2.3.20/plugins/sitemesh/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/stru
ts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.20.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gate5/struts-2.3.20/commit/1d3a9da2b49a075b9122e05e19a483fc66b5aaf4">1d3a9da2b49a075b9122e05e19a483fc66b5aaf4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Struts 2.3.7 through 2.3.33 and 2.5 through 2.5.12, if an application allows entering a URL in a form field and built-in URLValidator is used, it is possible to prepare a special URL which will be used to overload server process when performing validation of the URL. NOTE: this vulnerability exists because of an incomplete fix for S2-047 / CVE-2017-7672.
<p>Publish Date: 2017-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9804>CVE-2017-9804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-9804">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-9804</a></p>
<p>Release Date: 2017-09-20</p>
<p>Fix Resolution: org.apache.struts:struts2-core:2.5.13</p>
</p>
</details>
<p></p>
| True | CVE-2017-9804 (High) detected in xwork-core-2.3.20.jar, struts2-core-2.3.20.jar - autoclosed - ## CVE-2017-9804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>xwork-core-2.3.20.jar</b>, <b>struts2-core-2.3.20.jar</b></p></summary>
<p>
<details><summary><b>xwork-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/">http://struts.apache.org/</a></p>
<p>Path to dependency file: struts-2.3.20/plugins/junit/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/reposit
ory/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.20.jar (Root Library)
- :x: **xwork-core-2.3.20.jar** (Vulnerable Library)
</details>
<details><summary><b>struts2-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: struts-2.3.20/plugins/sitemesh/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/stru
ts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar,canner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.20.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gate5/struts-2.3.20/commit/1d3a9da2b49a075b9122e05e19a483fc66b5aaf4">1d3a9da2b49a075b9122e05e19a483fc66b5aaf4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Struts 2.3.7 through 2.3.33 and 2.5 through 2.5.12, if an application allows entering a URL in a form field and built-in URLValidator is used, it is possible to prepare a special URL which will be used to overload server process when performing validation of the URL. NOTE: this vulnerability exists because of an incomplete fix for S2-047 / CVE-2017-7672.
<p>Publish Date: 2017-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9804>CVE-2017-9804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-9804">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-9804</a></p>
<p>Release Date: 2017-09-20</p>
<p>Fix Resolution: org.apache.struts:struts2-core:2.5.13</p>
</p>
</details>
<p></p>
| non_priority | cve high detected in xwork core jar core jar autoclosed cve high severity vulnerability vulnerable libraries xwork core jar core jar xwork core jar apache struts library home page a href path to dependency file struts plugins junit pom xml path to vulnerable library home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar 
home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar canner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar dependency hierarchy core jar root library x xwork core jar vulnerable library core jar apache struts path to dependency file struts plugins sitemesh pom xml path to vulnerable library canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache 
struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar canner repository org apache struts core core jar dependency hierarchy x core jar vulnerable library found in head commit a href vulnerability details in apache struts through and through if an application allows entering a url in a form field and built in urlvalidator is used it is possible to prepare a special url which will be used to overload server process when performing validation of the url note this vulnerability exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache struts core | 0 |
334,319 | 29,830,798,491 | IssuesEvent | 2023-06-18 08:44:32 | mcollovati/quarkus-hilla | https://api.github.com/repos/mcollovati/quarkus-hilla | opened | Validation workflow: run test steps in parallel | testing internal-improvement | Test steps (test, end-to-end dev, end-to-end prod) take more than 1 minute to complete.
Update the workflow to run them in parallel to speed up the validation | 1.0 | Validation workflow: run test steps in parallel - Test steps (test, end-to-end dev, end-to-end prod) take more than 1 minute to complete.
Update the workflow to run them in parallel to speed up the validation | non_priority | validation workflow run test steps in parallel test steps test end to end dev end to end prod take more than minute to complete update the workflow to run them in parallel to speed up the validation | 0 |
11,913 | 5,108,203,401 | IssuesEvent | 2017-01-05 17:00:38 | semihalf-berestovskyy-andriy/test2 | https://api.github.com/repos/semihalf-berestovskyy-andriy/test2 | closed | Performance testing | build & test enhancement | Note: the issue was imported automatically from Bugzilla with bugzilla2issues.py tool
# Bugzilla Bug ID: 18
Date: 2015-06-03 08:41:41 +0200
From: Bogdan Pricope <bogdan.pricope@enea.com>
To: Andras Berger <andras.berger@nokia.com>
Last updated: 2015-06-16 15:08:15 +0200
## Bugzilla Comment ID: 23
Date: 2015-06-03 08:41:41 +0200
From: Bogdan Pricope <bogdan.pricope@enea.com>
Performance testing
Measure traffic performance of OpenFP
Note:
Find tools for testing
State:
Keep and break down into smaller tasks
## Bugzilla Comment ID: 46
Date: 2015-06-03 15:41:15 +0200
From: Andras Berger <andras.berger@nokia.com>
First round of measurements on Octeon3 is done (with pre-release odp). Results sent to mailing list. Areas to be improved discussed.
Next round of measurement is underway with new pre-release version of odp-octeon3.
## Bugzilla Comment ID: 50
Date: 2015-06-16 15:07:54 +0200
From: Andras Berger <andras.berger@nokia.com>
2nd round of testing was completed using odp-octeon drop #2. There was a big improvement in throughput.
UDP socket / event interface performance improved, especially scalability.
| 1.0 | Performance testing - Note: the issue was imported automatically from Bugzilla with bugzilla2issues.py tool
# Bugzilla Bug ID: 18
Date: 2015-06-03 08:41:41 +0200
From: Bogdan Pricope <bogdan.pricope@enea.com>
To: Andras Berger <andras.berger@nokia.com>
Last updated: 2015-06-16 15:08:15 +0200
## Bugzilla Comment ID: 23
Date: 2015-06-03 08:41:41 +0200
From: Bogdan Pricope <bogdan.pricope@enea.com>
Performance testing
Measure traffic performance of OpenFP
Note:
Find tools for testing
State:
Keep and break down into smaller tasks
## Bugzilla Comment ID: 46
Date: 2015-06-03 15:41:15 +0200
From: Andras Berger <andras.berger@nokia.com>
First round of measurements on Octeon3 is done (with pre-release odp). Results sent to mailing list. Areas to be improved discussed.
Next round of measurement is underway with new pre-release version of odp-octeon3.
## Bugzilla Comment ID: 50
Date: 2015-06-16 15:07:54 +0200
From: Andras Berger <andras.berger@nokia.com>
2nd round of testing was completed using odp-octeon drop #2. There was a big improvement in throughput.
UDP socket / event interface performance improved, especially scalability.
| non_priority | performance testing note the issue was imported automatically from bugzilla with py tool bugzilla bug id date from bogdan pricope to andras berger last updated bugzilla comment id date from bogdan pricope performance testing measure traffic performance of openfp note find tools for testing state keep and break down into smaller tasks bugzilla comment id date from andras berger first round of measurements on is done with pre release odp results sent to mailing list areas to be improved discussed next round of measurement is underway with new pre release version of odp bugzilla comment id date from andras berger round of testing was comleted using odp octeon drop there was a big improvement in throughput udp socket event interface performance improved especially scalability | 0 |
104,747 | 13,109,948,228 | IssuesEvent | 2020-08-04 19:41:06 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | TimePicker is not correct in RTL languages | a: internationalization f: date/time picker f: material design found in release: 1.21 framework has reproducible steps | 
To reproduce, Set app locale to a right-to-left language (`fa` for example) then open timepicker.
Besides that "Select Time" which not translated, Why is that the time of day format is in `mm:hh` format instead of `hh:mm`?
RTL doesn't mean everything should be written right to left, Those languages still use `hh:mm` for time format.
Also it wasn't an issue in previous TimePicker widget.
**Flutter Doctor**
[√] Flutter (Channel beta, 1.20.0-7.3.pre, on Microsoft Windows [Version 10.0.19041.388], locale en-US)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
X Android license status unknown.
Try re-installing or updating your Android SDK Manager.
See https://developer.android.com/studio/#downloads or visit
https://flutter.dev/docs/get-started/install/windows#android-setup for detailed instructions.
[√] Chrome - develop for the web
[√] Android Studio (version 4.0)
[√] VS Code, 64-bit edition (version 1.47.3)
[√] Connected device (4 available) | 1.0 | TimePicker is not correct in RTL languages - 
To reproduce, Set app locale to a right-to-left language (`fa` for example) then open timepicker.
Besides that "Select Time" which not translated, Why is that the time of day format is in `mm:hh` format instead of `hh:mm`?
RTL doesn't mean everything should be written right to left, Those languages still use `hh:mm` for time format.
Also it wasn't an issue in previous TimePicker widget.
**Flutter Doctor**
[√] Flutter (Channel beta, 1.20.0-7.3.pre, on Microsoft Windows [Version 10.0.19041.388], locale en-US)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
X Android license status unknown.
Try re-installing or updating your Android SDK Manager.
See https://developer.android.com/studio/#downloads or visit
https://flutter.dev/docs/get-started/install/windows#android-setup for detailed instructions.
[√] Chrome - develop for the web
[√] Android Studio (version 4.0)
[√] VS Code, 64-bit edition (version 1.47.3)
[√] Connected device (4 available) | non_priority | timepicker is not correct in rtl languages to reproduce set app locale to a right to left language fa for example then open timepicker besides that select time which not translated why is that the time of day format is in mm hh format instead of hh mm rtl doesn t mean everything should be written right to left those languages still use hh mm for time format also it wasn t an issue in previous timepicker widget flutter doctor flutter channel beta pre on microsoft windows locale en us android toolchain develop for android devices android sdk version x android license status unknown try re installing or updating your android sdk manager see or visit for detailed instructions chrome develop for the web android studio version vs code bit edition version connected device available | 0 |
193,541 | 6,885,802,832 | IssuesEvent | 2017-11-21 17:11:19 | RetroWoW/PTR | https://api.github.com/repos/RetroWoW/PTR | closed | Hunter pets lose all Buffs/Debuffs when called upon | Class - Hunter Fixed - On PTR Priority - Low | **Description**: Title
**Current behavior**: Buffs/debuffs removed
**Expected behavior**: Buffs/debuffs should be kept
**Steps to reproduce the problem**:
1. Use random buff on pet, dismiss it and then call it.
2.
3.
**Source:**
https://github.com/elysium-project/server/issues/470 Elysium fixed it.
| 1.0 | Hunter pets lose all Buffs/Debuffs when called upon - **Description**: Title
**Current behavior**: Buffs/debuffs removed
**Expected behavior**: Buffs/debuffs should be kept
**Steps to reproduce the problem**:
1. Use random buff on pet, dismiss it and then call it.
2.
3.
**Source:**
https://github.com/elysium-project/server/issues/470 Elysium fixed it.
| priority | hunter pets lose all buffs debuffs when called upon description title current behavior buffs debuffs removed expected behavior buffs debuffs should be kept steps to reproduce the problem use random buff on pet dismiss it and then call it source elysium fixed it | 1 |
104,420 | 22,659,338,707 | IssuesEvent | 2022-07-02 00:11:02 | IbrahimImanol/TF-201716094-20191E650-20201C579 | https://api.github.com/repos/IbrahimImanol/TF-201716094-20191E650-20201C579 | closed | Error correction | documentation code | Final tasks for the application: error correction and fine-tuning.
 | 1.0 | Error correction - Final tasks for the application: error correction and fine-tuning.
 | non_priority | error correction final tasks for the application error correction and fine tuning | 0 |
277,065 | 24,046,136,388 | IssuesEvent | 2022-09-16 08:30:31 | hirosystems/stacks-wallet-web | https://api.github.com/repos/hirosystems/stacks-wallet-web | opened | Test: contract interface details are displayed in wallet | 🤖 Automated test | [`v3.17.0`](https://github.com/hirosystems/stacks-wallet-web/releases/tag/v3.17.0) saw many users run into a blank screen issue, where an error was thrown because of missing/wrong contract interface arguments.
This highlights a gap in our test suite. We should prioritise writing a test to make sure this functionality is covered. | 1.0 | Test: contract interface details are displayed in wallet - [`v3.17.0`](https://github.com/hirosystems/stacks-wallet-web/releases/tag/v3.17.0) saw many users run into a blank screen issue, where an error was thrown because of missing/wrong contract interface arguments.
This highlights a gap in our test suite. We should prioritise writing a test to make sure this functionality is covered. | non_priority | test contract interface details are displayed in wallet saw many users run into a blank screen issue where an error was thrown because of missing wrong contract interface arguments this highlights a gap in our test suite we should prioritise writing a test to make sure this functionality is covered | 0 |
205,083 | 15,591,067,968 | IssuesEvent | 2021-03-18 10:02:13 | Tencent/bk-ci | https://api.github.com/repos/Tencent/bk-ci | closed | bug: when plugin parameters go through variable substitution, jackson can turn a string like "[133]-[sid-tocqc]-[sid-zhiliang-test1]" into a list object with truncated content | area/ci/backend kind/bug kind/feat/tech stage/test stage/uat uat/passed | jackson can convert a string like "[133]-[sid-tocqc]-[sid-zhiliang-test1]", which is not a JSON string, into a List object, but the List's content is [133], so it gets truncated | 1.0 | bug: when plugin parameters go through variable substitution, jackson can turn a string like "[133]-[sid-tocqc]-[sid-zhiliang-test1]" into a list object with truncated content - jackson can convert a string like "[133]-[sid-tocqc]-[sid-zhiliang-test1]", which is not a JSON string, into a List object, but the List's content is [133], so it gets truncated | non_priority | bug when plugin parameters go through variable substitution jackson can turn a string like into a list object with truncated content jackson can convert a string like which is not a json string into a list object but the list s content is so it gets truncated | 0 |
417,731 | 12,178,310,276 | IssuesEvent | 2020-04-28 08:48:00 | web-platform-tests/wpt | https://api.github.com/repos/web-platform-tests/wpt | closed | Using the triggers/* branches to trigger Taskcluster often fails | Taskcluster infra priority:backlog | I've used the instructions in https://web-platform-tests.org/running-tests/from-ci.html to trigger full runs on Taskcluster perhaps 5 times since I added the support.
On 2 or 3 occasions the Taskcluster runs failed to be started at all. Here's the most recent case:
https://github.com/web-platform-tests/wpt/commit/42b4a3fa60#commitcomment-35182287
That was triggered by me doing this:
```
$ git push --force-with-lease origin origin/epochs/daily:triggers/chrome_beta
Total 0 (delta 0), reused 0 (delta 0)
To github.com:web-platform-tests/wpt.git
820f0f8604..42b4a3fa60 origin/epochs/daily -> triggers/chrome_beta
```
There are 229 commits in that range, not sure if big changes could still contribute to this. | 1.0 | Using the triggers/* branches to trigger Taskcluster often fails - I've used the instructions in https://web-platform-tests.org/running-tests/from-ci.html to trigger full runs on Taskcluster perhaps 5 times since I added the support.
On 2 or 3 occasions the Taskcluster runs failed to be started at all. Here's the most recent case:
https://github.com/web-platform-tests/wpt/commit/42b4a3fa60#commitcomment-35182287
That was triggered by me doing this:
```
$ git push --force-with-lease origin origin/epochs/daily:triggers/chrome_beta
Total 0 (delta 0), reused 0 (delta 0)
To github.com:web-platform-tests/wpt.git
820f0f8604..42b4a3fa60 origin/epochs/daily -> triggers/chrome_beta
```
There are 229 commits in that range, not sure if big changes could still contribute to this. | priority | using the triggers branches to trigger taskcluster often fails i ve used the instructions in to trigger full runs on taskcluster perhaps times since i added the support on or occasions the taskcluster runs failed to be started at all here s the most recent case that was triggered by me doing this git push force with lease origin origin epochs daily triggers chrome beta total delta reused delta to github com web platform tests wpt git origin epochs daily triggers chrome beta there are commits in that range not sure if big changes could still contribute to this | 1 |
328,626 | 9,997,678,109 | IssuesEvent | 2019-07-12 05:40:14 | horizontalsystems/HS-Design | https://api.github.com/repos/horizontalsystems/HS-Design | opened | introduce "failed" transaction state | priority | It's a transaction that was included in the blockchain but has failed status. At the moment such transactions appear as pending which is incorrect and misleading.
What we need:
- failed state on transactions list
- there should be an action button on such transaction. click on that button should trigger bottom controller with further actions
-- Title: Transaction Failed
-- text: "This transaction has failed and did not go through. The unstoppable can try resending it with a higher a gas limit. Estimate cost: [high_gas_limit] x [gas_price]"
-- buttons: Resend / Cancel.
| 1.0 | introduce "failed" transaction state - It's a transaction that was included in the blockchain but has failed status. At the moment such transactions appear as pending which is incorrect and misleading.
What we need:
- failed state on transactions list
- there should be an action button on such transaction. click on that button should trigger bottom controller with further actions
-- Title: Transaction Failed
-- text: "This transaction has failed and did not go through. The unstoppable can try resending it with a higher a gas limit. Estimate cost: [high_gas_limit] x [gas_price]"
-- buttons: Resend / Cancel.
| priority | introduce failed transaction state it s a transaction that was included in the blockchain but has failed status at the moment such transactions appear as pending which is incorrect and misleading what we need failed state on transactions list there should be an action button on such transaction click on that button should trigger bottom controller with further actions title transaction failed text this transaction has failed and did not go through the unstoppable can try resending it with a higher a gas limit estimate cost x buttons resend cancel | 1 |
391,303 | 11,572,047,358 | IssuesEvent | 2020-02-20 22:57:46 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | C++ auth library: Add x-goog-user-project metadata header for outgoing requests | kind/enhancement priority/P2 | <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### Is your feature request related to a problem? Please describe.
When using 3 legged OAuth credentials(type=authorized_user), the requests are associated with a single project (in gcloud sdk's case, it's gcloud project). Users experienced various issues like quota, rate limiting, etc.
There is an on-going effort to avoid this problem by:
1) gcloud sdk will soon start adding `quota_project` field in the auth json file.
2) on the server side, if `x-goog-user-project` metadata is present, treat that project as the quota project.
3) client library should pick the `quota_project` field from the auth json file and set the value as `x-goog-user-project` metadata.
### Describe the solution you'd like
For the point 3) above for C++ client libraries, we need some changes in gRPC's auth library.
The auth library picks `quota_project` field from the auth json file with type "authorized_user" , and set that value as `x-goog-user-project` metadata for every outgoing requests.
| 1.0 | C++ auth library: Add x-goog-user-project metadata header for outgoing requests - <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### Is your feature request related to a problem? Please describe.
When using 3 legged OAuth credentials(type=authorized_user), the requests are associated with a single project (in gcloud sdk's case, it's gcloud project). Users experienced various issues like quota, rate limiting, etc.
There is an on-going effort to avoid this problem by:
1) gcloud sdk will soon start adding `quota_project` field in the auth json file.
2) on the server side, if `x-goog-user-project` metadata is present, treat that project as the quota project.
3) client library should pick the `quota_project` field from the auth json file and set the value as `x-goog-user-project` metadata.
### Describe the solution you'd like
For the point 3) above for C++ client libraries, we need some changes in gRPC's auth library.
The auth library picks `quota_project` field from the auth json file with type "authorized_user" , and set that value as `x-goog-user-project` metadata for every outgoing requests.
| priority | c auth library add x goog user project metadata header for outgoing requests this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers here grpc io mailing list stackoverflow with grpc tag issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g is your feature request related to a problem please describe when using legged oauth credentials type authorized user the requests are associated with a single project in gcloud sdk s case it s gcloud project users experienced various issues like quota rate limiting etc there is an on going effort to avoid this problem by gcloud sdk will soon start adding quota project field in the auth json file on the server side if x goog user project metadata is present treat that project as the quota project client library should pick the quota project field from the auth json file and set the value as x goog user project metadata describe the solution you d like for the point above for c client libraries we need some changes in grpc s auth library the auth library picks quota project field from the auth json file with type authorized user and set that value as x goog user project metadata for every outgoing requests | 1 |
188,471 | 15,163,030,539 | IssuesEvent | 2021-02-12 11:32:22 | Technologicat/unpythonic | https://api.github.com/repos/Technologicat/unpythonic | opened | New release plan - update the milestones! | documentation enhancement | Due to the ongoing development of the third-generation macro expander [`mcpyrate`](https://github.com/Technologicat/mcpyrate/), it's been a while since I've touched the codebase of `unpythonic`. Having the new advanced features of `mcpyrate` available is a game-changer for writing macro-enabled code. This changes development priorities in the short-term future of `unpythonic`.
Summary of new release plan:
1. **Release 0.14.3 as-is.**
- It's working just fine; the only thing it lacks is some polish in the documentation of the latest features.
  - It's important to get 0.14.3 packaged, because the code has changed a lot since 0.14.2.
- Releasing 0.14.3 will finally get the macro-enabled unit test framework `unpythonic.test.fixtures` officially published.
2. **Update the remaining milestones to reflect this new release plan.**
3. **Perform any final tune-up for the 0.14.x series.**
- This includes polishing the docs, as well as fixing any small things that can be done quickly. Keeping it small and simple.
- This small update will become 0.14.4, **the final release in the 0.14.x series**.
4. **Start development on `unpythonic` 0.15.0.**
- Migrate all macro code to `mcpyrate`, see #72. Taking advantage of `mcpyrate` is why we need 0.15.0 now instead of later.
- **The 0.15.x series will use `mcpyrate`; version 0.15.0 will drop support for MacroPy.**
- Add support for Python 3.8, see #16.
- Python 3.9 and 3.10 support would be nice-to-haves while at it, but GTS hasn't updated yet.
- What I know at this point regarding AST changes in 3.9 is documented in https://github.com/Technologicat/mcpyrate/issues/20.
- Drop support for Python 3.4 and 3.5. We should be able to still support 3.6 and 3.7; the CI process will catch any breakage even though development now occurs on 3.8.
- Review the plans for API changes already documented here in the 0.15.0 milestone. Keep any small ones in 0.15.0; for any that require major development effort, re-schedule to 0.16.0.
- I'd like to get all near-term breaking changes out of the way sooner rather than later. I'd also like to avoid a monolithic mega-update that changes everything at once and takes forever to develop. This means the 0.15.x series may be short-lived, such that 0.16.0 may come soon after 0.15.0. But for 0.16.0, I don't plan to break anything that, as of this writing, is not already documented in this issue tracker.
- The exact division of features between 0.15.x and 0.16.x remains open as of this writing. 0.15.x might get the features originally planned for later releases in the 0.14.x series, or they might be moved to 0.16.x. This depends on which features, once 0.15.0 is out, seem reasonable to develop next. (Since `unpythonic` is an experiment, *reasonable* roughly coincides with *interesting*.)
| 1.0 | New release plan - update the milestones! - Due to the ongoing development of the third-generation macro expander [`mcpyrate`](https://github.com/Technologicat/mcpyrate/), it's been a while since I've touched the codebase of `unpythonic`. Having the new advanced features of `mcpyrate` available is a game-changer for writing macro-enabled code. This changes development priorities in the short-term future of `unpythonic`.
Summary of new release plan:
1. **Release 0.14.3 as-is.**
- It's working just fine; the only thing it lacks is some polish in the documentation of the latest features.
  - It's important to get 0.14.3 packaged, because the code has changed a lot since 0.14.2.
- Releasing 0.14.3 will finally get the macro-enabled unit test framework `unpythonic.test.fixtures` officially published.
2. **Update the remaining milestones to reflect this new release plan.**
3. **Perform any final tune-up for the 0.14.x series.**
- This includes polishing the docs, as well as fixing any small things that can be done quickly. Keeping it small and simple.
- This small update will become 0.14.4, **the final release in the 0.14.x series**.
4. **Start development on `unpythonic` 0.15.0.**
- Migrate all macro code to `mcpyrate`, see #72. Taking advantage of `mcpyrate` is why we need 0.15.0 now instead of later.
- **The 0.15.x series will use `mcpyrate`; version 0.15.0 will drop support for MacroPy.**
- Add support for Python 3.8, see #16.
- Python 3.9 and 3.10 support would be nice-to-haves while at it, but GTS hasn't updated yet.
- What I know at this point regarding AST changes in 3.9 is documented in https://github.com/Technologicat/mcpyrate/issues/20.
- Drop support for Python 3.4 and 3.5. We should be able to still support 3.6 and 3.7; the CI process will catch any breakage even though development now occurs on 3.8.
- Review the plans for API changes already documented here in the 0.15.0 milestone. Keep any small ones in 0.15.0; for any that require major development effort, re-schedule to 0.16.0.
- I'd like to get all near-term breaking changes out of the way sooner rather than later. I'd also like to avoid a monolithic mega-update that changes everything at once and takes forever to develop. This means the 0.15.x series may be short-lived, such that 0.16.0 may come soon after 0.15.0. But for 0.16.0, I don't plan to break anything that, as of this writing, is not already documented in this issue tracker.
- The exact division of features between 0.15.x and 0.16.x remains open as of this writing. 0.15.x might get the features originally planned for later releases in the 0.14.x series, or they might be moved to 0.16.x. This depends on which features, once 0.15.0 is out, seem reasonable to develop next. (Since `unpythonic` is an experiment, *reasonable* roughly coincides with *interesting*.)
| non_priority | new release plan update the milestones due to the ongoing development of the third generation macro expander it s been a while since i ve touched the codebase of unpythonic having the new advanced features of mcpyrate available is a game changer for writing macro enabled code this changes development priorities in the short term future of unpythonic summary of new release plan release as is it s working just fine the only thing it lacks is some polish in the documentation of the latest features it s important to packaged because the code has changed a lot since releasing will finally get the macro enabled unit test framework unpythonic test fixtures officially published update the remaining milestones to reflect this new release plan perform any final tune up for the x series this includes polishing the docs as well as fixing any small things that can be done quickly keeping it small and simple this small update will become the final release in the x series start development on unpythonic migrate all macro code to mcpyrate see taking advantage of mcpyrate is why we need now instead of later the x series will use mcpyrate version will drop support for macropy add support for python see python and support would be nice to haves while at it but gts hasn t updated yet what i know at this point regarding ast changes in is documented in drop support for python and we should be able to still support and the ci process will catch any breakage even though development now occurs on review the plans for api changes already documented here in the milestone keep any small ones in for any that require major development effort re schedule to i d like to get all near term breaking changes out of the way sooner rather than later i d also like to avoid a monolithic mega update that changes everything at once and takes forever to develop this means the x series may be short lived such that may come soon after but for i don t plan to break anything that as of this 
writing is not already documented in this issue tracker the exact division of features between x and x remains open as of this writing x might get the features originally planned for later releases in the x series or they might be moved to x this depends on which features once is out seem reasonable to develop next since unpythonic is an experiment reasonable roughly coincides with interesting | 0 |
634,936 | 20,374,692,054 | IssuesEvent | 2022-02-21 14:33:47 | AY2122S2-CS2103T-T09-3/tp | https://api.github.com/repos/AY2122S2-CS2103T-T09-3/tp | opened | Update Readme Photo | type.Task priority.High | ## Details
1. Create a UI mockup of the final product.
2. Save the image of the UI in `docs/images/Ui.png`
## Notes
1. Limit the file to contain one screenshot/mockup only and ensure the new image is roughly the same height x width proportions as the original one.
2. If you did the above update correctly, UI mock up and profile photos should appear in your project website and this [Project List Page](https://nus-cs2103-ay2122s2.github.io/website/admin/teamList.html). | 1.0 | Update Readme Photo - ## Details
1. Create a UI mockup of the final product.
2. Save the image of the UI in `docs/images/Ui.png`
## Notes
1. Limit the file to contain one screenshot/mockup only and ensure the new image is roughly the same height x width proportions as the original one.
2. If you did the above update correctly, UI mock up and profile photos should appear in your project website and this [Project List Page](https://nus-cs2103-ay2122s2.github.io/website/admin/teamList.html). | priority | update readme photo details create a ui mockup of the final product save the image of the ui in docs images ui png notes limit the file to contain one screenshot mockup only and ensure the new image is roughly the same height x width proportions as the original one if you did the above update correctly ui mock up and profile photos should appear in your project website and this | 1 |
134,115 | 19,087,682,145 | IssuesEvent | 2021-11-29 08:34:57 | Joystream/atlas | https://api.github.com/repos/Joystream/atlas | opened | Adjust the Action bar component to support more use cases | design design-system | ### Description
Based on info gathered from @TCzechowski, from the Transaction confirmation view.
### Figma link
_No response_
### How urgent this is?
This will block me soon | 2.0 | Adjust the Action bar component to support more use cases - ### Description
Based on info gathered from @TCzechowski, from the Transaction confirmation view.
### Figma link
_No response_
### How urgent this is?
This will block me soon | non_priority | adjust the action bar component to support more use cases description based on info gathered from tczechowski from the transaction confirmation view figma link no response how urgent this is this will block me soon | 0 |
370,331 | 10,928,117,127 | IssuesEvent | 2019-11-22 18:16:51 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | selecting multiple vault credentials with the same description fails | component:api priority:medium state:needs_info type:bug | ##### ISSUE TYPE
- Bug Report
##### STEPS TO REPRODUCE
Create two vault creds with the same description. Try to assign both to a job template in UI and save
##### EXPECTED RESULTS
It works!
##### ACTUAL RESULTS
It doesn't work, api has failure
---
@AlexSCorey has more info | 1.0 | selecting multiple vault credentials with the same description fails - ##### ISSUE TYPE
- Bug Report
##### STEPS TO REPRODUCE
Create two vault creds with the same description. Try to assign both to a job template in UI and save
##### EXPECTED RESULTS
It works!
##### ACTUAL RESULTS
It doesn't work, api has failure
---
@AlexSCorey has more info | priority | selecting multiple vault credentials with the same description fails issue type bug report steps to reproduce create two vault creds with the same description try to assign both to a job template in ui and save expected results it works actual results it doesn t work api has failure alexscorey has more info | 1 |
45,293 | 12,706,210,764 | IssuesEvent | 2020-06-23 06:43:34 | vim/vim | https://api.github.com/repos/vim/vim | closed | setreg(..., ..., 'al') keeps appending newlines | Priority-Medium auto-migrated defect patch | ```
What steps will reproduce the problem?
1. Select a charwise text into a register
2. Do :call setreg('"', '', 'al') to transform it into a linewise register
3. Paste, everything works (check with :reg ")
4. Do :call setreg('"', '', 'al') to transform it into a linewise register again
5. Paste, now there's 2 newlines in the register (check with :reg ")
What is the expected output? What do you see instead?
I would expect that if a register is already linewise, it wouldn't get any
extra newlines.
What version of the product are you using? On what operating system?
Vim 4.7.488 on OSX 10.10.1
Please provide any additional information below.
From the documentation I could not find out whether this was intentional or
not. I suspect not as in the original conversation when this feature was added
in 2002, Bram mentioned that it would be neat if it could be used to save and
restore registers.
```
Original issue reported on code.google.com by `nicolash...@gmail.com` on 27 Jan 2015 at 1:16
| 1.0 | setreg(..., ..., 'al') keeps appending newlines - ```
What steps will reproduce the problem?
1. Select a charwise text into a register
2. Do :call setreg('"', '', 'al') to transform it into a linewise register
3. Paste, everything works (check with :reg ")
4. Do :call setreg('"', '', 'al') to transform it into a linewise register again
5. Paste, now there's 2 newlines in the register (check with :reg ")
What is the expected output? What do you see instead?
I would expect that if a register is already linewise, it wouldn't get any
extra newlines.
What version of the product are you using? On what operating system?
Vim 4.7.488 on OSX 10.10.1
Please provide any additional information below.
From the documentation I could not find out whether this was intentional or
not. I suspect not as in the original conversation when this feature was added
in 2002, Bram mentioned that it would be neat if it could be used to save and
restore registers.
```
Original issue reported on code.google.com by `nicolash...@gmail.com` on 27 Jan 2015 at 1:16
| non_priority | setreg al keeps appending newlines what steps will reproduce the problem select a charwise text into a register do call setreg al to transform it into a linewise register paste everything works check with reg do call setreg al to transform it into a linewise register again paste now there s newlines in the register check with reg what is the expected output what do you see instead i would expect that if a register is already linewise it wouldn t get any extra newlines what version of the product are you using on what operating system vim on osx please provide any additional information below from the documentation i could not find out whether this was intentional or not i suspect not as in the original conversation when this feature was added in bram mentioned that it would be neat if it could be used to save and restore registers original issue reported on code google com by nicolash gmail com on jan at | 0 |
609,436 | 18,873,139,773 | IssuesEvent | 2021-11-13 14:55:57 | alerta/alerta-webui | https://api.github.com/repos/alerta/alerta-webui | closed | Unable to navigate to subsequent Blackout pages | bug priority: medium | **Issue Summary**
When there are enough blackouts to require multiple pages in the Alerta UI the `+` button at the bottom right blocks the navigation to the next page
**Environment**
- OS: Linux
- API version: 8.6.0
- Deployment: self-hosted
- For self-hosted, WSGI environment: nginx/uwsgi
- Database: Postgres
- Server config:
Auth enabled? Yes
Auth provider? saml2 (okta)
Customer views? No
- web UI version: 8.5.0
**To Reproduce**
Steps to reproduce the behavior:
1. Create enough blackouts to require more than 1 page
2. Click on `blackouts` link
3. Scroll down to bottom right of the page to navigate to page 2 of blackouts
4. The issue is the yellow `+` for creating a blackout in the UI obstructs the navigation to the 2nd page
I realize that you can display more per page, but at least for me, this causes a load on my computer when loading many many items. We'd like to keep some blackout information around for about a month before deleting it so potentially there could be a lot.
For web app issues, include any web browser JavaScript console errors.
**Expected behavior**
The create button could be placed in a position that allows me to get to the underlying navigation link.
**Screenshots**

**Additional context**
Add any other context about the problem here.
NOTE: Please provide as much information about your issue as possible.
Failure to provide basic details about your specific environment make
it impossible to know if an issue has already been fixed, can delay a
response and may result in your issue being closed without a resolution.
| 1.0 | Unable to navigate to subsequent Blackout pages - **Issue Summary**
When there are enough blackouts to require multiple pages in the Alerta UI the `+` button at the bottom right blocks the navigation to the next page
**Environment**
- OS: Linux
- API version: 8.6.0
- Deployment: self-hosted
- For self-hosted, WSGI environment: nginx/uwsgi
- Database: Postgres
- Server config:
Auth enabled? Yes
Auth provider? saml2 (okta)
Customer views? No
- web UI version: 8.5.0
**To Reproduce**
Steps to reproduce the behavior:
1. Create enough blackouts to require more than 1 page
2. Click on `blackouts` link
3. Scroll down to bottom right of the page to navigate to page 2 of blackouts
4. The issue is the yellow `+` for creating a blackout in the UI obstructs the navigation to the 2nd page
I realize that you can display more per page, but at least for me, this causes a load on my computer when loading many many items. We'd like to keep some blackout information around for about a month before deleting it so potentially there could be a lot.
For web app issues, include any web browser JavaScript console errors.
**Expected behavior**
The create button could be placed in a position that allows me to get to the underlying navigation link.
**Screenshots**

**Additional context**
Add any other context about the problem here.
NOTE: Please provide as much information about your issue as possible.
Failure to provide basic details about your specific environment make
it impossible to know if an issue has already been fixed, can delay a
response and may result in your issue being closed without a resolution.
| priority | unable to navigate to subsequent blackout pages issue summary when there are enough blackouts to require multiple pages in the alerta ui the button at the bottom right blocks the navigation to the next page environment os linux api version deployment self hosted for self hosted wsgi environment nginx uwsgi database postgres server config auth enabled yes auth provider okta customer views no web ui version to reproduce steps to reproduce the behavior create enough blackouts to require more than page click on blackouts link scroll down to bottom right of the page to navigate to page of blackouts the issue is the yellow for creating a blackout in the ui obstructs the navigation to the page i realize that you can display more per page but at least for me this causes a load on my computer when loading many many items we d like to keep some blackout information around for about a month before deleting it so potentially there could be a lot for web app issues include any web browser javascript console errors expected behavior the create button could be placed in a position that allows me to get to the underlying navigation link screenshots additional context add any other context about the problem here note please provide as much information about your issue as possible failure to provide basic details about your specific environment make it impossible to know if an issue has already been fixed can delay a response and may result in your issue being closed without a resolution | 1 |
231,877 | 7,644,286,781 | IssuesEvent | 2018-05-08 15:04:18 | kcgrimes/grimes-simple-revive | https://api.github.com/repos/kcgrimes/grimes-simple-revive | closed | Move init.sqf definitions internally to reduce footprint | Priority: Low Status: Completed Type: Feature | Move the locality-related definitions from the init.sqf installation requirements to G_Revive_Init_Vars.sqf, make the only requirement in init.sqf to be the execVM. | 1.0 | Move init.sqf definitions internally to reduce footprint - Move the locality-related definitions from the init.sqf installation requirements to G_Revive_Init_Vars.sqf, make the only requirement in init.sqf to be the execVM. | priority | move init sqf definitions internally to reduce footprint move the locality related definitions from the init sqf installation requirements to g revive init vars sqf make the only requirement in init sqf to be the execvm | 1 |
75,206 | 3,460,134,222 | IssuesEvent | 2015-12-19 00:26:58 | ForgeEssentials/ForgeEssentials | https://api.github.com/repos/ForgeEssentials/ForgeEssentials | closed | [1.7.10 build #935] Servervote module doesn't seem to register votes | accepted bug low-priority | Hi! Whatever I try, I can't get the voting module working correctly. The server lists do acknowledge votifier works but nothing shows up in-game when a vote is cast. Also I cannot give any rewards for voting since any kind of reward entered in the section makes the parser go error.
Config:
B:allowOfflineVotes=true
> offlineVoteList.txt contains votes but are never processed
B:flatFileLog=true
> nothing is written to vote.log
S:msgAll=%player has voted for this server on %service.
> Never any message is shown
S:msgVoter=Thanks for voting for our server!
> Same, never shown either
S:rewards <>
>
5x6789 failed
1x1 failed
10x264 failed
all result in parsing errors
Votifier {
S:hostname=
S:port=8192
}
I have no idea why this part exists. If I enter the public IP or DNS for my server, all server lists throw up a token error.
An error by the parser when a reward is entered:
>java.lang.NullPointerException
at net.minecraft.item.ItemStack.func_77977_a(ItemStack.java:361) ~[add.class:?]
at com.forgeessentials.servervote.ConfigServerVote.load(ConfigServerVote.java:88) ~[ConfigServerVote.class:?]
at com.forgeessentials.core.moduleLauncher.config.ConfigManager.load(ConfigManager.java:95) ~[ConfigManager.class:?]
at com.forgeessentials.core.moduleLauncher.ModuleLauncher.reloadConfigs(ModuleLauncher.java:135) ~[ModuleLauncher.class:?]
at com.forgeessentials.core.commands.CommandFEInfo.parse(CommandFEInfo.java:71) ~[CommandFEInfo.class:?]
at com.forgeessentials.core.commands.ParserCommandBase.func_71515_b(ParserCommandBase.java:17) ~[ParserCommandBase.class:?]
at net.minecraft.command.CommandHandler.func_71556_a(CommandHandler.java:94) [z.class:?]
at net.minecraft.server.dedicated.DedicatedServer.func_71333_ah(DedicatedServer.java:370) [lt.class:?]
at net.minecraft.server.dedicated.DedicatedServer.func_71190_q(DedicatedServer.java:335) [lt.class:?]
at net.minecraft.server.MinecraftServer.func_71217_p(MinecraftServer.java:547) [MinecraftServer.class:?]
at fastcraft.K.a(F:21) [fastcraft-1.22-ctest14.jar:?]
at fastcraft.H.aq(F:157) [fastcraft-1.22-ctest14.jar:?]
at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:396) [MinecraftServer.class:?]
Am I missing something? | 1.0 | [1.7.10 build #935] Servervote module doesn't seem to register votes - Hi! Whatever I try, I can't get the voting module working correctly. The server lists do acknowledge votifier works but nothing shows up in-game when a vote is cast. Also I cannot give any rewards for voting since any kind of reward entered in the section makes the parser go error.
Config:
B:allowOfflineVotes=true
> offlineVoteList.txt contains votes but are never processed
B:flatFileLog=true
> nothing is written to vote.log
S:msgAll=%player has voted for this server on %service.
> Never any message is shown
S:msgVoter=Thanks for voting for our server!
> Same, never shown either
S:rewards <>
>
5x6789 failed
1x1 failed
10x264 failed
all result in parsing errors
Votifier {
S:hostname=
S:port=8192
}
I have no idea why this part exists. If I enter the public IP or DNS for my server, all server lists throw up a token error.
An error by the parser when a reward is entered:
>java.lang.NullPointerException
at net.minecraft.item.ItemStack.func_77977_a(ItemStack.java:361) ~[add.class:?]
at com.forgeessentials.servervote.ConfigServerVote.load(ConfigServerVote.java:88) ~[ConfigServerVote.class:?]
at com.forgeessentials.core.moduleLauncher.config.ConfigManager.load(ConfigManager.java:95) ~[ConfigManager.class:?]
at com.forgeessentials.core.moduleLauncher.ModuleLauncher.reloadConfigs(ModuleLauncher.java:135) ~[ModuleLauncher.class:?]
at com.forgeessentials.core.commands.CommandFEInfo.parse(CommandFEInfo.java:71) ~[CommandFEInfo.class:?]
at com.forgeessentials.core.commands.ParserCommandBase.func_71515_b(ParserCommandBase.java:17) ~[ParserCommandBase.class:?]
at net.minecraft.command.CommandHandler.func_71556_a(CommandHandler.java:94) [z.class:?]
at net.minecraft.server.dedicated.DedicatedServer.func_71333_ah(DedicatedServer.java:370) [lt.class:?]
at net.minecraft.server.dedicated.DedicatedServer.func_71190_q(DedicatedServer.java:335) [lt.class:?]
at net.minecraft.server.MinecraftServer.func_71217_p(MinecraftServer.java:547) [MinecraftServer.class:?]
at fastcraft.K.a(F:21) [fastcraft-1.22-ctest14.jar:?]
at fastcraft.H.aq(F:157) [fastcraft-1.22-ctest14.jar:?]
at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:396) [MinecraftServer.class:?]
Am I missing something? | priority | servervote module doesn t seem to register votes hi whatever i try i can t get the voting module working correctly the server lists do acknowledge votifier works but nothing shows up in game when a vote is cast also i cannot give any rewards for voting since any kind of reward entered in the section makes the parser go error config b allowofflinevotes true offlinevotelist txt contains votes but are never processed b flatfilelog true nothing is written to vote log s msgall player has voted for this server on service never any message is shown s msgvoter thanks for voting for our server same never shown either s rewards failed failed failed all result in parsing errors votifier s hostname s port i have no idea why this part exist if i enter the public ip or dns for my server all server lists throw up token error an error by the parser when a reward is entered java lang nullpointerexception at net minecraft item itemstack func a itemstack java at com forgeessentials servervote configservervote load configservervote java at com forgeessentials core modulelauncher config configmanager load configmanager java at com forgeessentials core modulelauncher modulelauncher reloadconfigs modulelauncher java at com forgeessentials core commands commandfeinfo parse commandfeinfo java at com forgeessentials core commands parsercommandbase func b parsercommandbase java at net minecraft command commandhandler func a commandhandler java at net minecraft server dedicated dedicatedserver func ah dedicatedserver java at net minecraft server dedicated dedicatedserver func q dedicatedserver java at net minecraft server minecraftserver func p minecraftserver java at fastcraft k a f at fastcraft h aq f at net minecraft server minecraftserver run minecraftserver java am i missing something | 1 |
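The NullPointerException above comes from `ConfigServerVote.load` choking on a reward entry such as `5x6789`. A defensive parser would skip malformed or unknown `<count>x<itemID>` entries instead of aborting the whole config reload. The sketch below is a hypothetical illustration in Python (the mod itself is Java), and `known_items` stands in for the game's item registry:

```python
def parse_rewards(entries, known_items=None):
    """Parse reward strings of the form '<count>x<itemID>', skipping bad ones."""
    rewards, errors = [], []
    for entry in entries:
        count_str, sep, item_str = entry.partition("x")
        if sep != "x" or not count_str.isdigit() or not item_str.isdigit():
            errors.append(entry)          # malformed, e.g. "5x" or "abc"
            continue
        count, item_id = int(count_str), int(item_str)
        if known_items is not None and item_id not in known_items:
            errors.append(entry)          # unknown item id -> would have NPE'd
            continue
        rewards.append((count, item_id))
    return rewards, errors

# The record's examples: 5x6789 / 1x1 / 10x264 -- only ids present in the
# (hypothetical) item registry are accepted.
rewards, errors = parse_rewards(["5x6789", "1x1", "10x264"], known_items={1, 264})
```

With this approach, `5x6789` would be reported as an unknown item rather than crashing the config reload.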
143,407 | 13,062,815,021 | IssuesEvent | 2020-07-30 15:40:06 | plentico/plenti | https://api.github.com/repos/plentico/plenti | closed | Client SPA can't hydrate | documentation | During development on the client build process I sometimes break the hydration process which makes the client spa routing stop working (the url changes, but the page content stays the same). I've hit this a couple of times and always forget what the fix is so I figured I'd document here for future reference.
<details>
<summary>Error messages in browser</summary>
<br>
FireFox:
```
Uncaught (in promise) DOMException: Element.replaceWith: Cannot insert a Text as a child of a Document
```
Chrome:
```
Uncaught (in promise) DOMException: Failed to execute 'replaceWith' on 'Element': Nodes of type '#document-fragment' may not be inserted inside nodes of type '#document'.
at replaceContainer (http://localhost:3000/spa/ejected/main.js:20:20)
at http://localhost:3000/spa/ejected/main.js:25:13
```
</details>
This often happens because in the built components (`/public/spa/*`) are simply referencing the svelte component filename, not the actual component. So the `c() {t = text(whatever_component)},` will be different, but it doesn't actually have the context of the component. The build needs to pass the actual component to `svelte.compile()`, not just the filename or else the actual filename will be used as the component text. | 1.0 | Client SPA can't hydrate - During development on the client build process I sometimes break the hydration process which makes the client spa routing stop working (the url changes, but the page content stays the same). I've hit this a couple of times and always forget what the fix is so I figured I'd document here for future reference.
<details>
<summary>Error messages in browser</summary>
<br>
FireFox:
```
Uncaught (in promise) DOMException: Element.replaceWith: Cannot insert a Text as a child of a Document
```
Chrome:
```
Uncaught (in promise) DOMException: Failed to execute 'replaceWith' on 'Element': Nodes of type '#document-fragment' may not be inserted inside nodes of type '#document'.
at replaceContainer (http://localhost:3000/spa/ejected/main.js:20:20)
at http://localhost:3000/spa/ejected/main.js:25:13
```
</details>
This often happens because in the built components (`/public/spa/*`) are simply referencing the svelte component filename, not the actual component. So the `c() {t = text(whatever_component)},` will be different, but it doesn't actually have the context of the component. The build needs to pass the actual component to `svelte.compile()`, not just the filename or else the actual filename will be used as the component text. | non_priority | client spa can t hydrate during development on the client build process i sometimes break the hydration process which makes the client spa routing stop working the url changes but the page content stays the same i ve hit this a couple of times and always forget what the fix is so i figured i d document here for future reference error messages in browser firefox uncaught in promise domexception element replacewith cannot insert a text as a child of a document chrome uncaught in promise domexception failed to execute replacewith on element nodes of type document fragment may not be inserted inside nodes of type document at replacecontainer at this often happens because in the built components public spa are simply referencing the svelte component filename not the actual component so the c t text whatever component will be different but it doesn t actually have the context of the component the build needs to pass the actual component to svelte compile not just the filename or else the actual filename will be used as the component text | 0 |
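The fix described above — handing the compiler the component's contents rather than its filename — can be sketched generically. This is a hypothetical illustration: `compile_fn` stands in for `svelte.compile`, and Plenti's real build code is Go, not Python:

```python
from pathlib import Path

def build_component(compile_fn, component_path):
    # Reading the file first matters: passing the bare path would make the
    # filename *string* end up as the compiled component's text, which is
    # exactly the hydration bug described above.
    source = Path(component_path).read_text(encoding="utf-8")
    return compile_fn(source)
```

A quick check with an identity `compile_fn` confirms the compiler sees the source, not the path.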
104,853 | 11,424,350,590 | IssuesEvent | 2020-02-03 17:33:54 | clockelliptic/react-fullslide | https://api.github.com/repos/clockelliptic/react-fullslide | opened | More examples needed | documentation | The following examples / documentation is needed (please add to this list):
- Stopping scroll/swipe propagation in overflow containers as to prevent accidental page changes
- Implementing custom Nav buttons
- UI/UX techniques for handling page overflow on small screens | 1.0 | More examples needed - The following examples / documentation is needed (please add to this list):
- Stopping scroll/swipe propagation in overflow containers as to prevent accidental page changes
- Implementing custom Nav buttons
- UI/UX techniques for handling page overflow on small screens | non_priority | more examples needed the following examples documentation is needed please add to this list stopping scroll swipe propagation in overflow containers as to prevent accidental page changes implementing custom nav buttons ui ux techniques for handling page overflow on small screens | 0 |
361,197 | 10,705,516,579 | IssuesEvent | 2019-10-24 13:51:21 | kubeflow/kubeflow | https://api.github.com/repos/kubeflow/kubeflow | closed | Profile controller to beta | area/enterprise_readiness kind/feature priority/p0 | /kind feature
**Why you need this feature:**
We need to get the profile controller to beta quality in 0.7.
We should do an API review.
See also: #3092
We should figure out the required features and file separate issues for them (see #3654)
I think a big question is aligning terminology and implementation with the multi-tenancy story being developed in Kubernetes.
Here's the original proposal about the [multi-tenant CRD](https://docs.google.com/document/d/1hpJX5O_siMmNGMvIHvz8Pm7XOjJLz5g57XWrgwWarFw/edit#heading=h.c0uts5ftkk58)
@johnugeorge @kkasravi I think there was a more recent doc about soft multi-tenancy. Can you provide a link.
| 1.0 | Profile controller to beta - /kind feature
**Why you need this feature:**
We need to get the profile controller to beta quality in 0.7.
We should do an API review.
See also: #3092
We should figure out the required features and file separate issues for them (see #3654)
I think a big question is aligning terminology and implementation with the multi-tenancy story being developed in Kubernetes.
Here's the original proposal about the [multi-tenant CRD](https://docs.google.com/document/d/1hpJX5O_siMmNGMvIHvz8Pm7XOjJLz5g57XWrgwWarFw/edit#heading=h.c0uts5ftkk58)
@johnugeorge @kkasravi I think there was a more recent doc about soft multi-tenancy. Can you provide a link.
| priority | profile controller to beta kind feature why you need this feature we need to get the profile controller to beta quality in we should do an api review see also we should figure out the required features and file separate issues for them see i think a big question is aligning terminology and implementation with the multi tenancy story being developed in kubernetes here s the original proposal about the johnugeorge kkasravi i think there was a more recent doc about soft multi tenancy can you provide a link | 1 |
478,881 | 13,787,452,222 | IssuesEvent | 2020-10-09 04:54:06 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | [Admin Portal] Role permissions listing page pagination issue | 3.x.x Priority/Normal Type/Bug Type/React-UI | ### Description:
Paginated rows go completely haywire when going back and forth in page numbers.
### Steps to reproduce:
### Affected Product Version:
<!-- Members can use Affected/*** labels -->
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members--> | 1.0 | [Admin Portal] Role permissions listing page pagination issue - ### Description:
Paginated rows go completely haywire when going back and forth in page numbers.
### Steps to reproduce:
### Affected Product Version:
<!-- Members can use Affected/*** labels -->
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members--> | priority | role permissions listing page pagination issue description paginated rows go completely haywire when going back and forth in page numbers steps to reproduce affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees | 1 |
53,868 | 28,961,609,001 | IssuesEvent | 2023-05-10 03:19:22 | nvaccess/nvda | https://api.github.com/repos/nvaccess/nvda | closed | NVDA takes an incredibly long time to gain focus on Firefox when the system is under high load | performance blocked/needs-external-fix p2 app/firefox triaged | CC @jcsteh
<!-- Please read the text in this edit field before filling it in.
Please thoroughly read NVDA's wiki article on how to fill in this template, including how to provide the required files.
Issues may be closed if the required information is not present.
https://github.com/nvaccess/nvda/blob/master/devDocs/githubIssueTemplateExplanationAndExamples.md
Please also note that the NVDA project has a Citizen and Contributor Code of Conduct which can be found at https://github.com/nvaccess/nvda/blob/master/CODE_OF_CONDUCT.md. NV Access expects that all contributors and other community members read and abide by the rules set out in this document while participating or contributing to this project. This includes creating or commenting on issues and pull requests.
Each of the questions and sections below start with multiple hash symbols (#). Place your answers and information on the blank line below each question.
-->
### Steps to reproduce:
1. Perform a CPU intensive operation, in my case I was compiling a rust application
2. Switch to Firefox while the background operation is in progress, I was reading some docs while waiting for the compile to complete.
### Actual behavior:
<!--
Use "Speak command keys" (NVDA+4) and speech viewer to copy and paste here.
Use braille viewer to copy and paste here.
You may additionally include an explanation.
-->
NVDA takes up to 10 seconds to gain focus, and a few instances of the following error are logged:
```sh
ERROR - api.setFocusObject (19:16:17.722) - MainThread (15268):
Error updating tree interceptor
Traceback (most recent call last):
File "api.pyc", line 158, in setFocusObject
File "treeInterceptorHandler.pyc", line 33, in update
File "NVDAObjects\__init__.pyc", line 422, in _get_treeInterceptor
File "treeInterceptorHandler.pyc", line 25, in getTreeInterceptor
File "virtualBuffers\gecko_ia2.pyc", line 302, in __contains__
File "comtypes\__init__.pyc", line 856, in __call__
File "monkeyPatches\comtypesMonkeyPatches.pyc", line 39, in __call__
exceptions.CallCancelled: COM call cancelled
```
### Expected behavior:
<!--
Use "Speak command keys" (NVDA+4) and speech viewer to copy and paste here.
Use braille viewer to copy and paste here.
You may additionally include an explanation.
-->
NVDA should not take up to 10 seconds to gain focus on Firefox. I don't see this extreme lag happening when switching focus to other applications like the desktop.
### NVDA logs, crash dumps and other attachments:
### System configuration
#### NVDA installed/portable/running from source:
Installed
#### NVDA version:
alpha-26524,bc1f92f9 (2022.4.0.26524)
#### Windows version:
10.0.19044.1889
#### Name and version of other software in use when reproducing the issue:
Firefox 105.0b9
#### Other information about your system:
### Other questions
#### Does the issue still occur after restarting your computer?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
#### If NVDA add-ons are disabled, is your problem still occurring?
#### Does the issue still occur after you run the COM Registration Fixing Tool in NVDA's tools menu?
| True | NVDA takes an incredibly long time to gain focus on Firefox when the system is under high load - CC @jcsteh
<!-- Please read the text in this edit field before filling it in.
Please thoroughly read NVDA's wiki article on how to fill in this template, including how to provide the required files.
Issues may be closed if the required information is not present.
https://github.com/nvaccess/nvda/blob/master/devDocs/githubIssueTemplateExplanationAndExamples.md
Please also note that the NVDA project has a Citizen and Contributor Code of Conduct which can be found at https://github.com/nvaccess/nvda/blob/master/CODE_OF_CONDUCT.md. NV Access expects that all contributors and other community members read and abide by the rules set out in this document while participating or contributing to this project. This includes creating or commenting on issues and pull requests.
Each of the questions and sections below start with multiple hash symbols (#). Place your answers and information on the blank line below each question.
-->
### Steps to reproduce:
1. Perform a CPU intensive operation, in my case I was compiling a rust application
2. Switch to Firefox while the background operation is in progress, I was reading some docs while waiting for the compile to complete.
### Actual behavior:
<!--
Use "Speak command keys" (NVDA+4) and speech viewer to copy and paste here.
Use braille viewer to copy and paste here.
You may additionally include an explanation.
-->
NVDA takes up to 10 seconds to gain focus, and a few instances of the following error are logged:
```sh
ERROR - api.setFocusObject (19:16:17.722) - MainThread (15268):
Error updating tree interceptor
Traceback (most recent call last):
File "api.pyc", line 158, in setFocusObject
File "treeInterceptorHandler.pyc", line 33, in update
File "NVDAObjects\__init__.pyc", line 422, in _get_treeInterceptor
File "treeInterceptorHandler.pyc", line 25, in getTreeInterceptor
File "virtualBuffers\gecko_ia2.pyc", line 302, in __contains__
File "comtypes\__init__.pyc", line 856, in __call__
File "monkeyPatches\comtypesMonkeyPatches.pyc", line 39, in __call__
exceptions.CallCancelled: COM call cancelled
```
### Expected behavior:
<!--
Use "Speak command keys" (NVDA+4) and speech viewer to copy and paste here.
Use braille viewer to copy and paste here.
You may additionally include an explanation.
-->
NVDA should not take up to 10 seconds to gain focus on Firefox. I don't see this extreme lag happening when switching focus to other applications like the desktop.
### NVDA logs, crash dumps and other attachments:
### System configuration
#### NVDA installed/portable/running from source:
Installed
#### NVDA version:
alpha-26524,bc1f92f9 (2022.4.0.26524)
#### Windows version:
10.0.19044.1889
#### Name and version of other software in use when reproducing the issue:
Firefox 105.0b9
#### Other information about your system:
### Other questions
#### Does the issue still occur after restarting your computer?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
#### If NVDA add-ons are disabled, is your problem still occurring?
#### Does the issue still occur after you run the COM Registration Fixing Tool in NVDA's tools menu?
| non_priority | nvda takes an incredibly long time to gain focus on firefox when the system is under high load cc jcsteh please read the text in this edit field before filling it in please thoroughly read nvda s wiki article on how to fill in this template including how to provide the required files issues may be closed if the required information is not present please also note that the nvda project has a citizen and contributor code of conduct which can be found at nv access expects that all contributors and other community members read and abide by the rules set out in this document while participating or contributing to this project this includes creating or commenting on issues and pull requests each of the questions and sections below start with multiple hash symbols place your answers and information on the blank line below each question steps to reproduce perform a cpu intensive operation in my case i was compiling a rust application switch to firefox while the background operation is in progress i was reading some docs while waiting for the compile to complete actual behavior use speak command keys nvda and speech viewer to copy and paste here use braille viewer to copy and paste here you may additionally include an explanation nvda takes up to seconds to gain focus and a few instances of the following error is logged sh error api setfocusobject mainthread error updating tree interceptor traceback most recent call last file api pyc line in setfocusobject file treeinterceptorhandler pyc line in update file nvdaobjects init pyc line in get treeinterceptor file treeinterceptorhandler pyc line in gettreeinterceptor file virtualbuffers gecko pyc line in contains file comtypes init pyc line in call file monkeypatches comtypesmonkeypatches pyc line in call exceptions callcancelled com call cancelled expected behavior use speak command keys nvda and speech viewer to copy and paste here use braille viewer to copy and paste here you may additionally include an 
explanation nvda should not take up to seconds to gain focus on firefox i don t see this extreme lag happening when switching focus to other applications like the desktop nvda logs crash dumps and other attachments system configuration nvda installed portable running from source installed nvda version alpha windows version name and version of other software in use when reproducing the issue firefox other information about your system other questions does the issue still occur after restarting your computer yes have you tried any other versions of nvda if so please report their behaviors if nvda add ons are disabled is your problem still occurring does the issue still occur after you run the com registration fixing tool in nvda s tools menu | 0 |
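The `COM call cancelled` error in the log above is a cross-process call that NVDA abandons when the busy target process (here Firefox, pegged by the compile) cannot answer in time. NVDA's real handling lives in its comtypes monkey-patches; the sketch below only illustrates the generic retry-with-backoff pattern for such cancellable calls, with `CallCancelled` modeled as a plain exception class:

```python
import time

class CallCancelled(Exception):
    """Stand-in for the cancelled cross-process (COM) call in the log."""

def call_with_retry(fn, retries=3, base_delay=0.05):
    # Retry a call the peer may cancel while it is too busy to respond,
    # sleeping with exponential backoff between attempts.
    for attempt in range(retries):
        try:
            return fn()
        except CallCancelled:
            if attempt == retries - 1:
                raise  # still busy after all attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

The issue's `blocked/needs-external-fix` label suggests the real fix was expected outside NVDA; a pattern like the above only bounds how long a single cancelled call stalls the caller.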
413,799 | 27,969,989,749 | IssuesEvent | 2023-03-25 00:25:24 | etdds/esp-idf-lvgl-displays | https://api.github.com/repos/etdds/esp-idf-lvgl-displays | closed | Typo in documentation | documentation | Thanks for the library !
You have a typo in the documentation, for Direct Component and Submodule Component instructions

| 1.0 | Typo in documentation - Thanks for the library !
You have a typo in the documentation, for Direct Component and Submodule Component instructions

| non_priority | typo in documentation thanks for the library you have a typo in the documentation for direct component and submodule component instructions | 0 |
241,314 | 26,256,746,404 | IssuesEvent | 2023-01-06 01:54:00 | rgordon95/conFusionAng | https://api.github.com/repos/rgordon95/conFusionAng | opened | CVE-2021-23807 (High) detected in jsonpointer-4.0.1.tgz | security vulnerability | ## CVE-2021-23807 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonpointer-4.0.1.tgz</b></p></summary>
<p>Simple JSON Addressing.</p>
<p>Library home page: <a href="https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz">https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz</a></p>
<p>Path to dependency file: /conFusionAng/package.json</p>
<p>Path to vulnerable library: /node_modules/jsonpointer/package.json</p>
<p>
Dependency Hierarchy:
- karma-2.0.5.tgz (Root Library)
- log4js-2.11.0.tgz
- loggly-1.1.1.tgz
- request-2.75.0.tgz
- har-validator-2.0.6.tgz
- is-my-json-valid-2.19.0.tgz
- :x: **jsonpointer-4.0.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package jsonpointer before 5.0.0. A type confusion vulnerability can lead to a bypass of a previous Prototype Pollution fix when the pointer components are arrays.
<p>Publish Date: 2021-11-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23807>CVE-2021-23807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807</a></p>
<p>Release Date: 2021-11-03</p>
<p>Fix Resolution (jsonpointer): 5.0.0</p>
<p>Direct dependency fix Resolution (karma): 3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23807 (High) detected in jsonpointer-4.0.1.tgz - ## CVE-2021-23807 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonpointer-4.0.1.tgz</b></p></summary>
<p>Simple JSON Addressing.</p>
<p>Library home page: <a href="https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz">https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.1.tgz</a></p>
<p>Path to dependency file: /conFusionAng/package.json</p>
<p>Path to vulnerable library: /node_modules/jsonpointer/package.json</p>
<p>
Dependency Hierarchy:
- karma-2.0.5.tgz (Root Library)
- log4js-2.11.0.tgz
- loggly-1.1.1.tgz
- request-2.75.0.tgz
- har-validator-2.0.6.tgz
- is-my-json-valid-2.19.0.tgz
- :x: **jsonpointer-4.0.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package jsonpointer before 5.0.0. A type confusion vulnerability can lead to a bypass of a previous Prototype Pollution fix when the pointer components are arrays.
<p>Publish Date: 2021-11-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23807>CVE-2021-23807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23807</a></p>
<p>Release Date: 2021-11-03</p>
<p>Fix Resolution (jsonpointer): 5.0.0</p>
<p>Direct dependency fix Resolution (karma): 3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in jsonpointer tgz cve high severity vulnerability vulnerable library jsonpointer tgz simple json addressing library home page a href path to dependency file confusionang package json path to vulnerable library node modules jsonpointer package json dependency hierarchy karma tgz root library tgz loggly tgz request tgz har validator tgz is my json valid tgz x jsonpointer tgz vulnerable library vulnerability details this affects the package jsonpointer before a type confusion vulnerability can lead to a bypass of a previous prototype pollution fix when the pointer components are arrays publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsonpointer direct dependency fix resolution karma step up your open source security game with mend | 0 |
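The CVE record above only names the flaw ("a type confusion vulnerability can lead to a bypass of a previous Prototype Pollution fix when the pointer components are arrays"). The sketch below illustrates that general pattern in a self-contained way. It is NOT the jsonpointer library's actual code: `naiveSet` and its guard are invented for illustration.

```javascript
// Simplified illustration of the bypass pattern the advisory describes.
// A strict string comparison is meant to block "__proto__", but an *array*
// component such as ['__proto__'] fails the === check while still coercing
// to the string '__proto__' when used as a property key.
function naiveSet(obj, components, value) {
  let cur = obj;
  for (let i = 0; i < components.length - 1; i++) {
    const part = components[i];
    // Guard defeated by type confusion: ['__proto__'] !== '__proto__'
    if (part === '__proto__' || part === 'constructor') {
      throw new Error('blocked');
    }
    if (cur[part] === undefined) cur[part] = {};
    cur = cur[part]; // the array key coerces to the string '__proto__' here
  }
  cur[components[components.length - 1]] = value;
}

naiveSet({}, [['__proto__'], 'polluted'], 'yes');
console.log({}.polluted); // prints: yes (Object.prototype was polluted)
```

Per the suggested fix in the record, upgrading jsonpointer to 5.0.0 (or karma to 3.0.0 for the transitive chain) resolves the issue; a robust setter would also validate that every pointer component is a string before using it as a key.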
705,085 | 24,221,484,717 | IssuesEvent | 2022-09-26 11:15:50 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.emag.ro - see bug description | priority-normal browser-focus-geckoview engine-gecko | <!-- @browser: Firefox Mobile 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:105.0) Gecko/105.0 Firefox/105.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111368 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.emag.ro/converse-tenisi-chuck-taylor-all-star-7j237c-6/pd/DZ5HDMBBM/?ref=fam#22-EU
**Browser / Version**: Firefox Mobile 105.0
**Operating System**: Android 11
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: items on the page not loading
**Steps to Reproduce**:
Some important items show as loading but never finish loading.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/1541ef39-0e6e-46a3-997f-ffada4bb8ce6.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220915150737</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/9/2c281731-9cd9-4ac3-9867-af5bccd4c7e6)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.emag.ro - see bug description - <!-- @browser: Firefox Mobile 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:105.0) Gecko/105.0 Firefox/105.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111368 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.emag.ro/converse-tenisi-chuck-taylor-all-star-7j237c-6/pd/DZ5HDMBBM/?ref=fam#22-EU
**Browser / Version**: Firefox Mobile 105.0
**Operating System**: Android 11
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: items on the page not loading
**Steps to Reproduce**:
Some important items show as loading but never finish loading.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/1541ef39-0e6e-46a3-997f-ffada4bb8ce6.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220915150737</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/9/2c281731-9cd9-4ac3-9867-af5bccd4c7e6)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description items on the page not loading steps to reproduce some important items show as loading but never finish loading view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
576,397 | 17,086,201,815 | IssuesEvent | 2021-07-08 12:11:22 | HEPData/hepdata | https://api.github.com/repos/HEPData/hepdata | closed | Data encoding: Limit number of e-mail messages from reviewer comments | complexity: medium priority: high type: enhancement | Dear developers,
In order to reduce the number of e-mail messages received by an encoder while a submitted, preliminary record is reviewed, I would propose to check and send an e-mail every 30 minutes for each preliminary record, informing the encoder of the comments and suggestions filled in by the reviewer. Or at least offer the coordinators a choice to select such a behaviour policy for records under their responsibility.
Thanks,
Alex | 1.0 | Data encoding: Limit number of e-mail messages from reviewer comments - Dear developers,
In order to reduce the number of e-mail messages received by an encoder while a submitted, preliminary record is reviewed, I would propose to check and send an e-mail every 30 minutes for each preliminary record, informing the encoder of the comments and suggestions filled in by the reviewer. Or at least offer the coordinators a choice to select such a behaviour policy for records under their responsibility.
Thanks,
Alex | priority | data encoding limit number of e mail messages from reviewer comments dear developers in order to reduce the number of e mail messages received by an encoder while a submitted preliminar record is reviewed i would propose to check and send an e mail every minutes for each preliminar record informing the encoder of the comments and suggestions filled in by the reviewer or at least offer the coordinators a choice to select such a behaviour policy for records under their responsibility thanks alex | 1 |
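The 30-minute digest policy proposed in the record above can be sketched as a small batching queue. Everything here is hypothetical: `queueComment`, `flushDigests`, the record id, and the mailer callback are invented names, not HEPData's actual API.

```javascript
// Collect reviewer comments per record, then flush one digest e-mail per
// record at most every 30 minutes instead of one message per comment.
const THIRTY_MINUTES_MS = 30 * 60 * 1000;
const pending = new Map(); // record id -> comments gathered since last digest

function queueComment(recordId, comment) {
  if (!pending.has(recordId)) pending.set(recordId, []);
  pending.get(recordId).push(comment);
}

function flushDigests(sendMail) {
  for (const [recordId, comments] of pending) {
    sendMail(recordId, comments); // one message per record, not per comment
  }
  pending.clear();
}

// In production this would run on a timer, e.g.:
//   setInterval(() => flushDigests(mailer), THIRTY_MINUTES_MS);
const sent = [];
queueComment('record-1', 'please fix the units in table 3');
queueComment('record-1', 'add the luminosity to the abstract');
flushDigests((recordId, comments) => sent.push([recordId, comments.length]));
console.log(sent.length); // prints: 1
```

Two reviewer comments produce a single digest, which is exactly the reduction in e-mail volume the request asks for.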
231,587 | 7,641,018,886 | IssuesEvent | 2018-05-08 02:11:25 | Gamebuster19901/InventoryDecrapifier | https://api.github.com/repos/Gamebuster19901/InventoryDecrapifier | closed | Blacklist Configurations | Category - Enhancement Priority - Normal ↓ Side - Client Status - In Progress | Add blacklist configurations so the user can switch between different blacklists. | 1.0 | Blacklist Configurations - Add blacklist configurations so the user can switch between different blacklists. | priority | blacklist configurations add blacklist configurations so the user can switch between different blacklists | 1 |
170,465 | 26,964,316,673 | IssuesEvent | 2023-02-08 20:53:17 | carbon-design-system/carbon-for-ibm-dotcom | https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom | closed | [Cloud-Masthead]: address various visual bugs | bug dev package: web components adopter: AEM owner: AEM Needs design approval | ### Description
From @jwitkowski79 :
Hello! We noticed a few bugs on the Cloud masthead that we wanted to log. Feel free to break these up into separate issues if that makes more sense:
* The Heading-01 arrows should be using the color $icon-01. Right now it's currently using the blue link color.

* There's a gray bar with borders that appears to the right of the menu. I noticed this on the non-Cloud version too. Anyway we can remove this?
* Link with description text: On hover, the color of the description text should switch from $text-02 to $text-01. Right now it stays at $text-02 when I hover over it

* There's a weird issue with the focus state when I compare the Cloud version of the masthead to the non-Cloud version of the masthead. On the non-Cloud version, when I click on "Products" and hover over something in the menu, the color behind the title Products switches to $ui-01. When I follow the same steps on the Cloud version, the color stays darker and does not switch to $ui-01. I made a quick video of the issue:
https://user-images.githubusercontent.com/191049/213565841-1c4515c7-e1db-469a-8eda-e9adef39d19a.mp4
### Component(s) impacted
* Masthead
* Cloud Masthead
### Browser
Chrome
### Carbon for IBM.com version
Canary
### Severity
Severity 3 = The problem is visible or noticeable to users but does not impede the usability or functionality. Affects minor functionality, has a workaround.
### Application/website
AEM
### Package
@carbon/ibmdotcom-web-components
### CodeSandbox example
https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/web-components/iframe.html?args=&id=components-cloud-masthead--default&viewMode=story
### Steps to reproduce the issue (if applicable)
See description.
### Release date (if applicable)
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/blob/main/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues) for duplicate issues | 1.0 | [Cloud-Masthead]: address various visual bugs - ### Description
From @jwitkowski79 :
Hello! We noticed a few bugs on the Cloud masthead that we wanted to log. Feel free to break these up into separate issues if that makes more sense:
* The Heading-01 arrows should be using the color $icon-01. Right now it's currently using the blue link color.

* There's a gray bar with borders that appears to the right of the menu. I noticed this on the non-Cloud version too. Any way we can remove this?
* Link with description text: On hover, the color of the description text should switch from $text-02 to $text-01. Right now it stays at $text-02 when I hover over it

* There's a weird issue with the focus state when I compare the Cloud version of the masthead to the non-Cloud version of the masthead. On the non-Cloud version, when I click on "Products" and hover over something in the menu, the color behind the title Products switches to $ui-01. When I follow the same steps on the Cloud version, the color stays darker and does not switch to $ui-01. I made a quick video of the issue:
https://user-images.githubusercontent.com/191049/213565841-1c4515c7-e1db-469a-8eda-e9adef39d19a.mp4
### Component(s) impacted
* Masthead
* Cloud Masthead
### Browser
Chrome
### Carbon for IBM.com version
Canary
### Severity
Severity 3 = The problem is visible or noticeable to users but does not impede the usability or functionality. Affects minor functionality, has a workaround.
### Application/website
AEM
### Package
@carbon/ibmdotcom-web-components
### CodeSandbox example
https://carbon-design-system.github.io/carbon-for-ibm-dotcom/canary/web-components/iframe.html?args=&id=components-cloud-masthead--default&viewMode=story
### Steps to reproduce the issue (if applicable)
See description.
### Release date (if applicable)
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/blob/main/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues) for duplicate issues | non_priority | address various visual bugs description from hello we noticed a few bugs on the cloud masthead that we wanted to log feel free to break these up into separate issues if that makes more sense the heading arrows should be using the color icon right now it s currently using the blue link color there s a gray bar with borders that appears to the right of the menu i noticed this on the non cloud version too anyway we can remove this link with description text on hover the color of the description text should switch from text to text right now it stays at text when i hover over it there s a weird issue with the focus state when i compare the cloud version of the masthead to the non cloud version of the masthead on the non cloud version when i click on products and hover over something in the menu the color behind the title products switches to ui when i follow the same steps on the cloud version the color stays darker and does not switch to ui i made a quick video of the issue component s impacted masthead cloud masthead browser chrome carbon for ibm com version canary severity severity the problem is visible or noticeable to users but does not impede the usability or functionality affects minor functionality has a workaround application website aem package carbon ibmdotcom web components codesandbox example steps to reproduce the issue if applicable see description release date if applicable no response code of conduct i agree to follow this project s i checked the for duplicate issues | 0 |
785,335 | 27,610,015,327 | IssuesEvent | 2023-03-09 15:23:51 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [CDCSDK] Handle DDL for tablegroups in CDC | kind/enhancement priority/high area/cdcsdk | Jira Link: [DB-3440](https://yugabyte.atlassian.net/browse/DB-3440)
### Description
With tablegroup/colocation, a tablet can contain multiple tables. The task is to emit a CDC record if DDL operation is encountered for any of these tables. | 1.0 | [CDCSDK] Handle DDL for tablegroups in CDC - Jira Link: [DB-3440](https://yugabyte.atlassian.net/browse/DB-3440)
### Description
With tablegroup/colocation, a tablet can contain multiple tables. The task is to emit a CDC record if DDL operation is encountered for any of these tables. | priority | handle ddl for tablegroups in cdc jira link description with tablegroup colocation a tablet can contain multiple tables the task is to emit a cdc record if ddl operation is encountered for any of these tables | 1 |
12,449 | 3,274,086,388 | IssuesEvent | 2015-10-26 08:55:00 | owncloud/client | https://api.github.com/repos/owncloud/client | closed | [Windows] in Option Settings, the URL does not look good | bug Platform Specific ReadyToTest | ### Steps to reproduce
1. Install the version 2.0.2
2. Create two accounts (e.g: user1@docker.oc......, user2@docker.oc.....)
3. Right click to icon oC
4. Click on Settings...
### Expected behaviour
You should see the entire URL address
### Actual behaviour
It does not look good at URL

### Server configuration
Desktop v ownCloud-2.0.2.5463-nightly20150915-setup.exe
Server v {"installed":true,"maintenance":false,"version":"8.1.1.3","versionstring":"8.1.1","edition":"Enterprise"} | 1.0 | [Windows] in Option Settings, the URL does not look good - ### Steps to reproduce
1. Install the version 2.0.2
2. Create two accounts (e.g: user1@docker.oc......, user2@docker.oc.....)
3. Right click to icon oC
4. Click on Settings...
### Expected behaviour
You should see the entire URL address
### Actual behaviour
It does not look good at URL

### Server configuration
Desktop v ownCloud-2.0.2.5463-nightly20150915-setup.exe
Server v {"installed":true,"maintenance":false,"version":"8.1.1.3","versionstring":"8.1.1","edition":"Enterprise"} | non_priority | in option settings the url does not look good steps to reproduce install the version create two accounts e g docker oc docker oc right click to icon oc click on settings expected behaviour you should see the entire url address actual behaviour it does not look good at url server configuration desktop v owncloud setup exe server v installed true maintenance false version versionstring edition enterprise | 0 |
353,815 | 25,137,164,206 | IssuesEvent | 2022-11-09 19:36:36 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | Broken links in README of signalfxexporter | documentation Stale exporter/signalfx | There's a broken link at [line 55 of the signalfxexporter's README](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/e62787567cf7aeec18bfe3e7c77ffa1cd2e3a593/exporter/signalfxexporter/README.md?plain=1#L55).
I'm guessing it should lead to `./internal/translation/default_metrics.go` instead of `./translation/default_metrics.go`, but I don't know for sure.
There may be more broken links, I haven't checked. | 1.0 | Broken links in README of signalfxexporter - There's a broken link at [line 55 of the signalfxexporter's README](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/e62787567cf7aeec18bfe3e7c77ffa1cd2e3a593/exporter/signalfxexporter/README.md?plain=1#L55).
I'm guessing it should lead to `./internal/translation/default_metrics.go` instead of `./translation/default_metrics.go`, but I don't know for sure.
There may be more broken links, I haven't checked. | non_priority | broken links in readme of signalfxexporter there s a broken link at i m guessing it should lead to internal translation default metrics go instead of translation default metrics go but i don t know for sure there may be more broken links i haven t checked | 0 |
77,049 | 3,506,256,022 | IssuesEvent | 2016-01-08 05:00:48 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | [Spell] Eye of Kilrog (BB #109) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 17.04.2010 07:42:51 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/109
<hr>
It should move quickly but it moves very slow ; /
and I am not sure but it should be able to summon when you have other minion summoned at the same time.
Example:
I have summoned Imp. Now I start to summon Eye of Kilrogg but it says You have already a summoned creature.
Here's how it should work:
>Casting Eye of Kilrogg will replace your current pet, causing it to temporarily despawn. How is this useful? When you cancel the effect your pet appears on top of you. This is very helpful in certain instances where you have to jump off of something and don't want your pet to path around and aggro everything nearby - set it on stay, go where you need to be, summon an Eye, and then cancel it. Your pet will teleport to you. | 1.0 | [Spell] Eye of Kilrog (BB #109) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 17.04.2010 07:42:51 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/109
<hr>
It should move quickly but it moves very slow ; /
and I am not sure but it should be able to summon when you have other minion summoned at the same time.
Example:
I have summoned Imp. Now I start to summon Eye of Kilrogg but it says You have already a summoned creature.
Here's how it should work:
>Casing Eye of Kilrogg will replace your current pet, causing it to temporarily despawn. How is this useful? When you cancel the effect your pet appears on top of you. This is very helpful in certain instances where you have to jump off of something and don't want your pet to path around and aggro everything nearby - set it on stay, go where you need to be, summon an Eye, and then cancel it. Your pet will teleport to you. | priority | eye of kilrog bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state resolved direct link it should move quickly but it moves very slow and i am not sure but it should be able to summon when you have other minion summoned at the same time example i have summoned imp now i start to summon eye of kilrogg but it says you have already a summoned creature here s how it should work casing eye of kilrogg will replace your current pet causing it to temporarily despawn how is this useful when you cancel the effect your pet appears on top of you this is very helpful in certain instances where you have to jump off of something and don t want your pet to path around and aggro everything nearby set it on stay go where you need to be summon an eye and then cancel it your pet will teleport to you | 1 |
297,639 | 9,179,606,229 | IssuesEvent | 2019-03-05 03:55:05 | dandahle/Catalyst-AppVetting | https://api.github.com/repos/dandahle/Catalyst-AppVetting | opened | Make "Handle-It" label for Work Items & Select in Handle-It PDF | Priority 1 | Ability to designate a Work Item as "Handle-It" with a 'tick box' so it can be filtered for export in the Handle-It PDF. All other Work Items will be ignored for Handle-It PDF. | 1.0 | Make "Handle-It" label for Work Items & Select in Handle-It PDF - Ability to designate a Work Item as "Handle-It" with a 'tick box' so it can be filtered for export in the Handle-It PDF. All other Work Items will be ignored for Handle-It PDF. | priority | make handle it label for work items select in handle it pdf ability to designate a work item as handle it with a tick box so it can be filtered for export in the handle it pdf all other work items will be ignored for handle it pdf | 1 |
165,665 | 26,207,700,136 | IssuesEvent | 2023-01-04 01:11:10 | chapel-lang/chapel | https://api.github.com/repos/chapel-lang/chapel | closed | unexpected behavior of DefaultRectangularDom leader iterator | type: Design area: Libraries / Modules type: Performance | When compiling under NUMA and executing in a situation where there is only 1 sublocale, the DefaultRectangularDom standalone iterator generates a normal(*) number of tasks. The leader iterator, however, generates only a single task. This is unexpected and we should probably fix it.
(*) For example, when running with --dataParTasksPerLocale=3 and there are 12 indices in the domain, the standalone iterator gets numChunks=3 and its coforall spawns 3 tasks.
Cf. the leader iterator caps numChunks with the number of sublocales, so it ends up being 1. The "if numChunks == 1" branch does not bother with a coforall, so the current task runs the entire iteration space.
Vass observes:
* Under NUMA when executing in a situation where there is only 1 sublocale, do we want our code (for example, the parallel iterators) to behave the same way as they do under FLAT? If so, we could simply replace the check `numSublocs != 0` in the DR leader iterator with `numSublocs > 1` .
* The DR domain standalone iterator is numa-unaware. Do we want to make it numa-aware? Or do we want to make the leader iterator numa-unaware?
| 1.0 | unexpected behavior of DefaultRectangularDom leader iterator - When compiling under NUMA and executing in a situation where there is only 1 sublocale, the DefaultRectangularDom standalone iterator generates a normal(*) number of tasks. The leader iterator, however, generates only a single task. This is unexpected and we should probably fix it.
(*) For example, when running with --dataParTasksPerLocale=3 and there are 12 indices in the domain, the standalone iterator gets numChunks=3 and its coforall spawns 3 tasks.
Cf. the leader iterator caps numChunks with the number of sublocales, so it ends up being 1. The "if numChunks == 1" branch does not bother with a coforall, so the current task runs the entire iteration space.
Vass observes:
* Under NUMA when executing in a situation where there is only 1 sublocale, do we want our code (for example, the parallel iterators) to behave the same way as they do under FLAT? If so, we could simply replace the check `numSublocs != 0` in the DR leader iterator with `numSublocs > 1` .
* The DR domain standalone iterator is numa-unaware. Do we want to make it numa-aware? Or do we want to make the leader iterator numa-unaware?
| non_priority | unexpected behavior of defaultrectangulardom leader iterator when compiling under numa and executing in a situation where there is only sublocale the defaultrectangulardom standalone iterator generates a normal number of tasks the leader iterator however generates only a single task this is unexpected and we should probably fix it for example when running with datapartasksperlocale and there are indices in the domain the standalone iterator gets numchunks and its coforall spawns tasks cf the leader iterator caps numchunks with the number of sublocales so it ends up being the if numchunks branch does not bother with a coforall so the current task runs the entire iteration space vass observes under numa when executing in a situation where there is only sublocale do we want our code for example the parallel iterators to behave the same way as they do under flat if so we could simply replace the check numsublocs in the dr leader iterator with numsublocs the dr domain standalone iterator is numa unaware do we want to make it numa aware or do we want to make the leader iterator numa unaware | 0 |
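The numbers in the Chapel report above can be reproduced with a small model of the two chunking policies it describes. This is illustrative JavaScript, not Chapel and not the DefaultRectangular module code; the formulas only mirror what the issue text states (the leader caps numChunks by the sublocale count when `numSublocs != 0`, and Vass's proposed fix changes that check to `numSublocs > 1`).

```javascript
// Scenario from the report: 12 indices, --dataParTasksPerLocale=3, 1 sublocale.

// Standalone iterator: numa-unaware, so only the task-count knob applies.
function standaloneChunks(numIndices, tasksPerLocale) {
  return Math.min(tasksPerLocale, numIndices);
}

// Leader iterator today: additionally capped by the sublocale count
// whenever numSublocs != 0, hence a single task when there is 1 sublocale.
function leaderChunks(numIndices, tasksPerLocale, numSublocs) {
  let n = Math.min(tasksPerLocale, numIndices);
  if (numSublocs !== 0) n = Math.min(n, numSublocs);
  return n;
}

// Proposed fix: only apply the cap when there is real NUMA structure.
function leaderChunksFixed(numIndices, tasksPerLocale, numSublocs) {
  let n = Math.min(tasksPerLocale, numIndices);
  if (numSublocs > 1) n = Math.min(n, numSublocs);
  return n;
}

console.log(standaloneChunks(12, 3));     // prints: 3
console.log(leaderChunks(12, 3, 1));      // prints: 1
console.log(leaderChunksFixed(12, 3, 1)); // prints: 3
```

With the `numSublocs > 1` check, a single-sublocale NUMA build behaves like the FLAT build, which is the first of the two design questions raised in the issue.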
254,084 | 21,726,949,805 | IssuesEvent | 2022-05-11 08:32:34 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | closed | kyma-preview-integration-dev timeout in tests | kind/bug kind/failing-test | **Description**
`The kyma-preview-integration-dev` plan fails consistently with the following output:
```
0 passing (1m)
1 failing
1) Execute SKR test
"before all" hook: Provision SKR in "Execute SKR test":
Error: before hook failed: Error: Error thrown by ensureOperationSucceeded: operation didn't succeed in time:
{
"state": "failed",
"description": "Operation created : cannot create provisioning input creator"
}
```
No other errors or reasons are provided.
It looks like some underlying, but not reported, problem is causing that.
**Expected result**
A more precise reason for the failure is given - or there's no error at all.
It would be nice to have an error reported like: "error at _scriptname_:95" or something like that.
**Actual result**
The generic message doesn't point to the root cause of the problem.
**Steps to reproduce**
Observe the build log of the pipeline, for example: https://storage.googleapis.com/kyma-prow-logs/logs/kyma-preview-integration-dev/1523392415275159552/build-log.txt
**Troubleshooting**
N/A
| 1.0 | kyma-preview-integration-dev timeout in tests - **Description**
`The kyma-preview-integration-dev` plan fails consistently with the following output:
```
0 passing (1m)
1 failing
1) Execute SKR test
"before all" hook: Provision SKR in "Execute SKR test":
Error: before hook failed: Error: Error thrown by ensureOperationSucceeded: operation didn't succeed in time:
{
"state": "failed",
"description": "Operation created : cannot create provisioning input creator"
}
```
No other errors or reasons are provided.
It looks like some underlying, but not reported, problem is causing that.
**Expected result**
A more precise reason for the failure is given - or there's no error at all.
It would be nice to have an error reported like: "error at _scriptname_:95" or something like that.
**Actual result**
The generic message doesn't point to the root cause of the problem.
**Steps to reproduce**
Observe the build log of the pipeline, for example: https://storage.googleapis.com/kyma-prow-logs/logs/kyma-preview-integration-dev/1523392415275159552/build-log.txt
**Troubleshooting**
N/A
| non_priority | kyma preview integration dev timeout in tests description the kyma preview integration dev plan fails consistently with the following output passing failing execute skr test before all hook provision skr in execute skr test error before hook failed error error thrown by ensureoperationsucceeded operation didn t succeed in time state failed description operation created cannot create provisioning input creator no other errors or reasons are provided it looks like some underlying but not reported problem is causing that expected result more precise reason of failure is given or there s no error at all it would be nice to have an error reported like error at scriptname or something like that actual result the generic message doesn t point to the root cause of the problem steps to reproduce observe the build log of the pipeline for example troubleshooting n a | 0 |
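The "Expected result" in the kyma record above asks for errors that carry their location, e.g. "error at _scriptname_:95". One generic way to get that is to wrap each test phase so a failure is re-thrown with a step label attached. The sketch below is hypothetical: the step name, file name, and line number are invented for illustration and are not kyma's actual test harness.

```javascript
// Wrap a phase of work so any thrown error gains a "where it happened" prefix.
function withContext(step, fn) {
  try {
    return fn();
  } catch (err) {
    err.message = `${step}: ${err.message}`;
    throw err;
  }
}

let message = '';
try {
  withContext('provisionSKR (skr-test.js:95)', () => {
    // Stand-in for the opaque failure quoted in the report.
    throw new Error('cannot create provisioning input creator');
  });
} catch (err) {
  message = err.message;
}
console.log(message);
// prints: provisionSKR (skr-test.js:95): cannot create provisioning input creator
```

The underlying error text is unchanged, but the report now points at the phase that produced it, which addresses the "generic message doesn't point to the root cause" complaint.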
618,387 | 19,434,003,658 | IssuesEvent | 2021-12-21 15:05:29 | infor-design/enterprise | https://api.github.com/repos/infor-design/enterprise | closed | Datagrid: A new event that will be triggered when filter operator is changed | type: enhancement :sparkles: [2] priority: high team: lawson | **Is your feature request related to a problem or use case? Please describe.**
If we select an operator in a filter row (except isEmpty and isNotEmpty) and the value is empty, filterConditions ignores the operator and it is not added to the filterExpr array. This becomes an issue in LMCLIENT because when that condition is reapplied, the filter operator is reset to its default value. Please see the gif below.
**Describe the solution you'd like**
An event that we can listen to when filter operator is changed. Something like:
this.element.triggerHandler('filterOperatorChanged', { operator: 'greaterThan', defaultOperator: 'equals', value: 'test', columnId: 'field1'});
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

And what application do you work on? LMCLIENT
| 1.0 | Datagrid: A new event that will be triggered when filter operator is changed - **Is your feature request related to a problem or use case? Please describe.**
If we select an operator in a filter row (except isEmpty and isNotEmpty) and the value is empty, filterConditions ignores the operator and it is not added to the filterExpr array. This becomes an issue in LMCLIENT because when that condition is reapplied, the filter operator is reset to its default value. Please see the gif below.
**Describe the solution you'd like**
An event that we can listen to when filter operator is changed. Something like:
this.element.triggerHandler('filterOperatorChanged', { operator: 'greaterThan', defaultOperator: 'equals', value: 'test', columnId: 'field1'});
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

And what application do you work on? LMCLIENT
| priority | datagrid a new event that will be triggered when filter operator is changed is your feature request related to a problem or use case please describe if we select an operator in a filter row except isempty and isnotempty and the value is empty filterconditions ignore the value of the operator and not added in the filterexpr array this becomes an issue in lmclient because reapply that condition the value of filter operator is reset to its default value please see gif below describe the solution you d like an event that we can listen to when filter operator is changed something like this element triggerhandler filteroperatorchanged operator greaterthan defaultoperator equals value test columnid describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here and what application do you work on lmclient | 1 |
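The event proposed in the datagrid issue above can be sketched framework-free; this illustrates the requested payload shape and firing rule (fire even when the value is empty), and is not the real IDS datagrid API:

```javascript
// Tiny event-emitter stand-in for a grid filter row.
class FilterRow {
  constructor() {
    this.handlers = {};
  }
  on(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  }
  trigger(event, payload) {
    (this.handlers[event] || []).forEach((fn) => fn(payload));
  }
  setOperator(columnId, operator, value, defaultOperator) {
    // Fire even when `value` is empty, so listeners can persist the
    // operator before filterConditions drops it from filterExpr.
    this.trigger("filterOperatorChanged", { operator, defaultOperator, value, columnId });
  }
}

const row = new FilterRow();
let captured = null;
row.on("filterOperatorChanged", (payload) => { captured = payload; });
row.setOperator("field1", "greaterThan", "", "equals");
console.log(captured.operator); // → "greaterThan"
```

A consumer like LMCLIENT could store `captured` and restore the operator when the condition is reapplied.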
330,750 | 10,055,060,273 | IssuesEvent | 2019-07-22 04:44:50 | atlassian/react-beautiful-dnd | https://api.github.com/repos/atlassian/react-beautiful-dnd | closed | Add BeforeDragStart hook | idea 🤔 new feature 🎨 priority: low 🤞 | ## Bug or feature request?
Feature
Hi! This library is awesome. But it looks like we really need a BeforeDragStart hook. For example, I want to collapse a nested list before I drag it. The OnDragStart hook can't help, because react-beautiful-dnd creates the mirror before that hook fires, and because of that the height of the mirror is the height of the expanded nested list (if it was expanded, of course). | 1.0 | Add BeforeDragStart hook - ## Bug or feature request?
Feature
Hi! This library is awesome. But it looks like we really need a BeforeDragStart hook. For example, I want to collapse a nested list before I drag it. The OnDragStart hook can't help, because react-beautiful-dnd creates the mirror before that hook fires, and because of that the height of the mirror is the height of the expanded nested list (if it was expanded, of course). | priority | add beforedragstart hook bug or feature request feature hi this library is awesome but it looks like we really need beforedragstart hook for example i want to collapse nested list before i drag it ondragstart hook can t help because react beautiful dnd creates mirror before that hook and cause of that the height of the mirror is the height of the expanded nested list if it was expended of course | 1 |
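The flow the issue above asks for can be sketched without the library: collapse the nested list in a before-drag hook so the mirror is measured against the collapsed height. The responder name mirrors the `onBeforeCapture` style that react-beautiful-dnd later shipped, but nothing here calls the real library:

```javascript
const item = { id: "parent-1", expanded: true, height: 480, collapsedHeight: 48 };

const responders = {
  onBeforeCapture(start) {
    // Runs before dimensions are captured, so the mirror uses the
    // collapsed size rather than the expanded one.
    if (start.draggableId === item.id) item.expanded = false;
  },
  onDragStart() {
    // Too late for sizing: the mirror has already been measured.
  },
};

function measureMirror(target) {
  return target.expanded ? target.height : target.collapsedHeight;
}

// Simulated drag lifecycle: capture hook first, then measurement.
responders.onBeforeCapture({ draggableId: "parent-1" });
const mirrorHeight = measureMirror(item);
console.log(mirrorHeight); // → 48
```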
215,055 | 7,286,245,433 | IssuesEvent | 2018-02-23 08:59:59 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | opened | `spring.rabbitmq.cache.channel.checkout-timeout` missed during migration Duration | priority: normal type: chore | this is a duration in ms that's still a `Long` in the code. The description has been changed in the appendix to refer to a duration and needs to be harmonized in the code. | 1.0 | `spring.rabbitmq.cache.channel.checkout-timeout` missed during migration Duration - this is a duration in ms that's still a `Long` in the code. The description has been changed in the appendix to refer to a duration and needs to be harmonized in the code. | priority | spring rabbitmq cache channel checkout timeout missed during migration duration this is a duration in ms that s still a long in the code the description has been changed in the appendix to refer to a duration and needs to be harmonized in the code | 1 |
111,693 | 24,174,695,600 | IssuesEvent | 2022-09-22 23:23:12 | Azure/autorest.go | https://api.github.com/repos/Azure/autorest.go | closed | Prevent breaking change for adding a new request body type | design-discussion APIChange CodeGen | ### Problem description
For current codegen, an operation has only one JSON request at first, we will generate this operation with no naming suffix. Then new request body type has been added to this operation. According to the current logic, if an operation with more than one request body types, we create a new method with the media type name as a suffix for the non-binary types. This causes the breaking changes for the original operation name from XXX to XXXWithJSON.
### Example
- [original swagger](https://github.com/Azure/autorest.testserver/blob/e66f072497164e23886e40ac687774333b8d3671/swagger/dpg-initial.json#L75)
- [updated swagger](https://github.com/Azure/autorest.testserver/blob/e66f072497164e23886e40ac687774333b8d3671/swagger/dpg-update1.json#L93)
- [original generated code](https://github.com/tadelesh/go-dpg-poc/blob/8773e9124d045b666f33f7bb48c4c81490707a11/evolution/service/initial/params_client.go#L166)
- [updated generated code](https://github.com/tadelesh/go-dpg-poc/blob/8773e9124d045b666f33f7bb48c4c81490707a11/evolution/service/update/params_client.go#L291)
### Solution
For an operation with more than one request body type, we follow the sequence of the consumes in the swagger and add a WithXXX suffix for the second consumes and so on. This strongly relies on the swagger change, but it minimizes the impact on the currently generated control plane and data plane code.
In the current codegen, when an operation has only one JSON request body at first, we generate the operation with no naming suffix. Then a new request body type is added to the operation. According to the current logic, if an operation has more than one request body type, we create a new method with the media type name as a suffix for the non-binary types. This causes a breaking change: the original operation name changes from XXX to XXXWithJSON.
### Example
- [original swagger](https://github.com/Azure/autorest.testserver/blob/e66f072497164e23886e40ac687774333b8d3671/swagger/dpg-initial.json#L75)
- [updated swagger](https://github.com/Azure/autorest.testserver/blob/e66f072497164e23886e40ac687774333b8d3671/swagger/dpg-update1.json#L93)
- [original generated code](https://github.com/tadelesh/go-dpg-poc/blob/8773e9124d045b666f33f7bb48c4c81490707a11/evolution/service/initial/params_client.go#L166)
- [updated generated code](https://github.com/tadelesh/go-dpg-poc/blob/8773e9124d045b666f33f7bb48c4c81490707a11/evolution/service/update/params_client.go#L291)
### Solution
For operation with more than one response body types, we refer the sequence of the consumes in swagger and add WithXXX suffix for the second consumes and so on. It strongly relies on the swagger change, but it could minimize the influence for current generated control plane and data plane code. | non_priority | prevent breaking change for adding a new request body type problem description for current codegen an operation has only one json request at first we will generate this operation with no naming suffix then new request body type has been added to this operation according to the current logic if an operation with more than one request body types we create a new method with the media type name as a suffix for the non binary types this causes the breaking changes for the original operation name from xxx to xxxwithjson example solution for operation with more than one response body types we refer the sequence of the consumes in swagger and add withxxx suffix for the second consumes and so on it strongly relies on the swagger change but it could minimize the influence for current generated control plane and data plane code | 0 |
741,590 | 25,806,359,778 | IssuesEvent | 2022-12-11 12:52:23 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Alternative Buildroot 2021.02 image, using Linux version 5 | kind/feature priority/backlog lifecycle/frozen area/guest-vm | The current 2020.02 LTS series are now end of life, we should upgrade to 2021.02 LTS if we want to keep it.
While doing so, we should make yet another attempt at upgrading from Linux version 4 to Linux version 5...
```
# From https://www.kernel.org/pub/linux/kernel/v5.x/sha256sums.asc
sha256 930ae76b9a3b64b98802849aca332d17a706f20595de21e1ae729b55ee461add linux-5.10.25.tar.xz
sha256 1c3cef545f366b56332c11c28d074c9d9148c28059a970ec8710826652237560 linux-5.4.107.tar.xz
# From https://www.kernel.org/pub/linux/kernel/v4.x/sha256sums.asc
sha256 05db750ba01ad557bef50835c253894fad9fb0db2224f0e803b25e2ff7ab2365 linux-4.19.182.tar.xz
sha256 7adc041af81424ff8d68affe3005fa9e5babc4e84e0b07e4effdf54225ba9426 linux-4.14.226.tar.xz
```
This relates to #9992 and #10501 | 1.0 | Alternative Buildroot 2021.02 image, using Linux version 5 - The current 2020.02 LTS series are now end of life, we should upgrade to 2021.02 LTS if we want to keep it.
While doing so, we should make yet another attempt at upgrading from Linux version 4 to Linux version 5...
```
# From https://www.kernel.org/pub/linux/kernel/v5.x/sha256sums.asc
sha256 930ae76b9a3b64b98802849aca332d17a706f20595de21e1ae729b55ee461add linux-5.10.25.tar.xz
sha256 1c3cef545f366b56332c11c28d074c9d9148c28059a970ec8710826652237560 linux-5.4.107.tar.xz
# From https://www.kernel.org/pub/linux/kernel/v4.x/sha256sums.asc
sha256 05db750ba01ad557bef50835c253894fad9fb0db2224f0e803b25e2ff7ab2365 linux-4.19.182.tar.xz
sha256 7adc041af81424ff8d68affe3005fa9e5babc4e84e0b07e4effdf54225ba9426 linux-4.14.226.tar.xz
```
This relates to #9992 and #10501 | priority | alternative buildroot image using linux version the current lts series are now end of life we should upgrade to lts if we want to keep it while doing so we should make yet another attempt at upgrading from linux version to linux version from linux tar xz linux tar xz from linux tar xz linux tar xz this relates to and | 1 |
251,030 | 21,411,918,895 | IssuesEvent | 2022-04-22 07:07:33 | react-native-video/react-native-video | https://api.github.com/repos/react-native-video/react-native-video | opened | Testing | help wanted discussion test | I have no clue how to write tests for a React Native module like this but I would assume it's possible...
Anyone wants to take a first stab at setting up some tests? Anything at all would be fantastic! I am all ready with a `test` label! | 1.0 | Testing - I have no clue how to write tests for a React Native module like this but I would assume it's possible...
Anyone wants to take a first stab at setting up some tests? Anything at all would be fantastic! I am all ready with a `test` label! | non_priority | testing i have no clue how to write tests for a react native module like this but i would assume it s possible anyone wants to take a first stab at setting up some tests anything at all would be fantastic i am all ready with a test label | 0 |
46,211 | 13,055,869,008 | IssuesEvent | 2020-07-30 02:58:38 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | wimpsim-reader can't find a RandomService on multiple platforms (Trac #756) | Incomplete Migration Migrated from Trac combo simulation defect | Migrated from https://code.icecube.wisc.edu/ticket/756
```json
{
"status": "closed",
"changetime": "2014-09-20T08:34:18",
"description": "{{{\n Start 21: wimpsim-reader::test_earth.py\n 21/237 Test #21: wimpsim-reader::test_earth.py ..................................***Failed 2.04 sec\nTraceback (most recent call last):\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test/test_earth.py\", line 292, in <module>\n DoUnitTestEarth(tray, \"DoUnitTestEarth\", params)\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test/test_earth.py\", line 212, in DoUnitTestEarth\n from icecube import icectray, dataclasses, dataio\nImportError: cannot import name icectray\n\n Start 22: wimpsim-reader::test_earth_event.py\n 22/237 Test #22: wimpsim-reader::test_earth_event.py ............................ Passed 1.96 sec\n Start 23: wimpsim-reader::test_sun.py\n 23/237 Test #23: wimpsim-reader::test_sun.py ....................................***Failed 1.95 sec\nFATAL (I3WimpSimReader): No Random Service configured! (I3WimpSimReader.cxx:148 in virtual void I3WimpSimReader::Configure())\nERROR (I3Tray): Exception thrown while configuring module \"wimpsim-reader\". 
(I3Tray.cxx:384 in void I3Tray::Configure())\nwimpsim-reader (I3WimpSimReader)\n EndMJD\n Description : MJD to end simulation; if unspecified: read everything\n Default : nan\n Configured : 56059.25\n\n FileNameList\n Description : The WimpSim file to read from\n Default : []\n Configured : ['/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test_data_sun.dat']\n\n InfoFileName\n Description : write Information to this textfile\n Default : ''\n\n InjectionRadius\n Description : If >0, events will be injected in cylinder with zmin, zmax height\n Default : nan\n Configured : 700.0\n\n LowerZenCut\n Description : optional lower Zenith Cut in [rad]\n Default : 0.0\n Configured : 0.0\n\n NEvents\n Description : Number of events to issue, if unconfigured read everything\n Default : 0\n Configured : 0\n\n Oversampling\n Description : N oversamplings\n Default : 0\n Configured : 0\n\n PositionLimits\n Description : Array of xmin,xmax,ymin,ymax,zmin,zmax\n Default : [-800.0, 800.0, -800.0, 800.0, -800.0, 800.0]\n Configured : [-800, 800, -800, 800, -800, 800]\n\n RandomService\n Description : The RandomService in the context\n Default : None\n\n RandomServiceName\n Description : Name of the RandomService to be used\n Default : 'I3RandomService'\n Configured : 'Random'\n\n SensitiveHeight\n Description : Muon box activated height\n Default : nan\n Configured : 1300.0\n\n SensitiveRadius\n Description : Muon box activated radius\n Default : nan\n Configured : 700.0\n\n StartMJD\n Description : MJD to start simulation; if unspecified: read everything\n Default : nan\n Configured : 55694\n\n UpperZenCut\n Description : optional upper Zenith Cut in [rad]\n Default : 3.141592653589793\n Configured : 3.141592653589793\n\n UseElectrons\n Description : Read and simulate electron vertices\n Default : False\n Configured : True\n\n UseMuons\n Description : Read and simulate muon vertices\n Default : True\n Configured : True\n\n UseNC\n Description 
: Read and simulate NeutralCurrent vertices\n Default : False\n Configured : True\n\n UseTaus\n Description : Read and simulate tau vertices\n Default : False\n Configured : True\n\nTraceback (most recent call last):\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test/test_sun.py\", line 305, in <module>\n tray.Execute()\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/build/lib/I3Tray.py\", line 231, in Execute\n super(I3Tray, self).Execute()\nRuntimeError: No Random Service configured! (in virtual void I3WimpSimReader::Configure())\n}}}",
"reporter": "nega",
"cc": "dataclass@icecube.wisc.edu",
"resolution": "fixed",
"_ts": "1411202058919605",
"component": "combo simulation",
"summary": "wimpsim-reader can't find a RandomService on multiple platforms",
"priority": "normal",
"keywords": "wimpsim-reader tests randomservice",
"time": "2014-09-06T18:17:44",
"milestone": "",
"owner": "mzoll",
"type": "defect"
}
```
| 1.0 | wimpsim-reader can't find a RandomService on multiple platforms (Trac #756) - Migrated from https://code.icecube.wisc.edu/ticket/756
```json
{
"status": "closed",
"changetime": "2014-09-20T08:34:18",
"description": "{{{\n Start 21: wimpsim-reader::test_earth.py\n 21/237 Test #21: wimpsim-reader::test_earth.py ..................................***Failed 2.04 sec\nTraceback (most recent call last):\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test/test_earth.py\", line 292, in <module>\n DoUnitTestEarth(tray, \"DoUnitTestEarth\", params)\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test/test_earth.py\", line 212, in DoUnitTestEarth\n from icecube import icectray, dataclasses, dataio\nImportError: cannot import name icectray\n\n Start 22: wimpsim-reader::test_earth_event.py\n 22/237 Test #22: wimpsim-reader::test_earth_event.py ............................ Passed 1.96 sec\n Start 23: wimpsim-reader::test_sun.py\n 23/237 Test #23: wimpsim-reader::test_sun.py ....................................***Failed 1.95 sec\nFATAL (I3WimpSimReader): No Random Service configured! (I3WimpSimReader.cxx:148 in virtual void I3WimpSimReader::Configure())\nERROR (I3Tray): Exception thrown while configuring module \"wimpsim-reader\". 
(I3Tray.cxx:384 in void I3Tray::Configure())\nwimpsim-reader (I3WimpSimReader)\n EndMJD\n Description : MJD to end simulation; if unspecified: read everything\n Default : nan\n Configured : 56059.25\n\n FileNameList\n Description : The WimpSim file to read from\n Default : []\n Configured : ['/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test_data_sun.dat']\n\n InfoFileName\n Description : write Information to this textfile\n Default : ''\n\n InjectionRadius\n Description : If >0, events will be injected in cylinder with zmin, zmax height\n Default : nan\n Configured : 700.0\n\n LowerZenCut\n Description : optional lower Zenith Cut in [rad]\n Default : 0.0\n Configured : 0.0\n\n NEvents\n Description : Number of events to issue, if unconfigured read everything\n Default : 0\n Configured : 0\n\n Oversampling\n Description : N oversamplings\n Default : 0\n Configured : 0\n\n PositionLimits\n Description : Array of xmin,xmax,ymin,ymax,zmin,zmax\n Default : [-800.0, 800.0, -800.0, 800.0, -800.0, 800.0]\n Configured : [-800, 800, -800, 800, -800, 800]\n\n RandomService\n Description : The RandomService in the context\n Default : None\n\n RandomServiceName\n Description : Name of the RandomService to be used\n Default : 'I3RandomService'\n Configured : 'Random'\n\n SensitiveHeight\n Description : Muon box activated height\n Default : nan\n Configured : 1300.0\n\n SensitiveRadius\n Description : Muon box activated radius\n Default : nan\n Configured : 700.0\n\n StartMJD\n Description : MJD to start simulation; if unspecified: read everything\n Default : nan\n Configured : 55694\n\n UpperZenCut\n Description : optional upper Zenith Cut in [rad]\n Default : 3.141592653589793\n Configured : 3.141592653589793\n\n UseElectrons\n Description : Read and simulate electron vertices\n Default : False\n Configured : True\n\n UseMuons\n Description : Read and simulate muon vertices\n Default : True\n Configured : True\n\n UseNC\n Description 
: Read and simulate NeutralCurrent vertices\n Default : False\n Configured : True\n\n UseTaus\n Description : Read and simulate tau vertices\n Default : False\n Configured : True\n\nTraceback (most recent call last):\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/source/wimpsim-reader/resources/test/test_sun.py\", line 305, in <module>\n tray.Execute()\n File \"/build/buildslave/yaoguai/quick_simulation_ubuntu_12_04/build/lib/I3Tray.py\", line 231, in Execute\n super(I3Tray, self).Execute()\nRuntimeError: No Random Service configured! (in virtual void I3WimpSimReader::Configure())\n}}}",
"reporter": "nega",
"cc": "dataclass@icecube.wisc.edu",
"resolution": "fixed",
"_ts": "1411202058919605",
"component": "combo simulation",
"summary": "wimpsim-reader can't find a RandomService on multiple platforms",
"priority": "normal",
"keywords": "wimpsim-reader tests randomservice",
"time": "2014-09-06T18:17:44",
"milestone": "",
"owner": "mzoll",
"type": "defect"
}
```
| non_priority | wimpsim reader can t find a randomservice on multiple platforms trac migrated from json status closed changetime description n start wimpsim reader test earth py n test wimpsim reader test earth py failed sec ntraceback most recent call last n file build buildslave yaoguai quick simulation ubuntu source wimpsim reader resources test test earth py line in n dounittestearth tray dounittestearth params n file build buildslave yaoguai quick simulation ubuntu source wimpsim reader resources test test earth py line in dounittestearth n from icecube import icectray dataclasses dataio nimporterror cannot import name icectray n n start wimpsim reader test earth event py n test wimpsim reader test earth event py passed sec n start wimpsim reader test sun py n test wimpsim reader test sun py failed sec nfatal no random service configured cxx in virtual void configure nerror exception thrown while configuring module wimpsim reader cxx in void configure nwimpsim reader n endmjd n description mjd to end simulation if unspecified read everything n default nan n configured n n filenamelist n description the wimpsim file to read from n default n configured n n infofilename n description write information to this textfile n default n n injectionradius n description if events will be injected in cylinder with zmin zmax height n default nan n configured n n lowerzencut n description optional lower zenith cut in n default n configured n n nevents n description number of events to issue if unconfigured read everything n default n configured n n oversampling n description n oversamplings n default n configured n n positionlimits n description array of xmin xmax ymin ymax zmin zmax n default n configured n n randomservice n description the randomservice in the context n default none n n randomservicename n description name of the randomservice to be used n default n configured random n n sensitiveheight n description muon box activated height n default nan n configured n 
n sensitiveradius n description muon box activated radius n default nan n configured n n startmjd n description mjd to start simulation if unspecified read everything n default nan n configured n n upperzencut n description optional upper zenith cut in n default n configured n n useelectrons n description read and simulate electron vertices n default false n configured true n n usemuons n description read and simulate muon vertices n default true n configured true n n usenc n description read and simulate neutralcurrent vertices n default false n configured true n n usetaus n description read and simulate tau vertices n default false n configured true n ntraceback most recent call last n file build buildslave yaoguai quick simulation ubuntu source wimpsim reader resources test test sun py line in n tray execute n file build buildslave yaoguai quick simulation ubuntu build lib py line in execute n super self execute nruntimeerror no random service configured in virtual void configure n reporter nega cc dataclass icecube wisc edu resolution fixed ts component combo simulation summary wimpsim reader can t find a randomservice on multiple platforms priority normal keywords wimpsim reader tests randomservice time milestone owner mzoll type defect | 0 |
803,455 | 29,177,754,022 | IssuesEvent | 2023-05-19 09:19:10 | BiologicalRecordsCentre/ABLE | https://api.github.com/repos/BiologicalRecordsCentre/ABLE | reopened | ButterflyCount: Add Saint Helena as country option | app scale: small Priority 1 | We now have Saint Helena in the species list and it is being used for a new BMS on the island. Can you add this to this list of countries for the app setup | 1.0 | ButterflyCount: Add Saint Helena as country option - We now have Saint Helena in the species list and it is being used for a new BMS on the island. Can you add this to this list of countries for the app setup | priority | butterflycount add saint helena as country option we now have saint helena in the species list and it is being used for a new bms on the island can you add this to this list of countries for the app setup | 1 |
142,570 | 13,034,465,908 | IssuesEvent | 2020-07-28 08:45:22 | RainbowMiner/RainbowMiner | https://api.github.com/repos/RainbowMiner/RainbowMiner | closed | xmrig with CryptoNightHeavyXhv Not Using All Available Threads | answered documentation question | This combo referenced above is only using 16 threads and 24 (or 22) are available.
[debug_2020-07-26.zip](https://github.com/RainbowMiner/RainbowMiner/files/4978372/debug_2020-07-26.zip)
| 1.0 | xmrig with CryptoNightHeavyXhv Not Using All Available Threads - This combo referenced above is only using 16 threads and 24 (or 22) are available.
[debug_2020-07-26.zip](https://github.com/RainbowMiner/RainbowMiner/files/4978372/debug_2020-07-26.zip)
| non_priority | xmrig with cryptonightheavyxhv not using all available threads this combo referenced above is only using threads and or are available | 0 |
79,736 | 23,031,883,386 | IssuesEvent | 2022-07-22 14:38:57 | kuptan/terraform-operator | https://api.github.com/repos/kuptan/terraform-operator | closed | Move to Go 1.18 & Kubebuilder upgrade | kind/build | Given Go 1.18 has been out for some time now, and seems to be stable except for some minor issues with the performance of the [newly introduced Generics](https://go.dev/doc/tutorial/generics), we should start moving to it.
78,132 | 14,951,671,854 | IssuesEvent | 2021-01-26 14:40:14 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] [atum-bs5] Mobile menu toggle doesn't work | No Code Attached Yet | ### Steps to reproduce the issue
View the backend on a small device and attempt to open the main menu by clicking the toggle

### Expected result
Opens the main menu
### Actual result
Nothing happens
### System information (as much as possible)
- 4.0-dev HEAD (after npm i)
| 1.0 | [4.0] [atum-bs5] Mobile menu toggle doesn't work - ### Steps to reproduce the issue
View the backend on a small device and attempt to open the main menu by clicking the toggle

### Expected result
Opens the main menu
### Actual result
Nothing happens
### System information (as much as possible)
- 4.0-dev HEAD (after npm i)
| non_priority | mobile menu toggle doesn t work steps to reproduce the issue view the backend on a small device and attempt to open the main menu by clicking the toggle expected result opens the main menu actual result nothing happens system information as much as possible dev head after npm i | 0 |
370,235 | 10,927,199,949 | IssuesEvent | 2019-11-22 16:11:47 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | closed | Link to add-on on scanner result page should point to external domain, not internal admin one | component: admin tools component: scanners priority: p3 triaged | ### Describe the problem and steps to reproduce it:
1. Go to the scanner result page
2. Check the link to the add-on of a result
### What happened?
The link points to `/en-US/reviewers/review/...` and that resolves to the review page on the internal domain.
### What did you expect to happen?
The link should point to `addons.mozilla.org/en-US/reviewers/review/...`
### Anything else we should know?
(Please include a link to the page, screenshots and any relevant files.)
| 1.0 | Link to add-on on scanner result page should point to external domain, not internal admin one - ### Describe the problem and steps to reproduce it:
1. Go to the scanner result page
2. Check the link to the add-on of a result
### What happened?
The link points to `/en-US/reviewers/review/...` and that resolves to the review page on the internal domain.
### What did you expect to happen?
The link should point to `addons.mozilla.org/en-US/reviewers/review/...`
### Anything else we should know?
(Please include a link to the page, screenshots and any relevant files.)
| priority | link to add on on scanner result page should point to external domain not internal admin one describe the problem and steps to reproduce it go to the scanner result page check the link to the add on of a result what happened the link points to en us reviewers review and that resolves to the review page on the internal domain what did you expect to happen the link should point to addons mozilla org en us reviewers review anything else we should know please include a link to the page screenshots and any relevant files | 1 |
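The fix direction for the scanner-result link above can be sketched as building the reviewer URL against the external domain instead of emitting a bare path that resolves on whatever (internal) host serves the page. The base URL is the one named in the issue; the helper itself is hypothetical:

```javascript
const EXTERNAL_BASE = "https://addons.mozilla.org";

function reviewerLink(addonId) {
  // Resolving against an absolute base always yields a full URL.
  return new URL(`/en-US/reviewers/review/${addonId}`, EXTERNAL_BASE).href;
}

const href = reviewerLink(12345);
console.log(href); // → "https://addons.mozilla.org/en-US/reviewers/review/12345"
```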
63,399 | 15,596,512,843 | IssuesEvent | 2021-03-18 15:56:29 | open-telemetry/opentelemetry-python | https://api.github.com/repos/open-telemetry/opentelemetry-python | closed | Upgrade mypy to 0.800 | build & infra good first issue help wanted | MyPy 0.800 is out. It finally supports namespace packages properly and we can remove the placeholder `__init__.pyi` files in a few spots. | 1.0 | Upgrade mypy to 0.800 - MyPy 0.800 is out. It finally supports namespace packages properly and we can remove the placeholder `__init__.pyi` files in a few spots. | non_priority | upgrade mypy to mypy is out it finally supports namespace packages properly and we can remove the placeholder init pyi files in a few spots | 0 |
13,941 | 8,743,412,812 | IssuesEvent | 2018-12-12 19:05:43 | mercycorps/TolaActivity | https://api.github.com/repos/mercycorps/TolaActivity | closed | Indicator list: Remove links to program pages and other indicator-dependent pages when a program has no indicators | usability | **Affects:**
1. Home page
2. Browse > Indicators view
## Home page

For programs with no indicators:
- [x] Unlink the program name so that it no longer goes to the program page.
- [x] Hide the two links under the program name -- Program page and Recent progress
## Browse > Indicators view

For programs with no indicators:
- [x] Unlink the program name and arrow icon so that they no longer go to the program page.
| True | Indicator list: Remove links to program pages and other indicator-dependent pages when a program has no indicators - **Affects:**
1. Home page
2. Browse > Indicators view
## Home page

For programs with no indicators:
- [x] Unlink the program name so that it no longer goes to the program page.
- [x] Hide the two links under the program name -- Program page and Recent progress
## Browse > Indicators view

For programs with no indicators:
- [x] Unlink the program name and arrow icon so that they no longer go to the program page.
| non_priority | indicator list remove links to program pages and other indicator dependent pages when a program has no indicators affects home page browse indicators view home page for programs with no indicators unlink the program name so that it no longer goes to the program page hide the two links under the program name program page and recent progress browse indicators view for programs with no indicators unlink the program name and arrow icon so that they no longer go to the program page | 0 |
323,001 | 9,834,978,171 | IssuesEvent | 2019-06-17 11:04:50 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.8.2.0 release-preview] Crash when try to throw out the carcass | High Priority QA Regression Staging | Step to reproduce:
- take fox carcass
- try to throw out

- crash
[Crash.txt](https://github.com/StrangeLoopGames/EcoIssues/files/3296536/Crash.txt)
| 1.0 | [0.8.2.0 release-preview] Crash when try to throw out the carcass - Step to reproduce:
- take fox carcass
- try to throw out

- crash
[Crash.txt](https://github.com/StrangeLoopGames/EcoIssues/files/3296536/Crash.txt)
| priority | crash when try to throw out the carcass step to reproduce take fox carcass try to throw out crash | 1 |
15,516 | 8,950,002,003 | IssuesEvent | 2019-01-25 09:31:54 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | efs_facts does not filter targets: - subnet_id | affects_2.7 aws bug cloud module performance support:community | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
efs_facts does not show result with targets: - subnet_id
**output without targets: subnet_id:**
```
ok: [localhost] => {
"msg": {
"ansible_facts": {
"efs": [
{
"creation_time": "2018-10-25 09:43:15+02:00",
"creation_token": "test",
"encrypted": true,
"file_system_id": "fs-number",
"filesystem_address": "fs-number.efs.eu-central-1.amazonaws.com:/",
"kms_key_id": "arn:aws:kms:eu-central-1:somenumber:key/somenumber",
"life_cycle_state": "available",
"mount_point": ".fs-number.efs.eu-central-1.amazonaws.com:/",
"mount_targets": [
{
"file_system_id": "fs-number",
"ip_address": "IPAdress",
"life_cycle_state": "available",
"mount_target_id": "fsmt-f679adaf",
"network_interface_id": "eni-number",
"owner_id": "somenumber",
"security_groups": [
"sg-groupnumber"
],
"subnet_id": "subnet-03dfb6889a3e2ef29"
}
],
"name": "volumename",
"number_of_mount_targets": 1,
"owner_id": "somenumber",
"performance_mode": "generalPurpose",
"size_in_bytes": {
"value": 6144
},
"tags": {
"Creator": "Name",
"Function": "Function",
"Name": "OtherName"
},
"throughput_mode": "bursting"
}
]
},
"changed": false,
"failed": false
}
}
```
output with
```
- name: GatherEfsFacts
efs_facts:
tags:
Name: "{{ efs_volume_name }}"
Creator: "{{ tag_Creator }}"
Function: "{{ tag_Function }}"
targets:
- subnet_id: subnet-03dfb6889a3e2ef29
register: efs
```
```
TASK [GatherEfsFacts] *********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"ansible_facts": {
"efs": []
},
"changed": false,
"failed": false
}
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
efs_facts
##### ANSIBLE VERSION
```
ansible 2.7.0
python version = 2.7.15 (default, Jun 27 2018, 13:05:28) [GCC 8.1.1 20180531]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ansible-config dump --only-changed
HOST_KEY_CHECKING(/home/michael/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/michael/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
see summary
<!--- Paste example playbooks or commands between quotes below -->
```
- name: GatherEfsFacts
efs_facts:
tags:
Name: "{{ efs_volume_name }}"
Creator: "{{ tag_Creator }}"
Function: "{{ tag_Function }}"
targets:
- subnet_id: subnet-03dfb6889a3e2ef29
register: efs
- debug: msg="{{ efs }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
| True | efs_facts does not filter targets: - subnet_id - <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
efs_facts does not show result with targets: - subnet_id
**output without targets: subnet_id:**
```
ok: [localhost] => {
"msg": {
"ansible_facts": {
"efs": [
{
"creation_time": "2018-10-25 09:43:15+02:00",
"creation_token": "test",
"encrypted": true,
"file_system_id": "fs-number",
"filesystem_address": "fs-number.efs.eu-central-1.amazonaws.com:/",
"kms_key_id": "arn:aws:kms:eu-central-1:somenumber:key/somenumber",
"life_cycle_state": "available",
"mount_point": ".fs-number.efs.eu-central-1.amazonaws.com:/",
"mount_targets": [
{
"file_system_id": "fs-number",
"ip_address": "IPAdress",
"life_cycle_state": "available",
"mount_target_id": "fsmt-f679adaf",
"network_interface_id": "eni-number",
"owner_id": "somenumber",
"security_groups": [
"sg-groupnumber"
],
"subnet_id": "subnet-03dfb6889a3e2ef29"
}
],
"name": "volumename",
"number_of_mount_targets": 1,
"owner_id": "somenumber",
"performance_mode": "generalPurpose",
"size_in_bytes": {
"value": 6144
},
"tags": {
"Creator": "Name",
"Function": "Function",
"Name": "OtherName"
},
"throughput_mode": "bursting"
}
]
},
"changed": false,
"failed": false
}
}
```
output with
```
- name: GatherEfsFacts
efs_facts:
tags:
Name: "{{ efs_volume_name }}"
Creator: "{{ tag_Creator }}"
Function: "{{ tag_Function }}"
targets:
- subnet_id: subnet-03dfb6889a3e2ef29
register: efs
```
```
TASK [GatherEfsFacts] *********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"ansible_facts": {
"efs": []
},
"changed": false,
"failed": false
}
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
efs_facts
##### ANSIBLE VERSION
```
ansible 2.7.0
python version = 2.7.15 (default, Jun 27 2018, 13:05:28) [GCC 8.1.1 20180531]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ansible-config dump --only-changed
HOST_KEY_CHECKING(/home/michael/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/michael/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
see summary
<!--- Paste example playbooks or commands between quotes below -->
```
- name: GatherEfsFacts
efs_facts:
tags:
Name: "{{ efs_volume_name }}"
Creator: "{{ tag_Creator }}"
Function: "{{ tag_Function }}"
targets:
- subnet_id: subnet-03dfb6889a3e2ef29
register: efs
- debug: msg="{{ efs }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
| non_priority | efs facts does not filter targets subnet id summary efs facts does not show result with targets subnet id output without targets subnet id ok msg ansible facts efs creation time creation token test encrypted true file system id fs number filesystem address fs number efs eu central amazonaws com kms key id arn aws kms eu central somenumber key somenumber life cycle state available mount point fs number efs eu central amazonaws com mount targets file system id fs number ip address ipadress life cycle state available mount target id fsmt network interface id eni number owner id somenumber security groups sg groupnumber subnet id subnet name volumename number of mount targets owner id somenumber performance mode generalpurpose size in bytes value tags creator name function function name othername throughput mode bursting changed false failed false output with name gatherefsfacts efs facts tags name efs volume name creator tag creator function tag function targets subnet id subnet register efs ask ok task ok msg ansible facts efs changed false failed false issue type bug report component name efs facts ansible version ansible python version default jun configuration ansible config dump only changed ↵ host key checking home michael ansible cfg false retry files enabled home michael ansible cfg false os environment steps to reproduce see summary name gatherefsfacts efs facts tags name efs volume name creator tag creator function tag function targets subnet id subnet register efs debug msg efs expected results actual results paste below | 0 |
624,768 | 19,706,426,568 | IssuesEvent | 2022-01-12 22:39:49 | GoogleCloudPlatform/java-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples | closed | functions.SlackSlashCommandTest: handlesMultipleUrlParamsTest failed | type: bug priority: p1 :rotating_light: api: cloudfunctions samples flakybot: issue flakybot: flaky | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0303f324ec8c1cd3f635a81ddbddcc889fa52495
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/a9aaa13c-781d-40c5-a399-127242aa1c65), [Sponge](http://sponge2/a9aaa13c-781d-40c5-a399-127242aa1c65)
status: failed
<details><summary>Test output</summary><br><pre>expected to contain: https://en.wikipedia.org/wiki/Lion
but was : {"response_type":"in_channel","text":"Query: lion","attachments":[{"title":"Osama bin Laden","title_link":"https://en.wikipedia.org/wiki/Osama_bin_Laden","text":"Osama bin Mohammed bin Awad bin Laden, also transliterated as Usama bin Ladin, was a Saudi Arabian terrorist and founder of the Pan-Islamic militant organization al-Qaeda. "}]}
at functions.SlackSlashCommandTest.handlesMultipleUrlParamsTest(SlackSlashCommandTest.java:178)
</pre></details> | 1.0 | functions.SlackSlashCommandTest: handlesMultipleUrlParamsTest failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0303f324ec8c1cd3f635a81ddbddcc889fa52495
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/a9aaa13c-781d-40c5-a399-127242aa1c65), [Sponge](http://sponge2/a9aaa13c-781d-40c5-a399-127242aa1c65)
status: failed
<details><summary>Test output</summary><br><pre>expected to contain: https://en.wikipedia.org/wiki/Lion
but was : {"response_type":"in_channel","text":"Query: lion","attachments":[{"title":"Osama bin Laden","title_link":"https://en.wikipedia.org/wiki/Osama_bin_Laden","text":"Osama bin Mohammed bin Awad bin Laden, also transliterated as Usama bin Ladin, was a Saudi Arabian terrorist and founder of the Pan-Islamic militant organization al-Qaeda. "}]}
at functions.SlackSlashCommandTest.handlesMultipleUrlParamsTest(SlackSlashCommandTest.java:178)
</pre></details> | priority | functions slackslashcommandtest handlesmultipleurlparamstest failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output expected to contain but was response type in channel text query lion attachments at functions slackslashcommandtest handlesmultipleurlparamstest slackslashcommandtest java | 1 |
7,304 | 9,552,274,979 | IssuesEvent | 2019-05-02 16:14:34 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | reopened | incompatible_use_python_toolchains: The Python runtime is obtained from a toolchain rather than a flag | breaking-change-0.26 incompatible-change migration-0.25 team-Rules-Python | **Flag:** `--incompatible_use_python_toolchains`
**Available since:** 0.25
**Will be flipped in:** ???
**Feature tracking issue:** #7375
## Motivation
For background on toolchains, see [here](https://docs.bazel.build/versions/master/toolchains.html).
Previously, the Python runtime (i.e., the interpreter used to execute `py_binary` and `py_test` targets) could only be controlled globally, and required passing flags like `--python_top` to the bazel invocation. This is out-of-step with our ambitions for flagless builds and remote-execution-friendly toolchains. Using the toolchain mechanism means that each Python target can automatically select an appropriate runtime based on what target platform it is being built for.
## Change
Enabling this flag triggers the following changes.
1. Executable Python targets will retrieve their runtime from the new Python toolchain.
2. It is forbidden to set any of the legacy flags `--python_top`, `--python2_path`, or `--python3_path`. Note that the last two of those are already no-ops. It is also strongly discouraged to set `--python_path`, but this flag will be removed in a later cleanup due to #7901.
3. The `python_version` attribute of the [`py_runtime`](https://docs.bazel.build/versions/master/be/python.html#py_runtime) rule becomes mandatory. It must be either `"PY2"` or `"PY3"`, indicating which kind of runtime it is describing.
For builds that rely on a Python interpreter installed on the system, it is recommended that users (or platform rule authors) ensure that each platform has an appropriate Python toolchain definition.
If no Python toolchain is explicitly registered, on non-Windows platforms there is a new default toolchain that automatically detects and executes an interpreter (of the appropriate version) from `PATH`. This resolves longstanding issue #4815. A Windows version of this toolchain will come later (#7844).
## Migration
If you were relying on `--python_top`, and you want your whole build to continue to use the `py_runtime` you were pointing it to, you just need to follow the steps below to define a `py_runtime_pair` and `toolchain`, and register this toolchain in your workspace. So long as you don't add any platform constraints that would prevent your toolchain from matching, it will take precedence over the default toolchain described above.
If you were relying on `--python_path`, and you want your whole build to use the interpreter located at the absolute path you were passing in this flag, the steps are the same, except you also have to define a new `py_runtime` with the `interpreter_path` attribute set to that path.
Otherwise, if you were only relying on the default behavior that resolved `python` from `PATH`, just enjoy the new default behavior, which is:
1. First try `python2` or `python3` (depending on the target's version)
2. Then fall back on `python` if not found
3. Fail-fast if the interpreter that is found doesn't match the target's major Python version (`PY2` or `PY3`), as per the `python -V` flag.
On Windows the default behavior is currently unchanged (#7844).
Example toolchain definition:
```python
# In your BUILD file...
load("@bazel_tools//tools/python/toolchain.bzl", "py_runtime_pair")
py_runtime(
name = "my_py2_runtime",
interpreter_path = "/system/python2",
python_version = "PY2",
)
py_runtime(
name = "my_py3_runtime",
interpreter_path = "/system/python3",
python_version = "PY3",
)
py_runtime_pair(
name = "my_py_runtime_pair",
py2_runtime = ":my_py2_runtime",
py3_runtime = ":my_py3_runtime",
)
toolchain(
name = "my_toolchain",
target_compatible_with = [...], # optional platform constraints
toolchain = ":my_py_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```
```python
# In your WORKSPACE...
register_toolchains("//my_pkg:my_toolchain")
```
Of course, you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms. It is recommended to use the constraint settings `@bazel_tools//tools/python:py2_interpreter_path` and `[...]:py3_interpreter_path` as the namespaces for constraints about where a platform's Python interpreters are located.
The new toolchain-related rules and default toolchain are implemented in Starlark under `@bazel_tools`. Their source code and documentation strings can be read [here](https://github.com/bazelbuild/bazel/blob/master/tools/python/toolchain.bzl). | True | incompatible_use_python_toolchains: The Python runtime is obtained from a toolchain rather than a flag - **Flag:** `--incompatible_use_python_toolchains`
**Available since:** 0.25
**Will be flipped in:** ???
**Feature tracking issue:** #7375
## Motivation
For background on toolchains, see [here](https://docs.bazel.build/versions/master/toolchains.html).
Previously, the Python runtime (i.e., the interpreter used to execute `py_binary` and `py_test` targets) could only be controlled globally, and required passing flags like `--python_top` to the bazel invocation. This is out-of-step with our ambitions for flagless builds and remote-execution-friendly toolchains. Using the toolchain mechanism means that each Python target can automatically select an appropriate runtime based on what target platform it is being built for.
## Change
Enabling this flag triggers the following changes.
1. Executable Python targets will retrieve their runtime from the new Python toolchain.
2. It is forbidden to set any of the legacy flags `--python_top`, `--python2_path`, or `--python3_path`. Note that the last two of those are already no-ops. It is also strongly discouraged to set `--python_path`, but this flag will be removed in a later cleanup due to #7901.
3. The `python_version` attribute of the [`py_runtime`](https://docs.bazel.build/versions/master/be/python.html#py_runtime) rule becomes mandatory. It must be either `"PY2"` or `"PY3"`, indicating which kind of runtime it is describing.
For builds that rely on a Python interpreter installed on the system, it is recommended that users (or platform rule authors) ensure that each platform has an appropriate Python toolchain definition.
If no Python toolchain is explicitly registered, on non-Windows platforms there is a new default toolchain that automatically detects and executes an interpreter (of the appropriate version) from `PATH`. This resolves longstanding issue #4815. A Windows version of this toolchain will come later (#7844).
## Migration
If you were relying on `--python_top`, and you want your whole build to continue to use the `py_runtime` you were pointing it to, you just need to follow the steps below to define a `py_runtime_pair` and `toolchain`, and register this toolchain in your workspace. So long as you don't add any platform constraints that would prevent your toolchain from matching, it will take precedence over the default toolchain described above.
If you were relying on `--python_path`, and you want your whole build to use the interpreter located at the absolute path you were passing in this flag, the steps are the same, except you also have to define a new `py_runtime` with the `interpreter_path` attribute set to that path.
Otherwise, if you were only relying on the default behavior that resolved `python` from `PATH`, just enjoy the new default behavior, which is:
1. First try `python2` or `python3` (depending on the target's version)
2. Then fall back on `python` if not found
3. Fail-fast if the interpreter that is found doesn't match the target's major Python version (`PY2` or `PY3`), as per the `python -V` flag.
On Windows the default behavior is currently unchanged (#7844).
Example toolchain definition:
```python
# In your BUILD file...
load("@bazel_tools//tools/python/toolchain.bzl", "py_runtime_pair")
py_runtime(
name = "my_py2_runtime",
interpreter_path = "/system/python2",
python_version = "PY2",
)
py_runtime(
name = "my_py3_runtime",
interpreter_path = "/system/python3",
python_version = "PY3",
)
py_runtime_pair(
name = "my_py_runtime_pair",
py2_runtime = ":my_py2_runtime",
py3_runtime = ":my_py3_runtime",
)
toolchain(
name = "my_toolchain",
target_compatible_with = [...], # optional platform constraints
toolchain = ":my_py_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```
```python
# In your WORKSPACE...
register_toolchains("//my_pkg:my_toolchain")
```
Of course, you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms. It is recommended to use the constraint settings `@bazel_tools//tools/python:py2_interpreter_path` and `[...]:py3_interpreter_path` as the namespaces for constraints about where a platform's Python interpreters are located.
The new toolchain-related rules and default toolchain are implemented in Starlark under `@bazel_tools`. Their source code and documentation strings can be read [here](https://github.com/bazelbuild/bazel/blob/master/tools/python/toolchain.bzl). | non_priority | incompatible use python toolchains the python runtime is obtained from a toolchain rather than a flag flag incompatible use python toolchains available since will be flipped in feature tracking issue motivation for background on toolchains see previously the python runtime i e the interpreter used to execute py binary and py test targets could only be controlled globally and required passing flags like python top to the bazel invocation this is out of step with our ambitions for flagless builds and remote execution friendly toolchains using the toolchain mechanism means that each python target can automatically select an appropriate runtime based on what target platform it is being built for change enabling this flag triggers the following changes executable python targets will retrieve their runtime from the new python toolchain it is forbidden to set any of the legacy flags python top path or path note that the last two of those are already no ops it is also strongly discouraged to set python path but this flag will be removed in a later cleanup due to the python version attribute of the rule becomes mandatory it must be either or indicating which kind of runtime it is describing for builds that rely on a python interpreter installed on the system it is recommended that users or platform rule authors ensure that each platform has an appropriate python toolchain definition if no python toolchain is explicitly registered on non windows platforms there is a new default toolchain that automatically detects and executes an interpreter of the appropriate version from path this resolves longstanding issue a windows version of this toolchain will come later migration if you were relying on python top and you want 
your whole build to continue to use the py runtime you were pointing it to you just need to follow the steps below to define a py runtime pair and toolchain and register this toolchain in your workspace so long as you don t add any platform constraints that would prevent your toolchain from matching it will take precedence over the default toolchain described above if you were relying on python path and you want your whole build to use the interpreter located at the absolute path you were passing in this flag the steps are the same except you also have to define a new py runtime with the interpreter path attribute set to that path otherwise if you were only relying on the default behavior that resolved python from path just enjoy the new default behavior which is first try or depending on the target s version then fall back on python if not found fail fast if the interpreter that is found doesn t match the target s major python version or as per the python v flag on windows the default behavior is currently unchanged example toolchain definition python in your build file load bazel tools tools python toolchain bzl py runtime pair py runtime name my runtime interpreter path system python version py runtime name my runtime interpreter path system python version py runtime pair name my py runtime pair runtime my runtime runtime my runtime toolchain name my toolchain target compatible with optional platform constraints toolchain my py runtime pair toolchain type bazel tools tools python toolchain type python in your workspace register toolchains my pkg my toolchain of course you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms it is recommended to use the constraint settings bazel tools tools python interpreter path and interpreter path as the namespaces for constraints about where a platform s python interpreters are located the new toolchain related rules and default toolchain are 
implemented in starlark under bazel tools their source code and documentation strings can be read | 0 |
82,371 | 3,605,995,162 | IssuesEvent | 2016-02-04 09:12:40 | dartino/sdk | https://api.github.com/repos/dartino/sdk | closed | Rename dart-dependencies-fletch bucket | Dartino-rename Priority-Low | As part of the fletch->dartino rename we should update the cloud storage bucket we use in DEPS:
```
dart-dependencies-fletch
``` | 1.0 | Rename dart-dependencies-fletch bucket - As part of the fletch->dartino rename we should update the cloud storage bucket we use in DEPS:
```
dart-dependencies-fletch
``` | priority | rename dart dependencies fletch bucket as part of the fletch dartino rename we should update the cloud storage bucket we use in deps dart dependencies fletch | 1 |
288,017 | 31,856,909,147 | IssuesEvent | 2023-09-15 08:08:59 | nidhi7598/linux-4.19.72_CVE-2022-3564 | https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564 | closed | CVE-2019-18683 (High) detected in linuxlinux-4.19.294 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-18683 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/platform/vivid/vivid-sdr-cap.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/platform/vivid/vivid-sdr-cap.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in drivers/media/platform/vivid in the Linux kernel through 5.3.8. It is exploitable for privilege escalation on some Linux distributions where local users have /dev/video0 access, but only if the driver happens to be loaded. There are multiple race conditions during streaming stopping in this driver (part of the V4L2 subsystem). These issues are caused by wrong mutex locking in vivid_stop_generating_vid_cap(), vivid_stop_generating_vid_out(), sdr_cap_stop_streaming(), and the corresponding kthreads. At least one of these race conditions leads to a use-after-free.
<p>Publish Date: 2019-11-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-18683>CVE-2019-18683</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-18683">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-18683</a></p>
<p>Release Date: 2019-11-04</p>
<p>Fix Resolution: v5.5-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-18683 (High) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2019-18683 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/platform/vivid/vivid-sdr-cap.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/platform/vivid/vivid-sdr-cap.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in drivers/media/platform/vivid in the Linux kernel through 5.3.8. It is exploitable for privilege escalation on some Linux distributions where local users have /dev/video0 access, but only if the driver happens to be loaded. There are multiple race conditions during streaming stopping in this driver (part of the V4L2 subsystem). These issues are caused by wrong mutex locking in vivid_stop_generating_vid_cap(), vivid_stop_generating_vid_out(), sdr_cap_stop_streaming(), and the corresponding kthreads. At least one of these race conditions leads to a use-after-free.
<p>Publish Date: 2019-11-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-18683>CVE-2019-18683</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-18683">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-18683</a></p>
<p>Release Date: 2019-11-04</p>
<p>Fix Resolution: v5.5-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers media platform vivid vivid sdr cap c drivers media platform vivid vivid sdr cap c vulnerability details an issue was discovered in drivers media platform vivid in the linux kernel through it is exploitable for privilege escalation on some linux distributions where local users have dev access but only if the driver happens to be loaded there are multiple race conditions during streaming stopping in this driver part of the subsystem these issues are caused by wrong mutex locking in vivid stop generating vid cap vivid stop generating vid out sdr cap stop streaming and the corresponding kthreads at least one of these race conditions leads to a use after free publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
291,621 | 8,940,948,277 | IssuesEvent | 2019-01-24 02:00:18 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | missing / misleading bindings for publishing rgbd cameras from pydrake | configuration: python priority: medium team: manipulation type: feature request | I'm trying to reproduce the following lines
https://github.com/RobotLocomotion/drake/blob/master/examples/manipulation_station/proof_of_life.cc#L48-L65
in `end_effector_teleop.py`
We don't currently have bindings for `systems::sensors::ImageToLcmImageArrayT`, and my quick attempt was stymied by the complexity of dealing with the pixeltype templates.
I'm afraid I also struggled a bit to use LcmPublisherSystem:
```
>>> from robotlocomotion import image_array_t
>>> from pydrake.systems.lcm import LcmPublisherSystem
>>> a = LcmPublisherSystem.Make("test",image_array_t(), None, 0.1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: _make_lcm_system() takes exactly 4 arguments (5 given)
```
@EricCousineau-TRI -- might you be able to help? this is a step in the direction of moving manipulation station to be a thin python-only example. ;-) | 1.0 | missing / misleading bindings for publishing rgbd cameras from pydrake - I'm trying to reproduce the following lines
https://github.com/RobotLocomotion/drake/blob/master/examples/manipulation_station/proof_of_life.cc#L48-L65
in `end_effector_teleop.py`
We don't currently have bindings for `systems::sensors::ImageToLcmImageArrayT`, and my quick attempt was stymied by the complexity of dealing with the pixeltype templates.
I'm afraid I also struggled a bit to use LcmPublisherSystem:
```
>>> from robotlocomotion import image_array_t
>>> from pydrake.systems.lcm import LcmPublisherSystem
>>> a = LcmPublisherSystem.Make("test",image_array_t(), None, 0.1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: _make_lcm_system() takes exactly 4 arguments (5 given)
```
@EricCousineau-TRI -- might you be able to help? this a step in the direction of moving manipulation station to be a thin python-only example. ;-) | priority | missing misleading bindings for publishing rgbd cameras from pydrake i m trying to reproduce the following lines in end effector teleop py we don t currently have bindings for systems sensors imagetolcmimagearrayt and my quick attempt was stymied by the complexity of dealing with the pixeltype templates i m afraid i also struggled a bit to use lcmpublishersystem from robotlocomotion import image array t from pydrake systems lcm import lcmpublishersystem a lcmpublishersystem make test image array t none traceback most recent call last file line in typeerror make lcm system takes exactly arguments given ericcousineau tri might you be able to help this a step in the direction of moving manipulation station to be a thin python only example | 1 |
181,190 | 6,657,067,588 | IssuesEvent | 2017-09-30 00:29:52 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Power Grid Across World Boundary Does Not Work [0.5.6.1 Stable] | Medium Priority | If you try and power lights across the world boundary (XXX,XX,999---000) the lights will not light up and do not recognize the power grid. This may be similar to "https://github.com/StrangeLoopGames/EcoIssues/issues/1142" but that issue involved a crash and this issue no longer causes a crash, just no power transfer. | 1.0 | Power Grid Across World Boundary Does Not Work [0.5.6.1 Stable] - If you try and power lights across the world boundary (XXX,XX,999---000) the lights will not light up and do not recognize the power grid. This may be similar to "https://github.com/StrangeLoopGames/EcoIssues/issues/1142" but that issue involved a crash and this issue no longer causes a crash, just no power transfer. | priority | power grid across world boundary does not work if you try and power lights across the world boundary xxx xx the lights will not light up and do not recognize the power grid this may be similar to but that issue involved a crash and this issue no longer causes a crash just no power transfer | 1 |
62,926 | 8,648,892,016 | IssuesEvent | 2018-11-26 17:46:36 | fga-eps-mds/2018.2-Integra-Vendas | https://api.github.com/repos/fga-eps-mds/2018.2-Integra-Vendas | opened | Atualizar gestão de riscos | 0-Scrum Master 2-Documentation | <!--- Descreva a atividade que deve ser feita para atender a issue --->
Atualizar documentação da pontuação de riscos.
**Tarefas**
- [ ] Colher as pontuações de risco definidos nas sprints da R2;
- [ ] Documentar pontuações no GH Pages.
**Observações**
* A *issue* deve ser pontuada;
* A *issue* deve ser delegada a alguém;
* A *issue* deve ter *labels*;
* A *issue* deve pertencer a uma *milestone*.
| 1.0 | Atualizar gestão de riscos - <!--- Descreva a atividade que deve ser feita para atender a issue --->
Atualizar documentação da pontuação de riscos.
**Tarefas**
- [ ] Colher as pontuações de risco definidos nas sprints da R2;
- [ ] Documentar pontuações no GH Pages.
**Observações**
* A *issue* deve ser pontuada;
* A *issue* deve ser delegada a alguém;
* A *issue* deve ter *labels*;
* A *issue* deve pertencer a uma *milestone*.
| non_priority | atualizar gestão de riscos atualizar documentação da pontuação de riscos tarefas colher as pontuações de risco definidos nas sprints da documentar pontuações no gh pages observações a issue deve ser pontuada a issue deve ser delegada a alguém a issue deve ter labels a issue deve pertencer a uma milestone | 0 |
57,583 | 14,166,566,753 | IssuesEvent | 2020-11-12 09:05:59 | epiphany-platform/epiphany | https://api.github.com/repos/epiphany-platform/epiphany | closed | [BACKPORT] Ability to use "long lasting" Kubernetes certificates - 0.7.x backport | area/kubernetes area/security type/backport | **Is your feature request related to a problem?**
By design, Kubernetes assumes all newly created certificates have an expiration time of 1 year. There is really no automatic way to overcome that and modify the expiration time.
**Describe the solution you'd like**
This issue is already fixed and implemented in task #1302 and we would like to backport it to 0.7.x and test it.
```
---
kind: configuration/kubernetes-master
title: "Kubernetes Master Config"
name: default
provider: azure
specification:
advanced:
certificates:
location: /etc/kubernetes/pki
expiration_days: 800
renew: false
```
Recommended tests:
- single machine, single-master and HA installations
- parameter values: renew: true, renew: false with different periods, including default
- new installations using running epicli apply for the second time after changing parameters
**Describe alternatives you've considered**
it's possible to renew certs manually with kubeadm, but we don't want to do it manually:
`kubeadm alpha certs renew apiserver`
**Additional context**
These changes can be modified to work without openssl_* modules, with shell.
| True | [BACKPORT] Ability to use "long lasting" Kubernetes certificates - 0.7.x backport - **Is your feature request related to a problem?**
By design, Kubernetes assumes all newly created certificates have an expiration time of 1 year. There is really no automatic way to overcome that and modify the expiration time.
**Describe the solution you'd like**
This issue is already fixed and implemented in task #1302 and we would like to backport it to 0.7.x and test it.
```
---
kind: configuration/kubernetes-master
title: "Kubernetes Master Config"
name: default
provider: azure
specification:
advanced:
certificates:
location: /etc/kubernetes/pki
expiration_days: 800
renew: false
```
Recommended tests:
- single machine, single-master and HA installations
- parameter values: renew: true, renew: false with different periods, including default
- new installations using running epicli apply for the second time after changing parameters
**Describe alternatives you've considered**
it's possible to renew certs manually with kubeadm, but we don't want to do it manually:
`kubeadm alpha certs renew apiserver`
**Additional context**
These changes can be modified to work without openssl_* modules, with shell.
| non_priority | ability to use long lasting kubernetes certificates x backport is your feature request related to a problem by design kubernetes assumes all newly created certificates have expiration time set to year there is really no automatic way to overcome that and modify expiration time describe the solution you d like this issue is already fixed and implemented in task and we would like to backport it to x and test it kind configuration kubernetes master title kubernetes master config name default provider azure specification advanced certificates location etc kubernetes pki expiration days renew false recommended tests single machine single master and ha installations parameter values renew true renew false with different periods including default new installations using running epicli apply for the second time after changing parameters describe alternatives you ve considered it s possible to renew certs by kubeadm manually kubeadm alpha certs renew apiserver but we don t want to do it manually kubeadm alpha certs renew apiserver additional context these changes can be modified to work without openssl modules with shell | 0 |
694,892 | 23,835,223,835 | IssuesEvent | 2022-09-06 04:47:40 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Kiribati vax: Creating vaccination event for patients not belonging to current store still updates the patient | Priority: immediate Bug: development Project: Kiribati COVID-19 Vax Solution: Agreed | ## Describe the bug
- We cannot edit patients not belonging to the current store. That is enforced in the Create vaccination event code too. However, when a vaccine event is created (or updated) the trigger also triggers a patient update. This is for new vaccine event creation where the patient is of the current store and should be editable (allowing you to edit patient info from vaccine event creation multi-part form Step 1), but there is no check to see if the form has been updated or to see if the patient belongs to the current store.
- This causes the patient of other stores to be edited on every vaccine event creation and update.
- Even with the same data and no form change (since the form is not editable), making an update would create a sync-out record, which is unnecessary. Even worse, it may cause a sync race condition.
- We cannot easily check if the form has been updated before triggering the Patient record update from Vax event selector. So the update would still happen for unedited patient form section in new vaccine event creation if the patient belongs to the same store. However, since it the same store, sync are queued simultaneously and although creating unnecessary sync would not break things.
- **We must, however, stop** updating Patients from an unauthorised store, to avoid the sync race condition.
### To reproduce
Steps to reproduce the behaviour:
1. This may not be reproducible easily as our code is stopping in UI, the editing of patient detail section of vaccination event.
2. We know the unnecessary sync queue happens though.
3. So to test this condition do this:
1. Use two tablets
2. In tablet 1 create a patient and its vaccine event. Sync it.
3. In tablet 2 pull the changes.
4. In tablet 1 again edit the patient details, dob or something and sync it too.
5. In tablet 2, without syncing the step 4 updates, create a vaccine event. Since the patient is from tablet 1 you would not be allowed to edit the patient, so you have made no changes to the patient details; but since you have not synced the tablet 1 changes from step 4, you would still see the old dob, which is ok.
6. Now sync your changes.
7. Go to tablet 1 pull the latest sync changes.
8. Go to the patient's detail: the changes you had made in step 4 are gone. The changes in step 5 have overridden your step 4 changes, which should not happen because
a. the patient does not belong to tablet 2 and hence should not be editable from tablet 2
b. You have not made any changes to patient detail so no patient update sync should have happened.
### Expected behaviour
Patients should not be editable from non-home stores. This includes creation/editing of vax events.
Vax event creation and editing should not trigger a patient update if the current store is not the patient's home store.
### Proposed Solution
Vax event creation and editing should not trigger a patient update if the current store is not the patient's home store.
### Version and device info
- App version:
- Tablet model:
- OS version:
### Additional context
Add any other context about the problem here.
| 1.0 | Kiribati vax: Creating vaccination event for patients not belonging to current store still updates the patient - ## Describe the bug
- We cannot edit patients not belonging to the current store. That is enforced in the Create vaccination event code too. However, when a vaccine event is created (or updated) the trigger also triggers a patient update. This is for new vaccine event creation where the patient is of the current store and should be editable (allowing you to edit patient info from vaccine event creation multi-part form Step 1), but there is no check to see if the form has been updated or to see if the patient belongs to the current store.
- This causes the patient of other stores to be edited on every vaccine event creation and update.
- Even with the same data and no form change (since the form is not editable), making an update would create a sync-out record, which is unnecessary. Even worse, it may cause a sync race condition.
- We cannot easily check if the form has been updated before triggering the Patient record update from Vax event selector. So the update would still happen for unedited patient form section in new vaccine event creation if the patient belongs to the same store. However, since it the same store, sync are queued simultaneously and although creating unnecessary sync would not break things.
- **We must, however, stop** updating Patients from an unauthorised store, to avoid the sync race condition.
### To reproduce
Steps to reproduce the behaviour:
1. This may not be reproducible easily as our code is stopping in UI, the editing of patient detail section of vaccination event.
2. We know the unnecessary sync queue happens though.
3. So to test this condition do this:
1. Use two tablets
2. In tablet 1 create a patient and its vaccine event. Sync it.
3. In tablet 2 pull the changes.
4. In tablet 1 again edit the patient details, dob or something and sync it too.
5. In tablet 2, without syncing the step 4 updates, create a vaccine event. Since the patient is from tablet 1 you would not be allowed to edit the patient, so you have made no changes to the patient details; but since you have not synced the tablet 1 changes from step 4, you would still see the old dob, which is ok.
6. Now sync your changes.
7. Go to tablet 1 pull the latest sync changes.
8. Go to the patient's detail: the changes you had made in step 4 are gone. The changes in step 5 have overridden your step 4 changes, which should not happen because
a. the patient does not belong to tablet 2 and hence should not be editable from tablet 2
b. You have not made any changes to patient detail so no patient update sync should have happened.
### Expected behaviour
Patients should not be editable from non-home stores. This includes creation/editing of vax events.
Vax event creation and editing should not trigger a patient update if the current store is not the patient's home store.
### Proposed Solution
Vax event creation and editing should not trigger a patient update if the current store is not the patient's home store.
### Version and device info
- App version:
- Tablet model:
- OS version:
### Additional context
Add any other context about the problem here.
| priority | kiribati vax creating vaccination event for patients not belonging to current store still updates the patient describe the bug we cannot edit patients not belonging to current store that is enforced in create vaccination event code too however when vaccine event are created or updated the trigger also triggers a patient update this is for new vaccine event creation where the patient is of current store and should be editable allowing you to edit patient info from vaccine event creation multi part form step but there are no check to see if form has been updated or to see if the patient belong from the current store this causes the patient of other stores to be edited on every vaccine event creation and update be it with same data without any form change since form is not editable making an update would create a sync out record which is unnecessary even worse it may cause sync race condition we cannot easily check if the form has been updated before triggering the patient record update from vax event selector so the update would still happen for unedited patient form section in new vaccine event creation if the patient belongs to the same store however since it the same store sync are queued simultaneously and although creating unnecessary sync would not break things we must however stop updating of patients from unauthorised store though to avoid sync race condition to reproduce steps to reproduce the behaviour this may not be reproducible easily as our code is stopping in ui the editing of patient detail section of vaccination event we know the unnecessary sync queue happens though so to test this condition do this use two tablets in tablet create a patient and its vaccine event sync it in tablet pull the changes in tablet again edit the patient details dob or something and sync it too in tablet without syncing in the step iv updates create a vaccine event since the patient is from tablet you would not be allowed to edit the patient so you have made no 
changes to the patient details but since you have not synced tablet changes from step iv you would still see the old dob which is ok now sync your changes go to tablet pull the latest sync changes go to the patient s detail the changes you had made in step iv are gone the changes in step v have overridden your step iv changes which should not happen because a the patient does not belong to tablet and hence should be able to be editable from tablet b you have not made any changes to patient detail so no patient update sync should have happened expected behaviour patient should not be editable from non home stores this includes creation editing of vax events vax event creation and editing should not trigger patient update if the current store is not patient s home store proposed solution vax event creation and editing should not trigger patient update if the current store is not patient s home store version and device info app version tablet model os version additional context add any other context about the problem here | 1 |
613 | 7,517,345,246 | IssuesEvent | 2018-04-12 03:02:09 | gctools-outilsgc/gcpedia | https://api.github.com/repos/gctools-outilsgc/gcpedia | opened | add en/fr page language link registration to install script | Automation | ```sql
insert into interwiki values ("fr", "https://domain/$1", "", "", 1, 0);
insert into interwiki values ("en", "https://domain/$1", "", "", 1, 0);
``` | 1.0 | add en/fr page language link registration to install script - ```sql
insert into interwiki values ("fr", "https://domain/$1", "", "", 1, 0);
insert into interwiki values ("en", "https://domain/$1", "", "", 1, 0);
``` | non_priority | add en fr page language link registration to install script sql insert into interwiki values fr insert into interwiki values en | 0 |
597,731 | 18,170,621,974 | IssuesEvent | 2021-09-27 19:33:32 | airqo-platform/AirQo-api | https://api.github.com/repos/airqo-platform/AirQo-api | closed | correct field in Sites schema | bug priority-high | **What were you trying to achieve?**
View site details...
**What are the expected results?**
all keys/attributes to be named correctly...
**What are the received results?**
One attribute needs correction.
**What are the steps to reproduce the issue?**
Current one is `distance_to_nearest_residential_area` but it is supposed to be `distance_to_nearest_residential_road`
**Additional context**
Any other information you would like to share?
| 1.0 | correct field in Sites schema - **What were you trying to achieve?**
View site details...
**What are the expected results?**
all keys/attributes to be named correctly...
**What are the received results?**
One attribute needs correction.
**What are the steps to reproduce the issue?**
Current one is `distance_to_nearest_residential_area` but it is supposed to be `distance_to_nearest_residential_road`
**Additional context**
Any other information you would like to share?
| priority | correct field in sites schema what were you trying to achieve view site details what are the expected results all keys attributes to be named correctly what are the received results one attribute needs correction what are the steps to reproduce the issue current one is distance to nearest residential area but it is supposed to be distance to nearest residential road additional context any other information you would like to share | 1 |
1,373 | 2,512,036,172 | IssuesEvent | 2015-01-14 13:40:26 | YetiForceCompany/YetiForceCRM | https://api.github.com/repos/YetiForceCompany/YetiForceCRM | closed | Bug in User "Exports Basic Data" | low priority bug | "Settings -> Users -> Action -> Exports Basic Data" has no function. (test.yetiforce.com) | 1.0 | Bug in User "Exports Basic Data" - "Settings -> Users -> Action -> Exports Basic Data" has no function. (test.yetiforce.com) | priority | bug in user exports basic data settings users action exports basic data has no function test yetiforce com | 1 |
6,439 | 7,620,186,725 | IssuesEvent | 2018-05-03 01:00:22 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | aws_iam_policy_attachment singleton | enhancement service/iam | _This issue was originally opened by @dansteen as hashicorp/terraform#5947. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
The docs say the following about iam policies being attached to roles:
```
NOTE: The aws_iam_policy_attachment resource is only meant to be used once for
each managed policy. All of the users/roles/groups that a single policy is being attached
to should be declared by a single aws_iam_policy_attachment resource.
```
This kind of messes with the workflow when you have a heavily modularized setup. As an example, if I have a set of "standard" iam policies that need to be applied to every box in the environment (for example, things required for box buildup and tear-down etc), every time I instantiate a module, I will need to remember to add that specific module instance into the "roles" list for the aws_iam_policy_attachment line. If I could just include the aws_iam_policy_attachment config _inside_ the module, then for each module instance the additional roles would automatically pick up the policies.
I am currently using in-line policies to work around this, but it would definitely be nice to be able to have this laid out a bit more "modularly".
Thanks!
| 1.0 | aws_iam_policy_attachment singleton - _This issue was originally opened by @dansteen as hashicorp/terraform#5947. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
The docs say the following about iam policies being attached to roles:
```
NOTE: The aws_iam_policy_attachment resource is only meant to be used once for
each managed policy. All of the users/roles/groups that a single policy is being attached
to should be declared by a single aws_iam_policy_attachment resource.
```
This kind of messes with the workflow when you have a heavily modularized setup. As an example, if I have a set of "standard" iam policies that need to be applied to every box in the environment (for example, things required for box buildup and tear-down etc), every time I instantiate a module, I will need to remember to add that specific module instance into the "roles" list for the aws_iam_policy_attachment line. If I could just include the aws_iam_policy_attachment config _inside_ the module, then for each module instance the additional roles would automatically pick up the policies.
I am currently using in-line policies to work around this, but it would definitely be nice to be able to have this laid out a bit more "modularly".
Thanks!
| non_priority | aws iam policy attachment singleton this issue was originally opened by dansteen as hashicorp terraform it was migrated here as part of the the original body of the issue is below the docs say the following about iam policies being attached to roles note the aws iam policy attachment resource is only meant to be used once for each managed policy all of the users roles groups that a single policy is being attached to should be declared by a single aws iam policy attachment resource this kind of messes with the workflow when you have a heavily modularized setup as an example if i have a set of standard iam policies that need to be applied to every box in the environment for example things required for box buildup and tear down etc every time i instantiate a module i will need to remember to add that specific module instance into the roles list for the aws iam policy attachment line if i could just include the aws iam policy attachment config inside the module then for each module instance the additional roles would automatically pick up the policies i am currently using in line policies to work around this but it would definitely be nice to be able to have this laid out a bit more modularly thanks | 0 |
93,342 | 3,899,079,307 | IssuesEvent | 2016-04-17 14:25:11 | raspibo/eventman | https://api.github.com/repos/raspibo/eventman | closed | remove invalid keys to MongoDB requests | bug in progress priority: high | Some key values (especially the ones prepended by $) are no longer valid with MongoDB 2.6, leading to a server side exception. This happens adding a new attendee.
Another example, editing a datetime setting, the value is set but on console we get:
> [E 160409 18:35:56 web:1496] Uncaught exception POST /events/57092f65dff0d704314b4ed1 (::1)
> HTTPServerRequest(protocol='http', host='localhost:5242', method='POST', uri='/events/57092f65dff0d704314b4ed1', version='HTTP/1.1', remote_ip='::1', headers={'Origin': 'http://localhost:5242', 'Content-Length': '199', 'Accept-Language': 'en-US,en;q=0.8,it;q=0.6', 'Accept-Encoding': 'gzip, deflate', 'Host': 'localhost:5242', 'Accept': 'application/json, text/plain, */*', 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/49.0.2623.108 Chrome/49.0.2623.108 Safari/537.36', 'Dnt': '1', 'Connection': 'keep-alive', 'Referer': 'http://localhost:5242/', 'Cookie': 'org.cups.sid=2c73cdc5fc32b1864ed7a36c5c76aaba; user="2|1:0|10:1460214844|4:user|12:cmVtb3RlMQ==|30cb585baa2ae173a6a3a9b36e0909fccf9f0c365ce31134dd54db954d488fbe"', 'Content-Type': 'application/json;charset=UTF-8'})
> Traceback (most recent call last):
> File "/usr/lib/python2.7/dist-packages/tornado/web.py", line 1415, in _execute
> result = yield result
> File "/usr/lib/python2.7/dist-packages/tornado/gen.py", line 870, in run
> value = future.result()
> File "/usr/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
> raise_exc_info(self._exc_info)
> File "/usr/lib/python2.7/dist-packages/tornado/gen.py", line 215, in wrapper
> result = func(*args, **kwargs)
> File "eventman_server.py", line 59, in my_wrapper
> return original_wrapper(self, *args, **kwargs)
> File "/usr/lib/python2.7/dist-packages/tornado/web.py", line 2721, in wrapper
> return method(self, *args, **kwargs)
> File "eventman_server.py", line 226, in post
> merged, newData = self.db.update(self.collection, id_, data)
> File "/home/da/git/eventman/backend.py", line 227, in update
> update={operator: data}, full_response=True, new=True, upsert=create)
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/collection.py", line 1738, in find_and_modify
> **kwargs)
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/database.py", line 439, in command
> uuid_subtype, compile_re, **kwargs)[0]
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/database.py", line 345, in _command
> msg, allowable_errors)
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/helpers.py", line 182, in _check_command_response
> raise OperationFailure(msg % errmsg, code, response)
> OperationFailure: command SON([('findAndModify', u'events'), ('query', {'_id': ObjectId('57092f65dff0d704314b4ed1')}), ('update', {'$set': {u'begin-time': u'2016-04-09T15:35:30.236Z', u'$resolved': True, u'title': u'nexto', u'end-date': u'2016-05-13T22:00:00.000Z', u'begin-date': u'2016-05-13T22:00:00.000Z', u'$promise': {}}}), ('new', True), ('upsert', True)]) on namespace eventman.$cmd failed: exception: The dollar ($) prefixed field '$promise' in '$promise' is not valid for storage. | 1.0 | remove invalid keys to MongoDB requests - Some key values (especially the ones prepended by $) are no longer valid with MongoDB 2.6, leading to a server side exception. This happens adding a new attendee.
Another example, editing a datetime setting, the value is set but on console we get:
> [E 160409 18:35:56 web:1496] Uncaught exception POST /events/57092f65dff0d704314b4ed1 (::1)
> HTTPServerRequest(protocol='http', host='localhost:5242', method='POST', uri='/events/57092f65dff0d704314b4ed1', version='HTTP/1.1', remote_ip='::1', headers={'Origin': 'http://localhost:5242', 'Content-Length': '199', 'Accept-Language': 'en-US,en;q=0.8,it;q=0.6', 'Accept-Encoding': 'gzip, deflate', 'Host': 'localhost:5242', 'Accept': 'application/json, text/plain, */*', 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/49.0.2623.108 Chrome/49.0.2623.108 Safari/537.36', 'Dnt': '1', 'Connection': 'keep-alive', 'Referer': 'http://localhost:5242/', 'Cookie': 'org.cups.sid=2c73cdc5fc32b1864ed7a36c5c76aaba; user="2|1:0|10:1460214844|4:user|12:cmVtb3RlMQ==|30cb585baa2ae173a6a3a9b36e0909fccf9f0c365ce31134dd54db954d488fbe"', 'Content-Type': 'application/json;charset=UTF-8'})
> Traceback (most recent call last):
> File "/usr/lib/python2.7/dist-packages/tornado/web.py", line 1415, in _execute
> result = yield result
> File "/usr/lib/python2.7/dist-packages/tornado/gen.py", line 870, in run
> value = future.result()
> File "/usr/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
> raise_exc_info(self._exc_info)
> File "/usr/lib/python2.7/dist-packages/tornado/gen.py", line 215, in wrapper
> result = func(*args, **kwargs)
> File "eventman_server.py", line 59, in my_wrapper
> return original_wrapper(self, *args, **kwargs)
> File "/usr/lib/python2.7/dist-packages/tornado/web.py", line 2721, in wrapper
> return method(self, *args, **kwargs)
> File "eventman_server.py", line 226, in post
> merged, newData = self.db.update(self.collection, id_, data)
> File "/home/da/git/eventman/backend.py", line 227, in update
> update={operator: data}, full_response=True, new=True, upsert=create)
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/collection.py", line 1738, in find_and_modify
> **kwargs)
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/database.py", line 439, in command
> uuid_subtype, compile_re, **kwargs)[0]
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/database.py", line 345, in _command
> msg, allowable_errors)
> File "/home/da/.local/lib/python2.7/site-packages/pymongo/helpers.py", line 182, in _check_command_response
> raise OperationFailure(msg % errmsg, code, response)
> OperationFailure: command SON([('findAndModify', u'events'), ('query', {'_id': ObjectId('57092f65dff0d704314b4ed1')}), ('update', {'$set': {u'begin-time': u'2016-04-09T15:35:30.236Z', u'$resolved': True, u'title': u'nexto', u'end-date': u'2016-05-13T22:00:00.000Z', u'begin-date': u'2016-05-13T22:00:00.000Z', u'$promise': {}}}), ('new', True), ('upsert', True)]) on namespace eventman.$cmd failed: exception: The dollar ($) prefixed field '$promise' in '$promise' is not valid for storage. | priority | remove invalid keys to mongodb requests some key values especially the ones prepended by are no longer valid with mongodb leading to a server side exception this happens adding a new attendee another example editing a datetime setting the value is set but on console we get uncaught exception post events httpserverrequest protocol http host localhost method post uri events version http remote ip headers origin content length accept language en us en q it q accept encoding gzip deflate host localhost accept application json text plain user agent mozilla linux applewebkit khtml like gecko ubuntu chromium chrome safari dnt connection keep alive referer cookie org cups sid user user content type application json charset utf traceback most recent call last file usr lib dist packages tornado web py line in execute result yield result file usr lib dist packages tornado gen py line in run value future result file usr lib dist packages tornado concurrent py line in result raise exc info self exc info file usr lib dist packages tornado gen py line in wrapper result func args kwargs file eventman server py line in my wrapper return original wrapper self args kwargs file usr lib dist packages tornado web py line in wrapper return method self args kwargs file eventman server py line in post merged newdata self db update self collection id data file home da git eventman backend py line in update update operator data full response true new true upsert create file home da local lib 
site packages pymongo collection py line in find and modify kwargs file home da local lib site packages pymongo database py line in command uuid subtype compile re kwargs file home da local lib site packages pymongo database py line in command msg allowable errors file home da local lib site packages pymongo helpers py line in check command response raise operationfailure msg errmsg code response operationfailure command son on namespace eventman cmd failed exception the dollar prefixed field promise in promise is not valid for storage | 1 |
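The fix this record asks for, dropping keys that MongoDB refuses to store, such as the Angular-injected `$promise` and `$resolved` fields visible in the failing `findAndModify` command, can be sketched as a small recursive sanitizer run before the update. This is an illustrative sketch, not the project's actual patch; the helper name `strip_invalid_keys` is invented here:

```python
def strip_invalid_keys(value):
    """Recursively drop dict keys MongoDB rejects for storage:
    names starting with '$' (reserved for operators) or containing '.'.
    """
    if isinstance(value, dict):
        return {
            key: strip_invalid_keys(item)
            for key, item in value.items()
            if not (key.startswith("$") or "." in key)
        }
    if isinstance(value, list):
        return [strip_invalid_keys(item) for item in value]
    return value


# Payload shaped like the one in the failing command above.
data = {
    "title": "nexto",
    "$resolved": True,   # injected client-side, not meant for storage
    "$promise": {},      # this one triggered the OperationFailure
    "begin-date": "2016-05-13T22:00:00.000Z",
}

clean = strip_invalid_keys(data)
# `clean` can now be passed to find_and_modify()/update() safely.
```

Per the issue text, pre-2.6 MongoDB servers silently tolerated such keys, which is why the bug only surfaced after the server upgrade.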
679,087 | 23,220,829,619 | IssuesEvent | 2022-08-02 18:02:31 | kubernetes-sigs/kind | https://api.github.com/repos/kubernetes-sigs/kind | closed | ship stargz snapshotter | kind/feature priority/awaiting-more-evidence | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Consider shipping stargz/crfs support in containerd by default for faster image pulls with images that support it.
https://github.com/containerd/stargz-snapshotter
**Why is this needed**:
Potential user experience improvements. Faster CI.
cc @mattmoor @AkihiroSuda
NOTE: we're probably not shipping much more in the way of features in 2020, we've lost a lot of time to bug hunting and should probably focus on wrapping up a final 2020 release. I think we should track this anyhow.
Logistically the biggest issue is probably making sure we have builds for all the architectures and that they'll work with whatever arbitrary containerd versions we are using (to keep up with various fixes, a few times we've needed mitigations for quirks / Kubernetes @ HEAD).
For containerd itself we still have https://github.com/kind-ci/containerd-nightlies to help with this (containerd *has* nightlies, but they're difficult to programmatically access / predict for install).
We also have to consider the binary size, users don't appreciate larger kind images (nor do I), so we don't typically agree to ship additional binaries that aren't strictly necessary. | 1.0 | ship stargz snapshotter - <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Consider shipping stargz/crfs support in containerd by default for faster image pulls with images that support it.
https://github.com/containerd/stargz-snapshotter
**Why is this needed**:
Potential user experience improvements. Faster CI.
cc @mattmoor @AkihiroSuda
NOTE: we're probably not shipping much more in the way of features in 2020, we've lost a lot of time to bug hunting and should probably focus on wrapping up a final 2020 release. I think we should track this anyhow.
Logistically the biggest issue is probably making sure we have builds for all the architectures and that they'll work with whatever arbitrary containerd versions we are using (to keep up with various fixes, a few times we've needed mitigations for quirks / Kubernetes @ HEAD).
For containerd itself we still have https://github.com/kind-ci/containerd-nightlies to help with this (containerd *has* nightlies, but they're difficult to programmatically access / predict for install).
We also have to consider the binary size, users don't appreciate larger kind images (nor do I), so we don't typically agree to ship additional binaries that aren't strictly necessary. | priority | ship stargz snapshotter what would you like to be added consider shipping stargz crfs support in containerd by default for faster image pulls with images that support it why is this needed potential user experience improvements faster ci cc mattmoor akihirosuda note we re probably not shipping much more in the way of features in we ve lost a lot of time to bug hunting and should probably focus on wrapping up a final release i think we should track this anyhow logistically the biggest issue is probably making sure we have builds for all the architectures and that they ll work with whatever arbitrary containerd versions we are using to keep up with various fixes a few times we ve needed mitigations for quirks kubernetes head for containerd itself we still have to help with this containerd has nightlies but they re difficult to programmatically access predict for install we also have to consider the binary size users don t appreciate larger kind images nor do i so we don t typically agree to ship additional binaries that aren t strictly necessary | 1 |
86,680 | 10,787,752,863 | IssuesEvent | 2019-11-05 08:19:42 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Betting: Market list page - logged out | Design Epic Roadmap: Betting | A version of the default homepage for the betting UI for logged out users.
Problem:
* Help users understand what the product is and how it works
* Includes live markets on the page
| 1.0 | Betting: Market list page - logged out - A version of the default homepage for the betting UI for logged out users.
Problem:
* Help users understand what the product is and how it works
* Includes live markets on the page
| non_priority | betting market list page logged out a version of the default homepage for the betting ui for logged out users problem help users understand what the product is and how it works includes live markets on the page | 0 |
425,484 | 29,481,690,776 | IssuesEvent | 2023-06-02 06:24:41 | spack/spack | https://api.github.com/repos/spack/spack | closed | Docs won't build locally, spack.ci import error | bug macOS documentation triage | Tried building the docs locally for the first time in a long time and ran into an error.
### Steps to reproduce the issue
```console
$ cd lib/spack/docs
$ make html
```
### Error Message
```console
Warning, treated as error:
autodoc: failed to import module 'ci' from module 'spack'; the following exception was raised:
Traceback (most recent call last):
File "/Users/Adam/.spack/.spack-env/view/lib/python3.8/site-packages/sphinx/ext/autodoc/importer.py", line 66, in import_module
return importlib.import_module(modname)
File "/Users/Adam/.spack/.spack-env/view/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/Adam/spack/lib/spack/spack/ci.py", line 48, in <module>
spack_compiler = spack.main.SpackCommand('compiler')
File "/Users/Adam/spack/lib/spack/spack/main.py", line 522, in __init__
self.command = self.parser.add_command(command_name)
File "/Users/Adam/spack/lib/spack/spack/main.py", line 321, in add_command
module.setup_parser(subparser)
File "/Users/Adam/spack/lib/spack/spack/cmd/compiler.py", line 30, in setup_parser
scopes = spack.config.scopes()
File "/Users/Adam/spack/lib/spack/spack/config.py", line 911, in scopes
return config.scopes
File "/Users/Adam/spack/lib/spack/llnl/util/lang.py", line 762, in __getattr__
return getattr(self.instance, name)
File "/Users/Adam/spack/lib/spack/llnl/util/lang.py", line 761, in __getattr__
raise AttributeError()
AttributeError
make: *** [html] Error 2
```
### Information on your system
* **Spack:** 0.16.1-2558-ecb7d6dca1
* **Python:** 3.8.7
* **Platform:** darwin-catalina-ivybridge
* **Concretizer:** clingo
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have searched the issues of this repo and believe this is not a duplicate
- [x] I have run the failing commands in debug mode and reported the output
`git blame` says @scottwittenburg @opadron @scheibelp may have some idea.
| 1.0 | Docs won't build locally, spack.ci import error - Tried building the docs locally for the first time in a long time and ran into an error.
### Steps to reproduce the issue
```console
$ cd lib/spack/docs
$ make html
```
### Error Message
```console
Warning, treated as error:
autodoc: failed to import module 'ci' from module 'spack'; the following exception was raised:
Traceback (most recent call last):
File "/Users/Adam/.spack/.spack-env/view/lib/python3.8/site-packages/sphinx/ext/autodoc/importer.py", line 66, in import_module
return importlib.import_module(modname)
File "/Users/Adam/.spack/.spack-env/view/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/Adam/spack/lib/spack/spack/ci.py", line 48, in <module>
spack_compiler = spack.main.SpackCommand('compiler')
File "/Users/Adam/spack/lib/spack/spack/main.py", line 522, in __init__
self.command = self.parser.add_command(command_name)
File "/Users/Adam/spack/lib/spack/spack/main.py", line 321, in add_command
module.setup_parser(subparser)
File "/Users/Adam/spack/lib/spack/spack/cmd/compiler.py", line 30, in setup_parser
scopes = spack.config.scopes()
File "/Users/Adam/spack/lib/spack/spack/config.py", line 911, in scopes
return config.scopes
File "/Users/Adam/spack/lib/spack/llnl/util/lang.py", line 762, in __getattr__
return getattr(self.instance, name)
File "/Users/Adam/spack/lib/spack/llnl/util/lang.py", line 761, in __getattr__
raise AttributeError()
AttributeError
make: *** [html] Error 2
```
### Information on your system
* **Spack:** 0.16.1-2558-ecb7d6dca1
* **Python:** 3.8.7
* **Platform:** darwin-catalina-ivybridge
* **Concretizer:** clingo
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have searched the issues of this repo and believe this is not a duplicate
- [x] I have run the failing commands in debug mode and reported the output
`git blame` says @scottwittenburg @opadron @scheibelp may have some idea.
| non_priority | docs won t build locally spack ci import error tried building the docs locally for the first time in a long time and ran into an error steps to reproduce the issue console cd lib spack docs make html error message console warning treated as error autodoc failed to import module ci from module spack the following exception was raised traceback most recent call last file users adam spack spack env view lib site packages sphinx ext autodoc importer py line in import module return importlib import module modname file users adam spack spack env view lib importlib init py line in import module return bootstrap gcd import name package level file line in gcd import file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file users adam spack lib spack spack ci py line in spack compiler spack main spackcommand compiler file users adam spack lib spack spack main py line in init self command self parser add command command name file users adam spack lib spack spack main py line in add command module setup parser subparser file users adam spack lib spack spack cmd compiler py line in setup parser scopes spack config scopes file users adam spack lib spack spack config py line in scopes return config scopes file users adam spack lib spack llnl util lang py line in getattr return getattr self instance name file users adam spack lib spack llnl util lang py line in getattr raise attributeerror attributeerror make error information on your system spack python platform darwin catalina ivybridge concretizer clingo additional information i have run spack debug report and reported the version of spack python platform i have searched the issues of this repo and believe this is not a duplicate i have run the failing commands in debug mode and reported the output git blame says scottwittenburg opadron scheibelp may have some idea | 0 |
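The bottom of the traceback above, `lang.py:762 return getattr(self.instance, name)` followed by a bare `raise AttributeError()` on line 761, is the signature of a lazy singleton whose guarded `__getattr__` fires because the wrapped object cannot be constructed in the docs-build environment; Sphinx autodoc merely triggers it by importing `spack.ci`, which builds a `SpackCommand` at module import time (`ci.py:48`). A minimal, self-contained sketch of that failure mode (loosely modeled on the pattern, not Spack's actual code):

```python
class Singleton:
    """Lazy singleton wrapper, loosely modeled on llnl.util.lang.Singleton."""

    def __init__(self, factory):
        self.factory = factory
        self._instance = None

    @property
    def instance(self):
        if self._instance is None:
            self._instance = self.factory()
        return self._instance

    def __getattr__(self, name):
        if name == "instance":
            # Guard against infinite recursion when constructing the
            # instance itself fails; this bare, message-less raise is
            # what surfaces as the opaque `AttributeError` in the log.
            raise AttributeError()
        return getattr(self.instance, name)


def make_config():
    # Stand-in for building real configuration, which fails in a bare
    # docs-build environment (no Spack config scopes available).
    raise AttributeError("no configuration here")


config = Singleton(make_config)

try:
    config.scopes  # what spack.config.scopes() effectively does
except AttributeError:
    print("module import fails before autodoc can inspect anything")
```

A common workaround on the Sphinx side is to mock the heavy modules via `autodoc_mock_imports`; the cleaner fix is avoiding import-time side effects like the `SpackCommand('compiler')` call in the traceback.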
634,706 | 20,370,160,310 | IssuesEvent | 2022-02-21 10:25:48 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | closed | Socket Channel not working in rasa 3.0.0 | type:bug :bug: area:rasa-oss :ferris_wheel: priority:high effort:atom-squad/2 | ### Rasa Open Source version
3.0.0
### Rasa SDK version
3.0.0
### Rasa X version
_No response_
### Python version
3.8
### What operating system are you using?
Linux
### What happened?
Using the [rasa chat widget](https://rasa.com/docs/rasa/connectors/your-own-website/#chat-widget) to talk to the bot throws the following exception in rasa 3.0.0:
```
[2021-11-28 13:09:58 +0000] [4700] [ERROR] Exception occurred while handling uri: 'http://localhost:5005/socket.io/?EIO=4&transport=polling&t=Nrcgg60'
Traceback (most recent call last):
File "handle_request", line 83, in handle_request
class Sanic(BaseSanic, metaclass=TouchUpMeta):
File "/opt/venv/lib/python3.8/site-packages/engineio/asyncio_server.py", line 317, in handle_request
return await self._make_response(r, environ)
File "/opt/venv/lib/python3.8/site-packages/engineio/asyncio_server.py", line 385, in _make_response
response = make_response(
File "/opt/venv/lib/python3.8/site-packages/engineio/async_drivers/sanic.py", line 102, in make_response
return HTTPResponse(body=payload, content_type=content_type,
TypeError: 'NoneType' object is not callable
```
This did not happen in rasa 2.8.15.
This [repo](https://github.com/hsm207/rasa_moodbot/tree/chat_widget) contains a reproducible example.
### Command / Request
_No response_
### Relevant log output
_No response_
**Definition of done**
- [ ] Determine if the cause is in Rasa Open Source 3.0
- [ ] If the problem is in `Rasa Chat Widget`, update this issue and move to `Fabric` backlog
- [ ] Try to add test for this? | 1.0 | Socket Channel not working in rasa 3.0.0 - ### Rasa Open Source version
3.0.0
### Rasa SDK version
3.0.0
### Rasa X version
_No response_
### Python version
3.8
### What operating system are you using?
Linux
### What happened?
Using the [rasa chat widget](https://rasa.com/docs/rasa/connectors/your-own-website/#chat-widget) to talk to the bot throws the following exception in rasa 3.0.0:
```
[2021-11-28 13:09:58 +0000] [4700] [ERROR] Exception occurred while handling uri: 'http://localhost:5005/socket.io/?EIO=4&transport=polling&t=Nrcgg60'
Traceback (most recent call last):
File "handle_request", line 83, in handle_request
class Sanic(BaseSanic, metaclass=TouchUpMeta):
File "/opt/venv/lib/python3.8/site-packages/engineio/asyncio_server.py", line 317, in handle_request
return await self._make_response(r, environ)
File "/opt/venv/lib/python3.8/site-packages/engineio/asyncio_server.py", line 385, in _make_response
response = make_response(
File "/opt/venv/lib/python3.8/site-packages/engineio/async_drivers/sanic.py", line 102, in make_response
return HTTPResponse(body=payload, content_type=content_type,
TypeError: 'NoneType' object is not callable
```
This did not happen in rasa 2.8.15.
This [repo](https://github.com/hsm207/rasa_moodbot/tree/chat_widget) contains a reproducible example.
### Command / Request
_No response_
### Relevant log output
_No response_
**Definition of done**
- [ ] Determine if the cause is in Rasa Open Source 3.0
- [ ] If the problem is in `Rasa Chat Widget`, update this issue and move to `Fabric` backlog
- [ ] Try to add test for this? | priority | socket channel not working in rasa rasa open source version rasa sdk version rasa x version no response python version what operating system are you using linux what happened using the to talk to the bot throws the following exception in rasa exception occurred while handling uri traceback most recent call last file handle request line in handle request class sanic basesanic metaclass touchupmeta file opt venv lib site packages engineio asyncio server py line in handle request return await self make response r environ file opt venv lib site packages engineio asyncio server py line in make response response make response file opt venv lib site packages engineio async drivers sanic py line in make response return httpresponse body payload content type content type typeerror nonetype object is not callable this did not happen in rasa this contains a reproducible example command request no response relevant log output no response definition of done determine if the cause is in rasa open source if the problem is in rasa chat widget update this issue and move to fabric backlog try to add test for this | 1 |
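The crash at `engineio/async_drivers/sanic.py:102`, `TypeError: 'NoneType' object is not callable` on `HTTPResponse(...)`, is the classic symptom of an optional import that silently falls back to `None` and only blows up at call time (here, engineio evidently could not obtain a compatible `HTTPResponse` from the installed Sanic). Below is a self-contained sketch of that failure mode plus a fail-fast alternative; the module name in the `try` block is hypothetical, so the fallback branch always runs:

```python
try:
    # Hypothetical optional dependency; the import always fails here,
    # standing in for a missing/incompatible sanic.response.HTTPResponse.
    from hypothetical_web_framework.response import HTTPResponse
except ImportError:
    HTTPResponse = None  # silent fallback -- the root of the later crash


def make_response(payload, content_type):
    # With the fallback in place this line raises:
    #   TypeError: 'NoneType' object is not callable
    return HTTPResponse(body=payload, content_type=content_type)


def make_response_fail_fast(payload, content_type):
    # Friendlier variant: surface the real problem immediately.
    if HTTPResponse is None:
        raise RuntimeError(
            "no compatible HTTP response class available; "
            "check the installed web framework version"
        )
    return HTTPResponse(body=payload, content_type=content_type)


try:
    make_response(b"hello", "text/plain")
except TypeError as exc:
    print(exc)  # 'NoneType' object is not callable
```

That the report only surfaced after upgrading to Rasa 3.0 (it worked on 2.8.15) suggests a version mismatch between the installed Sanic and python-engineio rather than a bug in the chat widget itself.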
71,636 | 3,366,108,931 | IssuesEvent | 2015-11-21 03:06:17 | mmisw/mmiorr | https://api.github.com/repos/mmisw/mmiorr | closed | VINE check-boxes can be jump-boxes | bug CannotReproduce Component-UI imported OpSys-OSX Priority-Medium vine xdomes | _From [grayb...@marinemetadata.org](https://code.google.com/u/100453653739559568016/) on April 01, 2013 05:32:23_
What steps will reproduce the problem?
1. Go to perform a mapping in VINE. Create a mapping or two.
2. (Possibly have to do some intermediate step before this behavior occurs)
3. Check on a box to the left of one of the mappings.
What is the expected output/behavior?
Box should become checked.
What do you see instead?
Window jumps up to a higher part of the page, and box remains unchecked.
"Select All" makes the boxes active again.
What version of the product are you using? (You can see the version in the lower left corner of the ORR page.)
ORR Portal 2.0.42.beta (201302221022)
If relevant, please attach the file you were working on (for submission of new ontology, text for import in the vocabulary interface, etc.).
If possible and relevant, please provide a screen shot showing the problem. Please provide information about your browser and version (Firefox, Safari, Chrome, IE, etc.), operating system, etc.
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=312_ | 1.0 | VINE check-boxes can be jump-boxes - _From [grayb...@marinemetadata.org](https://code.google.com/u/100453653739559568016/) on April 01, 2013 05:32:23_
What steps will reproduce the problem?
1. Go to perform a mapping in VINE. Create a mapping or two.
2. (Possibly have to do some intermediate step before this behavior occurs)
3. Check on a box to the left of one of the mappings.
What is the expected output/behavior?
Box should become checked.
What do you see instead?
Window jumps up to a higher part of the page, and box remains unchecked.
"Select All" makes the boxes active again.
What version of the product are you using? (You can see the version in the lower left corner of the ORR page.)
ORR Portal 2.0.42.beta (201302221022)
If relevant, please attach the file you were working on (for submission of new ontology, text for import in the vocabulary interface, etc.).
If possible and relevant, please provide a screen shot showing the problem. Please provide information about your browser and version (Firefox, Safari, Chrome, IE, etc.), operating system, etc.
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=312_ | priority | vine check boxes can be jump boxes from on april what steps will reproduce the problem go to perform a mapping in vine create a mapping or two possibly have to do some intermediate step before this behavior occurs check on a box to the left of one of the mappings what is the expected output behavior box should become checked what do you see instead window jumps up to a higher part of the page and box remains unchecked select all makes the boxes active again what version of the product are you using you can see the version in the lower left corner of the orr page orr portal beta if relevant please attach the file you were working on for submission of new ontology text for import in the vocabulary interface etc if possible and relevant please provide a screen shot showing the problem please provide information about your browser and version firefox safari chrome ie etc operating system etc original issue | 1 |
384,497 | 26,588,207,321 | IssuesEvent | 2023-01-23 05:09:59 | gqlc/gqlc | https://api.github.com/repos/gqlc/gqlc | closed | story(contrib): add user story issue template | documentation | To help keep contribution efforts a bit more contained, gqlc will be adopting the practice of writing all issues as "user stories", aka we're moving to agile open source.
| 1.0 | story(contrib): add user story issue template - To help keep contribution efforts a bit more contained, gqlc will be adopting the practice of writing all issues as "user stories", aka we're moving to agile open source.
| non_priority | story contrib add user story issue template to help make contribution effort a bit more contained gqlc will be adopting to write all issues as user stories aka we re moving to agile open source | 0 |
326 | 3,104,298,458 | IssuesEvent | 2015-08-31 15:10:25 | radare/radare2 | https://api.github.com/repos/radare/radare2 | closed | Integrate/Re-implement Objective-C class-dump parser | architecture file-format | Usable from debugger, disk ,... (class-dump tool is GPL, so we cant integrate it in core, parsing the output is somewhat painful and ugly. | 1.0 | Integrate/Re-implement Objective-C class-dump parser - Usable from debugger, disk ,... (class-dump tool is GPL, so we cant integrate it in core, parsing the output is somewhat painful and ugly. | non_priority | integrate re implement objective c class dump parser usable from debugger disk class dump tool is gpl so we cant integrate it in core parsing the output is somewhat painful and ugly | 0 |
48,249 | 13,067,567,651 | IssuesEvent | 2020-07-31 00:52:38 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [icetray] Remove ROOT version from I3TrayInfo (Trac #2057) | Migrated from Trac combo core defect | icetray project includes ROOT as a tool for the sole purpose of saving the version string in I3TrayInfo. TrayInfos are so rarely used and icerec is almost completely de-ROOT-ified. this is small, but I don't think anyone ever uses this information
Migrated from https://code.icecube.wisc.edu/ticket/2057
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:38",
"description": "icetray project includes ROOT as a tool for the sole purpose of saving the version string in I3TrayInfo. TrayInfos are so rarely used and icerec is almost completely de-ROOT-ified. this is small, but I don't think anyone ever uses this information ",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067278746682",
"component": "combo core",
"summary": "[icetray] Remove ROOT version from I3TrayInfo",
"priority": "normal",
"keywords": "",
"time": "2017-07-28T11:44:57",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
| 1.0 | [icetray] Remove ROOT version from I3TrayInfo (Trac #2057) - icetray project includes ROOT as a tool for the sole purpose of saving the version string in I3TrayInfo. TrayInfos are so rarely used and icerec is almost completely de-ROOT-ified. this is small, but I don't think anyone ever uses this information
Migrated from https://code.icecube.wisc.edu/ticket/2057
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:38",
"description": "icetray project includes ROOT as a tool for the sole purpose of saving the version string in I3TrayInfo. TrayInfos are so rarely used and icerec is almost completely de-ROOT-ified. this is small, but I don't think anyone ever uses this information ",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067278746682",
"component": "combo core",
"summary": "[icetray] Remove ROOT version from I3TrayInfo",
"priority": "normal",
"keywords": "",
"time": "2017-07-28T11:44:57",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
| non_priority | remove root version from trac icetray project includes root as a tool for the sole purpose of saving the version string in trayinfos are so rarely used and icerec is almost completely de root ified this is small but i don t think anyone ever uses this information migrated from json status closed changetime description icetray project includes root as a tool for the sole purpose of saving the version string in trayinfos are so rarely used and icerec is almost completely de root ified this is small but i don t think anyone ever uses this information reporter kjmeagher cc resolution fixed ts component combo core summary remove root version from priority normal keywords time milestone owner kjmeagher type defect | 0 |
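Since the migrated ticket metadata in the record above is embedded as plain JSON, such records can be processed mechanically, e.g. when triaging many migrated Trac tickets at once. A small sketch, with field names taken from the blob above:

```python
import json

ticket_blob = """
{
    "status": "closed",
    "resolution": "fixed",
    "component": "combo core",
    "summary": "[icetray] Remove ROOT version from I3TrayInfo",
    "owner": "kjmeagher",
    "type": "defect"
}
"""

ticket = json.loads(ticket_blob)

# One-line triage summary for the migrated ticket.
line = f"{ticket['summary']} [{ticket['component']}] -> {ticket['status']}/{ticket['resolution']}"
print(line)  # [icetray] Remove ROOT version from I3TrayInfo [combo core] -> closed/fixed
```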
635,913 | 20,513,685,147 | IssuesEvent | 2022-03-01 09:33:33 | loelschlaeger/fHMM | https://api.github.com/repos/loelschlaeger/fHMM | closed | `coef` function for model coefficients | high priority | Implement `coef` method to extract estimated model coefficients.
| 1.0 | `coef` function for model coefficients - Implement `coef` method to extract estimated model coefficients.
| priority | coef function for model coefficients implement coef method to extract estimated model coefficients | 1 |
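fHMM is an R package, so the requested `coef` is presumably an S3 method on the fitted-model object; the accessor pattern it describes is language-agnostic, though. A toy sketch in Python, with the class and field names invented purely for illustration:

```python
class FittedHMM:
    """Toy stand-in for a fitted hidden Markov model object."""

    def __init__(self, estimates):
        # e.g. state-dependent means/sds and transition parameters
        self._estimates = dict(estimates)

    def coef(self):
        # Accessor in the spirit of R's `coef()` generic: hand back the
        # estimated coefficients without exposing internal storage.
        return dict(self._estimates)


model = FittedHMM({"mu_1": -0.012, "mu_2": 0.021, "sigma_1": 0.104})
print(sorted(model.coef()))  # ['mu_1', 'mu_2', 'sigma_1']
```

Returning a copy rather than the internal dict keeps callers from mutating the fitted model through the accessor.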
395,943 | 11,698,911,409 | IssuesEvent | 2020-03-06 14:44:13 | mostafaboustani/Soen-341 | https://api.github.com/repos/mostafaboustani/Soen-341 | closed | * Upload Picture | Fullstack priority | The user should have a button on the user's profile page to upload a picture. This picture will be saved in MongoDB and associated with their account for further reference. | 1.0 | * Upload Picture - The user should have a button on the user's profile page to upload a picture. This picture will be saved in MongoDB and associated with their account for further reference. | priority | upload picture the user should have a button on the user s profile page to upload a picture this picture will be saved in mongodb and associated with their account for further reference | 1 |
173,625 | 6,528,883,884 | IssuesEvent | 2017-08-30 09:20:37 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | test: rpmem_addr_ext/TEST0 fails | Exposure: Low OS: Linux Priority: 4 low Type: Bug | Found on 4cefd7357ccce2ef0b64a1b7ce3c1a878d8afffb
> rpmem_addr_ext/TEST0: SETUP (all/pmem/nondebug/sockets/GPSPM)
> rpmem_addr_ext/TEST0: START: rpmem_addr_ext
> [MATCHING FAILED, COMPLETE FILE (node_1_rpmem0.log) BELOW]
> <librpmem>: <1> [out.c:263 out_init] pid 21380: program: /tmp/node1/test_rpmem_addr_ext/rpmem_addr_ext
> <librpmem>: <1> [out.c:265 out_init] librpmem version 1.1
> <librpmem>: <1> [out.c:269 out_init] src version: 1.3+b1-33-g4cefd73
> <librpmem>: <1> [out.c:277 out_init] compiled with support for Valgrind pmemcheck
> <librpmem>: <1> [out.c:282 out_init] compiled with support for Valgrind helgrind
> <librpmem>: <1> [out.c:287 out_init] compiled with support for Valgrind memcheck
> <librpmem>: <1> [out.c:292 out_init] compiled with support for Valgrind drd
> <librpmem>: <3> [librpmem.c:63 librpmem_init]
> <librpmem>: <3> [librpmem.c:68 librpmem_init] Libfabric is fork safe
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user 192.168.0.182 22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user 192.168.0.182 22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user 192.168.0.182 22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user 192.168.0.182 22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user 192.168.0.182 22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user 192.168.0.182 22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user 192.168.0.182 22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user 192.168.0.182 22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user 192.168.0.182 22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user 192.168.0.182 22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user:192.168.0.182::22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user:192.168.0.182::22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user:192.168.0.182::22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user:192.168.0.182::22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user:192.168.0.182::22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user:192.168.0.182::22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user:192.168.0.182::22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user:192.168.0.182::22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user:192.168.0.182::22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user:192.168.0.182::22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user::192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user::192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user::192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user::192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user::192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user::192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user::192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user::192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user::192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user::192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@192.168.0.18222, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@192.168.0.18222, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.18222
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.18222
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.18222
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@192.168.0.18222, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@192.168.0.18222, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.18222
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.18222
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.18222
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@192.168.0.182@22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@192.168.0.182@22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.182@22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.182@22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182@22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@192.168.0.182@22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@192.168.0.182@22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.182@22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.182@22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182@22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user:192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user:192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user|192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user|192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182|22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182|22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node .192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node .192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node :192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node :192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node @192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node @192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [librpmem.c:80 librpmem_fini]
>
> [EOF]
> node_1_rpmem0.log.match:1 <librpmem>: <1> [$(*)] $(*)
> node_1_rpmem0.log:1 <librpmem>: <1> [out.c:263 out_init] pid 21380: program: /tmp/node1/test_rpmem_addr_ext/rpmem_addr_ext
> node_1_rpmem0.log.match:2 <librpmem>: <1> [$(*)] librpmem version $(nW)
> node_1_rpmem0.log:2 <librpmem>: <1> [out.c:265 out_init] librpmem version 1.1
> node_1_rpmem0.log.match:3 <librpmem>: <1> [$(*)] src version: $(nW)
> node_1_rpmem0.log:3 <librpmem>: <1> [out.c:269 out_init] src version: 1.3+b1-33-g4cefd73
> node_1_rpmem0.log.match:4 <librpmem>: <3> [$(*)]
> node_1_rpmem0.log:4 <librpmem>: <1> [out.c:277 out_init] compiled with support for Valgrind pmemcheck
> FAIL: match: node_1_rpmem0.log.match:4 did not match pattern
> RUNTESTS: stopping: rpmem_addr_ext/TEST0 failed, TEST=all FS=any BUILD=nondebug RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182@22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user:192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user:192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user|192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user|192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user|192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node user|192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182|22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@192.168.0.182|22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@192.168.0.182|22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node 192.168.0.182|22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node .192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@.192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@.192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node .192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node :192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@:192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@:192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node :192.168.0.182:22
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:454 rpmem_create] target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req create, target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] create request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node @192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [rpmem.c:524 rpmem_open] target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c, create_attr 0x7ffdee670240
> <librpmem>: <3> [rpmem.c:361 rpmem_log_args] req open, target user@@192.168.0.182:22, pool_set_name invalid.poolset, pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 32
> <librpmem>: <3> [rpmem.c:363 rpmem_log_args] open request:
> <librpmem>: <3> [rpmem.c:364 rpmem_log_args] target: user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:365 rpmem_log_args] pool set: invalid.poolset
> <librpmem>: <3> [rpmem.c:368 rpmem_log_args] nlanes: 32
> <librpmem>: <3> [rpmem.c:394 rpmem_check_args] pool_addr 0x7f707c52d000, pool_size 8388608, nlanes 0x7ffdee67022c
> <librpmem>: <3> [rpmem.c:196 rpmem_common_init] target user@@192.168.0.182:22
> <librpmem>: <3> [rpmem.c:133 rpmem_get_provider] node @192.168.0.182
> <librpmem>: <1> [rpmem.c:215 rpmem_common_init] cannot find provider
> <librpmem>: <3> [librpmem.c:80 librpmem_fini]
>
> [EOF]
> node_1_rpmem0.log.match:1 <librpmem>: <1> [$(*)] $(*)
> node_1_rpmem0.log:1 <librpmem>: <1> [out.c:263 out_init] pid 21380: program: /tmp/node1/test_rpmem_addr_ext/rpmem_addr_ext
> node_1_rpmem0.log.match:2 <librpmem>: <1> [$(*)] librpmem version $(nW)
> node_1_rpmem0.log:2 <librpmem>: <1> [out.c:265 out_init] librpmem version 1.1
> node_1_rpmem0.log.match:3 <librpmem>: <1> [$(*)] src version: $(nW)
> node_1_rpmem0.log:3 <librpmem>: <1> [out.c:269 out_init] src version: 1.3+b1-33-g4cefd73
> node_1_rpmem0.log.match:4 <librpmem>: <3> [$(*)]
> node_1_rpmem0.log:4 <librpmem>: <1> [out.c:277 out_init] compiled with support for Valgrind pmemcheck
> FAIL: match: node_1_rpmem0.log.match:4 did not match pattern
> RUNTESTS: stopping: rpmem_addr_ext/TEST0 failed, TEST=all FS=any BUILD=nondebug RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM | priority | test rpmem addr ext fails found on rpmem addr ext setup all pmem nondebug sockets gpspm rpmem addr ext start rpmem addr ext pid program tmp test rpmem addr ext rpmem addr ext librpmem version src version compiled with support for valgrind pmemcheck compiled with support for valgrind helgrind compiled with support for valgrind memcheck compiled with support for valgrind drd libfabric is fork safe target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name 
invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name 
invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node user cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name 
invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req create target user pool set name invalid poolset pool addr pool size nlanes create request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider target user pool set name invalid poolset pool addr pool size nlanes create attr req open target user pool set name invalid poolset pool addr pool size nlanes open request target user pool set invalid poolset nlanes pool addr pool size nlanes target user node cannot find provider node log match node log pid program tmp test rpmem addr ext rpmem addr ext node log match librpmem version nw node log librpmem version node log match src version nw node log src version node log match node log compiled with support for valgrind pmemcheck fail match node log match did not match pattern runtests stopping rpmem addr ext failed test all fs any build nondebug rpmem provider sockets rpmem pm gpspm | 1 |
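Every target string exercised in the log above malforms the `user@host:port` syntax in some way (doubled separators, a stray `|`, an empty or malformed host), so `rpmem_get_provider` cannot resolve a node. A minimal sketch of that kind of validation in Python — illustrative only, this is not librpmem's actual parser — might look like:

```python
import re

# Accepts targets of the form [user@]ipv4[:port], e.g. "user@192.168.0.182:22".
# Hypothetical helper for illustration; librpmem's real parsing differs.
_TARGET_RE = re.compile(
    r"^(?:[A-Za-z0-9._-]+@)?"      # optional user part, exactly one '@'
    r"((?:\d{1,3}\.){3}\d{1,3})"   # IPv4 host (captured as group 1)
    r"(?::\d{1,5})?$"              # optional :port
)

def is_valid_target(target: str) -> bool:
    m = _TARGET_RE.match(target)
    if not m:
        return False
    # every IPv4 octet must be in 0..255
    return all(0 <= int(octet) <= 255 for octet in m.group(1).split("."))
```

All of the strings rejected in the log (`user::…`, `user@192.168.0.18222`, `user@…@22`, `user@@…`, the `|` variants, and so on) fail this check, while a well-formed `user@192.168.0.182:22` passes.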
2,718 | 8,211,851,502 | IssuesEvent | 2018-09-04 14:49:24 | mitmedialab/MediaCloud-Web-Tools | https://api.github.com/repos/mitmedialab/MediaCloud-Web-Tools | closed | button font too thin on safari | architecture bug | Must be some weird font thing with the new Material UI. The weight is 200, but it shows up pencil thin. If I bump it to 201 then it is bolded nicely. Let's change this to 300 so buttons are more readable (and check that it doesn't make Chrome look terrible).
Safari:
<img width="596" alt="media_cloud" src="https://user-images.githubusercontent.com/673178/45030981-9b3ef480-b01b-11e8-8c10-dbb0ee0af7bc.png">
Chrome:
<img width="529" alt="media_cloud" src="https://user-images.githubusercontent.com/673178/45031012-ad209780-b01b-11e8-8691-8a3f18d6e83f.png">
| 1.0 | button font too thin on safari - Must be some weird font thing with the new Material UI. The weight is 200, but it shows up pencil thin. If I bump it to 201 then it is bolded nicely. Let's change this to 300 so buttons are more readable (and check that it doesn't make Chrome look terrible).
Safari:
<img width="596" alt="media_cloud" src="https://user-images.githubusercontent.com/673178/45030981-9b3ef480-b01b-11e8-8c10-dbb0ee0af7bc.png">
Chrome:
<img width="529" alt="media_cloud" src="https://user-images.githubusercontent.com/673178/45031012-ad209780-b01b-11e8-8691-8a3f18d6e83f.png">
| non_priority | button font too thin on safari must be some weird font thing with the new material ui the weight is but it shows up pencil thin if i bump it to then it is bolded nicely lets change this to so button are more readable and check that it doesn t make chrome look terrible safari img width alt media cloud src chrome img width alt media cloud src | 0 |
377,071 | 11,162,976,953 | IssuesEvent | 2019-12-26 20:08:19 | dhenry-KCI/FredCo-Post-Go-Live- | https://api.github.com/repos/dhenry-KCI/FredCo-Post-Go-Live- | opened | Condition on Resuse | Medium Priority | This resuse permit # 202838 has a condition that the res building permit must be issued but there is no resbldg permit attached??

| 1.0 | Condition on Resuse - This resuse permit # 202838 has a condition that the res building permit must be issued but there is no resbldg permit attached??

| priority | condition on resuse this resuse permit has a condition that the res building permit must be issued but there is no resbldg permit attached | 1 |
155,081 | 24,398,210,790 | IssuesEvent | 2022-10-04 21:27:55 | MetaMask/metamask-extension | https://api.github.com/repos/MetaMask/metamask-extension | reopened | [Ext Nav] Create component: ButtonBase | area-UI design-system IA/NAV | ### Description
Create a reusable UI component for `ButtonBase`
### References
[Figma component](https://www.figma.com/file/HKpPKij9V3TpsyMV1TpV7C/Design-System-2.0?node-id=1287%3A11481)
[UI component guidelines](https://www.notion.so/UI-Components-5ccd2bf83eb8441892a0b72c0d8929e1)
[Styling guidelines](https://www.notion.so/Extension-Frontend-Engineering-Guide-b82ddb3e14004b5db3799a9b446294a9#49aa2ef7b7aa40178ff918043df71c34)
Testing guidelines: TBC
### Files needed
- `index.js`
- `index.scss`
- `button-base.js`
- `button-base.stories.js`
- `button-base.test.js`
- `README.mdx`
### Technical details
TBC
### Acceptance criteria
- [ ] Uses color, typography, shadows design tokens
- [ ] Uses semantic html
- [ ] PropTypes have descriptions
- [ ] Has storybook story with controls
- [ ] Has documentation in MDX
- [ ] Has unit tests and 90% coverage
- [ ] Works in Chrome and Firefox
- [ ] Performance tested: no unnecessary re-renders or other performance concerns | 1.0 | [Ext Nav] Create component: ButtonBase - ### Description
Create a reusable UI component for `ButtonBase`
### References
[Figma component](https://www.figma.com/file/HKpPKij9V3TpsyMV1TpV7C/Design-System-2.0?node-id=1287%3A11481)
[UI component guidelines](https://www.notion.so/UI-Components-5ccd2bf83eb8441892a0b72c0d8929e1)
[Styling guidelines](https://www.notion.so/Extension-Frontend-Engineering-Guide-b82ddb3e14004b5db3799a9b446294a9#49aa2ef7b7aa40178ff918043df71c34)
Testing guidelines: TBC
### Files needed
- `index.js`
- `index.scss`
- `button-base.js`
- `button-base.stories.js`
- `button-base.test.js`
- `README.mdx`
### Technical details
TBC
### Acceptance criteria
- [ ] Uses color, typography, shadows design tokens
- [ ] Uses semantic html
- [ ] PropTypes have descriptions
- [ ] Has storybook story with controls
- [ ] Has documentation in MDX
- [ ] Has unit tests and 90% coverage
- [ ] Works in Chrome and Firefox
- [ ] Performance tested: no unnecessary re-renders or other performance concerns | non_priority | create component buttonbase description create a reusable ui component for buttonbase references testing guidelines tbc files needed index js index scss button base js button base stories js button base test js readme mdx technical details tbc acceptance criteria uses color typography shadows design tokens uses semantic html proptypes have descriptions has storybook story with controls has documentation in mdx has unit tests and coverage works in chrome and firefox performance tested no unnecessary re renders or other performance concerns | 0 |
284,739 | 8,749,959,660 | IssuesEvent | 2018-12-13 17:46:32 | apifytech/apify-js | https://api.github.com/repos/apifytech/apify-js | closed | Reuse of tabs in PuppeteerPool | enhancement high priority | Reuse of tabs could reduce CPU usage.
We can add a Boolean option for this, but by default I would reuse tabs. We should discuss the potential use cases where reusing is not desired.
We don't need any new counter as this would just increase the number of usages of each browser limited by `retireInstanceAfterRequestCount`. | 1.0 | Reuse of tabs in PuppeteerPool - Reuse of tabs could reduce CPU usage.
We can add a Boolean option for this, but by default I would reuse tabs. We should discuss the potential use cases where reusing is not desired.
We don't need any new counter as this would just increase the number of usages of each browser limited by `retireInstanceAfterRequestCount`. | priority | reuse of tabs in puppeteerpool reuse of tabs could reduce cpu usage we can add boolean for this but by default i would be reusing tabs but we should discuss the potential use cases where reusing is not desired we don t need any new counter as this would just increase the number of usages of each browser limited by retireinstanceafterrequestcount | 1 |
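The reuse-with-retirement idea in this issue can be illustrated with a small generic pool — a hypothetical sketch in Python, not Apify's actual JavaScript implementation: each instance is handed back out until its usage count reaches a `retire_after` limit analogous to `retireInstanceAfterRequestCount`, at which point a fresh one is opened.

```python
class ReusableTabPool:
    """Illustrative sketch of reusing a tab until a per-instance request limit.

    Not Apify's implementation: `open_tab` stands in for launching a real
    browser tab, and retirement simply drops the old instance.
    """

    def __init__(self, open_tab, retire_after):
        self._open_tab = open_tab      # factory for a new tab/browser instance
        self._retire_after = retire_after
        self._tab = None
        self._uses = 0
        self.opened = 0                # how many instances were ever opened

    def acquire(self):
        # Reuse the current tab until it has served `retire_after` requests.
        if self._tab is None or self._uses >= self._retire_after:
            self._tab = self._open_tab()
            self._uses = 0
            self.opened += 1
        self._uses += 1
        return self._tab
```

With `retire_after=3`, ten `acquire()` calls open only four instances instead of ten, which is the CPU saving the issue is after — and no new counter is needed beyond the per-instance usage count.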
71,288 | 3,355,044,911 | IssuesEvent | 2015-11-18 15:02:15 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | closed | Framework Build Failure | affected: 2.4 auto-transferred bug category: ios priority: normal | Transferred from http://code.opencv.org/issues/3268
```
|| Rick Tschudin on 2013-09-17 17:42
|| Priority: Normal
|| Affected: 2.4.6 (latest release)
|| Category: ios
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: ARM / Mac OSX
```
Framework Build Failure
-----------
```
Mac Terminal:
<pre> 2013-09-17
1. cd ~/<my_working _directory>
2. git clone https://github.com/Itseez/opencv.git
3. sudo port selfupdate
4 sudo port install opencv
5. /Users/ricktschudin/Developer
6. sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
7. python opencv/platforms/ios/build_framework.py ios
...
-- Looking for linux/videodev.h - not found
...
jmemansi.o (No such file or directory)
...
libtool failed with exit code 1
</pre>
Help!
I've used an OpenCV framework in another app and it worked,
and I am trying to use it again but it fails to work.
So I am trying to build a new framework.
```
History
-------
##### Anna Kogan on 2013-10-03 08:06
```
Hello Rick,
Thank you for reporting the issue. If you could figure out how it could be fixed, a "contribution":http://www.code.opencv.org/projects/opencv/wiki/How_to_contribute would be greatly appreciated!
- Description changed from Mac Terminal: 2013-09-17 1. cd
~/<my_worki... to Mac Terminal: <pre> 2013-09-17 1. cd ~/... More
- Status changed from New to Open
``` | 1.0 | Framework Build Failure - Transferred from http://code.opencv.org/issues/3268
```
|| Rick Tschudin on 2013-09-17 17:42
|| Priority: Normal
|| Affected: 2.4.6 (latest release)
|| Category: ios
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: ARM / Mac OSX
```
Framework Build Failure
-----------
```
Mac Terminal:
<pre> 2013-09-17
1. cd ~/<my_working _directory>
2. git clone https://github.com/Itseez/opencv.git
3. sudo port selfupdate
4 sudo port install opencv
5. /Users/ricktschudin/Developer
6. sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
7. python opencv/platforms/ios/build_framework.py ios
...
-- Looking for linux/videodev.h - not found
...
jmemansi.o (No such file or directory)
...
libtool failed with exit code 1
</pre>
Help!
I've used an OpenCV framework in another app and it worked,
and I am trying to use it again but it fails to work.
So I am trying to build a new framework.
```
History
-------
##### Anna Kogan on 2013-10-03 08:06
```
Hello Rick,
Thank you for reporting the issue. If you could figure out how it could be fixed, a "contribution":http://www.code.opencv.org/projects/opencv/wiki/How_to_contribute would be greatly appreciated!
- Description changed from Mac Terminal: 2013-09-17 1. cd
~/<my_worki... to Mac Terminal: <pre> 2013-09-17 1. cd ~/... More
- Status changed from New to Open
``` | priority | framework build failure transferred from rick tschudin on priority normal affected latest release category ios tracker bug difficulty pr platform arm mac osx framework build failure mac terminal cd git clone sudo port selfupdate sudo port install opencv users ricktschudin developer sudo ln s applications xcode app contents developer developer python opencv platforms ios build framework py ios looking for linux videodev h not found jmemansi o no such file or directory libtool failed with exit code help i ve used a opencv framework in another app and it worked and i am trying to use it again but it fails to work so i am trying to build a new framework history anna kogan on hello rick thank you for reporting the issue if you could figure out how it could be fixed a contribution would be very appreciated description changed from mac terminal cd cd more status changed from new to open | 1 |
243,336 | 18,683,229,851 | IssuesEvent | 2021-11-01 09:06:01 | Picovoice/picovoice | https://api.github.com/repos/Picovoice/picovoice | opened | Picovoice Documentation Issue: Limitations of Personal Acounts | documentation | ### What is the URL of the doc?
https://picovoice.ai/pricing/
https://github.com/Picovoice/rhino/issues/65
### What's the nature of the issue?
I have a non-commercial project in which I am tasked with creating a working product which will be used by individuals. The product I'm tasked to create is a mechanical hand using a raspberry pi that will open and close once someone says "open" or "close". Before the speech command there should be a 'wake word' whose task is to trigger the speech recognition process; this is needed in order to reduce the amount of false positives and reduce the CPU usage, which will reduce battery consumption. Thus, your software was great for my needs. Nevertheless, the personal license which is for non-commercial purposes has a few limitations which I only became aware of after spending a day of work on making your product work:
1. Apparently, you only have a 30-day expiration date on rhino and porcupine files generated via the console.
2. The porcupine only allows for popular operating systems such as linux, mac and windows, but is restricted for less common operating systems such as that of the raspberry pi.
These limitations (and there might be others I'm not aware of), disallow me to use your software for my project to my sorrow. If you could provide much more clarity on this topic upfront it would waste much less time and energy, and hopefully remove these restrictions for non-commercial usages. | 1.0 | Picovoice Documentation Issue: Limitations of Personal Acounts - ### What is the URL of the doc?
https://picovoice.ai/pricing/
https://github.com/Picovoice/rhino/issues/65
### What's the nature of the issue?
I have a non-commercial project in which I am tasked with creating a working product which will be used by individuals. The product I'm tasked to create is a mechanical hand using a raspberry pi that will open and close once someone says "open" or "close". Before the speech command there should be a 'wake word' whose task is to trigger the speech recognition process; this is needed in order to reduce the amount of false positives and reduce the CPU usage, which will reduce battery consumption. Thus, your software was great for my needs. Nevertheless, the personal license which is for non-commercial purposes has a few limitations which I only became aware of after spending a day of work on making your product work:
1. Apparently, you only have a 30-day expiration date on rhino and porcupine files generated via the console.
2. The porcupine only allows for popular operating systems such as linux, mac and windows, but is restricted for less common operating systems such as that of the raspberry pi.
These limitations (and there might be others I'm not aware of), disallow me to use your software for my project to my sorrow. If you could provide much more clarity on this topic upfront it would waste much less time and energy, and hopefully remove these restrictions for non-commercial usages. | non_priority | picovoice documentation issue limitations of personal acounts what is the url of the doc what s the nature of the issue i have a non commercial project in which i am tasked with creating a working product which will be used by individuals the product i m tasked to create is a mechanical hand using a raspberry pi that will open and close once someone says open or close before the speech command there should be a wake word whose task is to trigger the speech recognition process this is needed in order to reduce the amount of false positives and reduce the cpu usage which will reduce battery consumption thus your software was great for my needs nevertheless the personal license which is for non commercial purposes has a few limitation which i wasn t aware of only after spending a day of work on making your product work apparently you only have a day expiration date on rihno and porcupine files generated via the console the porcupine only allows for popular operating systems such as linux mac and windows but is restricted for less common operating systems such as that of the raspberry pi these limitations and there might be others i m not aware of disallow me to use your software for my project to my sorrow if you could provide much more clarity on this topic upfront it would waste much less time and energy and hopefully remove these restrictions for non commercial usages | 0 |
93,194 | 3,896,718,433 | IssuesEvent | 2016-04-16 00:40:47 | google/paco | https://api.github.com/repos/google/paco | closed | Web ui: Fixed Duration Dates should be flushed when switching to ongoing | Component-Server Component-UI Priority-Medium | As part of cleaning up specs created by the server, when the user switches to Ongoing after having Fixed Duration for a group, we should remove the start and end date. | 1.0 | Web ui: Fixed Duration Dates should be flushed when switching to ongoing - As part of cleaning up specs created by the server, when the user switches to Ongoing after having Fixed Duration for a group, we should remove the start and end date. | priority | web ui fixed duration dates should be flushed when switching to ongoing as part of cleaning up specs created by the server when the user switches to ongoing after having fixed duration for a group we should remove the start and end date | 1 |
140,142 | 31,845,024,173 | IssuesEvent | 2023-09-14 19:13:03 | swyddfa/esbonio | https://api.github.com/repos/swyddfa/esbonio | closed | Extension can hang while checking Python version | bug vscode | ```
[client] Python extension is available
[client] Activating python extension
[client] Running Command: /.../bin/python -c import sys ; print("{0.major}.{0.minor}.{0.micro}".format(sys.version_info))
```
If this command fails (such as when the configured Python does not exist), then the extension will hang as the error is not appropriately handled | 1.0 | Extension can hang while checking Python version - ```
[client] Python extension is available
[client] Activating python extension
[client] Running Command: /.../bin/python -c import sys ; print("{0.major}.{0.minor}.{0.micro}".format(sys.version_info))
```
If this command fails (such as when the configured Python does not exist), then the extension will hang as the error is not appropriately handled | non_priority | extension can hang while checking python version python extension is available activating python extension running command bin python c import sys print major minor micro format sys version info if this command fails such as when the configured python does not exist then the extension will hang as the error is not appropriately handled | 0 |
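The record above only describes the failure in prose: the extension shells out to the configured Python to print its version, and an unhandled error (e.g. a nonexistent interpreter) makes it hang. A minimal Python sketch of how such a version probe could be guarded might look like this; the helper name `probe_python_version` and the timeout value are assumptions for illustration, not esbonio's actual code.

```python
# Sketch of a guarded version probe for the command shown in the log above.
# probe_python_version is a hypothetical helper, not esbonio's real API.
import subprocess
import sys

VERSION_PROBE = 'import sys; print("{0.major}.{0.minor}.{0.micro}".format(sys.version_info))'

def probe_python_version(python_path, timeout=5.0):
    """Return the interpreter's version string, or None when the probe
    cannot complete (missing executable, non-zero exit, or timeout)."""
    try:
        result = subprocess.run(
            [python_path, "-c", VERSION_PROBE],
            capture_output=True, text=True, timeout=timeout,
        )
    except (OSError, subprocess.TimeoutExpired):
        # The configured Python does not exist or the probe hung:
        # fail fast instead of leaving the caller waiting forever.
        return None
    if result.returncode != 0:
        return None
    return result.stdout.strip()
```

Returning `None` (rather than letting the exception escape or blocking) lets the caller fall back or report a clear error instead of hanging.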
330,952 | 28,497,463,874 | IssuesEvent | 2023-04-18 15:05:35 | CSOIreland/PxStat | https://api.github.com/repos/CSOIreland/PxStat | closed | [ENHANCEMENT] Map widget - Null values appearing incorrectly | bug released tested fixed | When a feature has a null value, it appears on the map as a zero. This is misleading, especially when other features on the map contain negative values. | 1.0 | [ENHANCEMENT] Map widget - Null values appearing incorrectly - When a feature has a null value, it appears on the map as a zero. This is misleading, especially when other features on the map contain negative values. | non_priority | map widget null values appearing incorrectly when a feature has a null value is appears on the map as a zero this is misleading especially when other features on the map contains negative values | 0
138,716 | 20,672,775,193 | IssuesEvent | 2022-03-10 05:21:44 | spaceone-dev/spaceone-design-system | https://api.github.com/repos/spaceone-dev/spaceone-design-system | closed | [Skeleton] New 'Opacity' Props! | Design Update Priority: Medium | **Problem**
Can't see skeleton when the background is `gray100`
**Solution**
Add new `opacity` property, and set the value `40` and `70`
We can use `40` in white background, `70` in gray background.
**Figma / Jira Link**
[Go to Figma](https://www.figma.com/file/IS6P8y1Wn2nfBC4jGlSiya/?node-id=1942%3A170662)
[Change UI/UX of inventory stats #2203](https://github.com/spaceone-dev/console/issues/2203)
| 1.0 | [Skeleton] New 'Opacity' Props! - **Problem**
Can't see skeleton when the background is `gray100`
**Solution**
Add new `opacity` property, and set the value `40` and `70`
We can use `40` in white background, `70` in gray background.
**Figma / Jira Link**
[Go to Figma](https://www.figma.com/file/IS6P8y1Wn2nfBC4jGlSiya/?node-id=1942%3A170662)
[Change UI/UX of inventory stats #2203](https://github.com/spaceone-dev/console/issues/2203)
| non_priority | new opacity props problem can t see skeleton when the background is solution add new opacity property and set the value and we can use in white background in gray background figma jira link | 0 |
284,322 | 21,414,017,957 | IssuesEvent | 2022-04-22 09:07:27 | alphagov/govuk-design-system | https://api.github.com/repos/alphagov/govuk-design-system | opened | Expand on not using placeholder text | documentation awaiting triage | ## Related documentation
https://design-system.service.gov.uk/components/text-input/
## Suggestion
>All text inputs must have visible labels; placeholder text is not an acceptable replacement for a label as it vanishes when users start typing
This does not recommend against placeholders in general (for example to provide a hint or example) - possibly it should:
- placeholder text is low contrast
- placeholder text is not supported by all screen readers
https://www.deque.com/blog/accessible-forms-the-problem-with-placeholders/
## Evidence (where applicable)
Request on support
| 1.0 | Expand on not using placeholder text - ## Related documentation
https://design-system.service.gov.uk/components/text-input/
## Suggestion
>All text inputs must have visible labels; placeholder text is not an acceptable replacement for a label as it vanishes when users start typing
This does not recommend against placeholders in general (for example to provide a hint or example) - possibly it should:
- placeholder text is low contrast
- placeholder text is not supported by all screen readers
https://www.deque.com/blog/accessible-forms-the-problem-with-placeholders/
## Evidence (where applicable)
Request on support
| non_priority | expand on not using placeholder text related documentation suggestion all text inputs must have visible labels placeholder text is not an acceptable replacement for a label as it vanishes when users start typing this does not recommend against placeholders in general for example to provide a hint or example possibly it should placeholder text is low contrast placeholder text is not supported by all screen readers evidence where applicable request on support | 0 |
605,930 | 18,752,025,815 | IssuesEvent | 2021-11-05 04:11:12 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Skin editor corner anchors are too sensitive | area:skinning priority:1 | **Describe the bug:**
Seems to only occur with vertically-rotated elements, regardless of which element it is.
**Screenshots or videos showing encountered issue:**
https://drive.google.com/file/d/1QxRbPo483okJxlw1x7aJGt3HmvJdBdNz/view?usp=sharing
| 1.0 | Skin editor corner anchors are too sensitive - **Describe the bug:**
Seems to only occur with vertically-rotated elements, regardless of which element it is.
**Screenshots or videos showing encountered issue:**
https://drive.google.com/file/d/1QxRbPo483okJxlw1x7aJGt3HmvJdBdNz/view?usp=sharing
| priority | skin editor corner anchors are too sensitive describe the bug seems to only occur with vertically rotated elements regardless of which element it is screenshots or videos showing encountered issue | 1 |
74,488 | 9,785,778,303 | IssuesEvent | 2019-06-09 10:50:55 | SoftEtherVPN/SoftEtherVPN | https://api.github.com/repos/SoftEtherVPN/SoftEtherVPN | closed | [Suggestion] Adding a dependency package list | documentation | Will that help newbies (like me) build software easier?
For example, these are the packages I installed during the build process on a Debian 9 OpenVZ VPS.
```
gcc
make
libreadline-dev
libssl-dev
zlib1g-dev
libncurses5-dev
``` | 1.0 | [Suggestion] Adding a dependency package list - Will that help newbies (like me) build software easier?
For example, these are the packages I installed during the build process on a Debian 9 OpenVZ VPS.
```
gcc
make
libreadline-dev
libssl-dev
zlib1g-dev
libncurses5-dev
``` | non_priority | adding a dependency package list will that help newbies like me build software easier for example these are the packages i installed during the build process on a debian openvz vps gcc make libreadline dev libssl dev dev dev | 0 |
115,879 | 14,901,036,163 | IssuesEvent | 2021-01-21 16:01:58 | MetaMask/metamask-extension | https://api.github.com/repos/MetaMask/metamask-extension | opened | [transaction confirmation] Show alert on confirmations for high gas prices | N00-needsDesign | #Description
As a user, I want to be notified if the gas amount is significantly over the known market
# Situation
**GIVEN:** I am confirming a transaction (transaction confirmation, approve, etc)
**WHEN:** when gas is significantly over market
**THEN:** I should get an alert notifying me of high gas prices | 1.0 | [transaction confirmation] Show alert on confirmations for high gas prices - #Description
As a user, I want to be notified if the gas amount is significantly over the known market
# Situation
**GIVEN:** I am confirming a transaction (transaction confirmation, approve, etc)
**WHEN:** when gas is significantly over market
**THEN:** I should get an alert notifying me of high gas prices | non_priority | show alert on confirmations for high gas prices description as a user i want to be notified if gas amount that is significantly over the known market situation given i am confirming a transaction transaction confirmation approve etc when when gas is significantly over market then i should get an alert notifying me of high gas prices | 0 |
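The Given/When/Then in the record above never defines what counts as "significantly over market". As an illustration only, a threshold-based rule could be sketched as follows; the function name and the 1.5x multiplier are hypothetical choices, not MetaMask's actual check.

```python
# Hypothetical sketch of the alert rule described in the record above;
# the 1.5x threshold is an assumption, since the issue leaves
# "significantly over market" undefined.
def is_gas_significantly_over_market(gas_price, market_price, threshold=1.5):
    """Return True when the transaction's gas price exceeds the known
    market price by more than `threshold` times."""
    if market_price <= 0:
        # No reliable market estimate -> never alert on bad data.
        return False
    return gas_price > market_price * threshold
```

A confirmation screen would call this with the current market estimate and show the alert whenever it returns True.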
24,104 | 7,456,945,700 | IssuesEvent | 2018-03-30 00:41:51 | RedHatOfficial/RedHatOfficial.github.io | https://api.github.com/repos/RedHatOfficial/RedHatOfficial.github.io | closed | Content Not Updating | build improvements | Merging is going well, but the site itself is still not updated. Can we add an automated process to the CI flow that will auto-build the app after each checked update? | 1.0 | Content Not Updating - Merging is going well, but the site itself is still not updated. Can we add an automated process to the CI flow that will auto-build the app after each checked update? | non_priority | content not updating merging is going well but the site itself is still not updated can we add an automated process to the ci flow that will auto build the app after each checked update | 0 |