| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19–19) | repo (stringlengths, 7–112) | repo_url (stringlengths, 36–141) | action (stringclasses, 3 values) | title (stringlengths, 1–744) | labels (stringlengths, 4–574) | body (stringlengths, 9–211k) | index (stringclasses, 10 values) | text_combine (stringlengths, 96–211k) | label (stringclasses, 2 values) | text (stringlengths, 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11,685
| 14,542,675,865
|
IssuesEvent
|
2020-12-15 15:59:08
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Enable code coverage in SonarCloud
|
P2 enhancement process
|
**Problem**
SonarCloud action fails on PRs due to not having code coverage uploaded to SonarCloud.
**Solution**
- Enable code coverage for all modules.
- Ensure code coverage is above 80%, as it is for codecov.io. If it's not, something is wrong with the configuration.
**Alternatives**
Figure out a way in Sonar to disable code coverage quality gate. I couldn't find a way.
**Additional Context**
|
1.0
|
Enable code coverage in SonarCloud - **Problem**
SonarCloud action fails on PRs due to not having code coverage uploaded to SonarCloud.
**Solution**
- Enable code coverage for all modules.
- Ensure code coverage is above 80%, as it is for codecov.io. If it's not, something is wrong with the configuration.
**Alternatives**
Figure out a way in Sonar to disable code coverage quality gate. I couldn't find a way.
**Additional Context**
|
process
|
enable code coverage in sonarcloud problem sonarcloud action fails on prs due to not having code coverage uploaded to sonarcloud solution enable code coverage for all modules ensure code coverage is above like it is for codecov io if it s not something s wrong with config alternatives figure out a way in sonar to disable code coverage quality gate i couldn t find a way additional context
| 1
|
13,188
| 15,613,161,697
|
IssuesEvent
|
2021-03-19 16:07:22
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
Refactor long class lengths
|
process
|
Currently Rubocop is configured to ignore class lengths > 240 lines for the following files:
- document.rb
- site.rb
- commands/serve.rb
- configuration.rb
It should come as no surprise that I hate every time I have to wade into one of those files. :)
Definitely need to refactor those into multiple concerns and/or supporting POROs, starting with Site.
|
1.0
|
Refactor long class lengths - Currently Rubocop is configured to ignore class lengths > 240 lines for the following files:
- document.rb
- site.rb
- commands/serve.rb
- configuration.rb
It should come as no surprise that I hate every time I have to wade into one of those files. :)
Definitely need to refactor those into multiple concerns and/or supporting POROs, starting with Site.
|
process
|
refactor long class lengths currently rubocop is configured to ignore class lengths lines for the following files document rb site rb commands serve rb configuration rb it should come as no surprise that i hate every time i have to wade into one of those files definitely need to refactor those into multiple concerns and or supporting poros starting with site
| 1
|
3,920
| 6,842,629,801
|
IssuesEvent
|
2017-11-12 04:36:09
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
ethprice: Performance
|
apps-ethPrice status-inprocess type-enhancement
|
Two places where ethprice can be greatly improved:
1. The file is opened and parsed fully into memory. Every field is the same size, so it should be possible to simply read the entire database into memory. The best place to fix this is in SFArchive::operator>> for CPriceArray, which would fix it everywhere (actually fixing all the array input and output in SFArchive would be good).
2. The parsing library is slow. In a simple test, about 10% of the time was spent reading and writing the data to the hard drive, and about 90% in parsing poloniex's response.
Throwing this idea in because I don't want to create a new issue: I could also separate the latest price from the whole open/close/latest/first/etc. as given by poloniex, and have two different price databases: one for simple queries, one for more data if I need it.
|
1.0
|
ethprice: Performance - Two places where ethprice can be greatly improved:
1. The file is opened and parsed fully into memory. Every field is the same size, so it should be possible to simply read the entire database into memory. The best place to fix this is in SFArchive::operator>> for CPriceArray, which would fix it everywhere (actually fixing all the array input and output in SFArchive would be good).
2. The parsing library is slow. In a simple test, about 10% of the time was spent reading and writing the data to the hard drive, and about 90% in parsing poloniex's response.
Throwing this idea in because I don't want to create a new issue: I could also separate the latest price from the whole open/close/latest/first/etc. as given by poloniex, and have two different price databases: one for simple queries, one for more data if I need it.
|
process
|
ethprice performance two places where ethprice can be greatly improved the file is opened and parsed fully into memory every field is the same size so it should be possible to simply read the entire database into memory the best place to fix this is in sfarchive operator for cpricearray which would fix it everywhere actually fixing all the array input and output in sfarchive would be good the parsing library is slow in a simple test about of time was spent in reading and writing the data to the hard drive and about in parsing poloniex s response throwing this idea in because i don t want to create a new issue i could also separate latest price from the whole open close latest first etc as is given by poloniex and have two different price databases one for simple queries one for more data if i need it
| 1
|
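The ethprice record above proposes bulk-reading a database of same-size records instead of parsing it field by field. A minimal sketch of that idea in Python's `struct` module, assuming a hypothetical 20-byte record layout (uint32 timestamp plus four float32 prices) — the real SFArchive/CPriceArray layout lives in the quickBlocks C++ code and is not shown here:

```python
import struct

# Hypothetical fixed-size record: uint32 timestamp + 4 float32 prices
# (open, high, low, close), little-endian. 20 bytes per record.
RECORD = struct.Struct("<I4f")

def read_price_db(data: bytes) -> list:
    """Parse the entire database in one pass with iter_unpack,
    avoiding per-field reads and repeated buffer copies."""
    return list(RECORD.iter_unpack(data))

# Build a tiny fake two-record database and read it back whole.
db = (RECORD.pack(1700000000, 1.0, 2.0, 0.5, 1.5)
      + RECORD.pack(1700000060, 1.5, 2.5, 1.0, 2.0))
records = read_price_db(db)
```

Because every record has the same size, the whole file can be `read()` once and decoded in a single pass, which is the optimization the issue asks for.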
4,679
| 7,517,580,289
|
IssuesEvent
|
2018-04-12 04:29:26
|
UnbFeelings/unb-feelings-GQA
|
https://api.github.com/repos/UnbFeelings/unb-feelings-GQA
|
closed
|
Plan the cycle 2 audits
|
document process wiki
|
- [x] Allocate dates;
- [x] Allocate resources;
- [x] Allocate objects;
- [x] Create planning document (Cycle Date);
- [x] Audit issue ID;
- [x] Audit date;
- [x] Audit object;
- [x] Audit auditor;
|
1.0
|
Plan the cycle 2 audits - - [x] Allocate dates;
- [x] Allocate resources;
- [x] Allocate objects;
- [x] Create planning document (Cycle Date);
- [x] Audit issue ID;
- [x] Audit date;
- [x] Audit object;
- [x] Audit auditor;
|
process
|
planejar as auditorias do ciclo alocar datas alocar recursos alocar objetos criar documento de planejamento data do ciclo id da issue da auditoria data da auditoria objeto da auditoria auditor da auditoria
| 1
|
9,338
| 12,341,344,949
|
IssuesEvent
|
2020-05-14 21:43:40
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Duplicate Cron Syntax In "Supported cron syntax"
|
Pri1 devops-cicd-process/tech devops/prod doc-bug
|
0 18 * * Mon,Wed,Fri is listed twice in the following example:
Build every Monday, Wednesday, and Friday at 6:00 PM 0 18 * * Mon,Wed,Fri, 0 18 * * 1,3,5, 0 18 * * Mon,Wed,Fri, or 0 18 * * 1-5/2
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2ea2c851-bd1e-cddc-b4d0-e9f4112b8565
* Version Independent ID: 07c23fdd-14b5-985b-1c63-3f26f3a216ad
* Content: [Configure schedules to run pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml#supported-cron-syntax)
* Content Source: [docs/pipelines/process/scheduled-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/scheduled-triggers.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
1.0
|
Duplicate Cron Syntax In "Supported cron syntax" - 0 18 * * Mon,Wed,Fri is listed twice in the following example:
Build every Monday, Wednesday, and Friday at 6:00 PM 0 18 * * Mon,Wed,Fri, 0 18 * * 1,3,5, 0 18 * * Mon,Wed,Fri, or 0 18 * * 1-5/2
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2ea2c851-bd1e-cddc-b4d0-e9f4112b8565
* Version Independent ID: 07c23fdd-14b5-985b-1c63-3f26f3a216ad
* Content: [Configure schedules to run pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml#supported-cron-syntax)
* Content Source: [docs/pipelines/process/scheduled-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/scheduled-triggers.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
process
|
duplicate cron syntax in supported cron syntax mon wed fri is listed twice in the following example build every monday wednesday and friday at pm mon wed fri mon wed fri or document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id cddc version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie
| 1
|
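The duplication in that docs table can be checked mechanically: `Mon,Wed,Fri`, `1,3,5`, and `1-5/2` all expand to the same day-of-week set, so one of the repeated `Mon,Wed,Fri` entries is redundant. A small sketch (the function and day numbering are mine, not from the docs page; standard cron uses 0 = Sunday) that expands a cron day-of-week field:

```python
DAYS = {"sun": 0, "mon": 1, "tue": 2, "wed": 3, "thu": 4, "fri": 5, "sat": 6}

def _day(tok: str) -> int:
    # Accept either a day name or a number.
    return DAYS[tok] if tok in DAYS else int(tok)

def expand_dow(field: str) -> set:
    """Expand a cron day-of-week field (names, numbers, ranges,
    steps, comma lists) into a set of day numbers, 0 = Sunday."""
    out = set()
    for part in field.lower().split(","):
        rng, _, step = part.partition("/")
        step = int(step) if step else 1
        if "-" in rng:
            lo, hi = (_day(t) for t in rng.split("-"))
        else:
            lo = hi = _day(rng)
        out.update(range(lo, hi + 1, step))
    return out

# All three forms from the example describe Monday/Wednesday/Friday:
assert expand_dow("Mon,Wed,Fri") == expand_dow("1,3,5") == expand_dow("1-5/2")
```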
3,371
| 6,499,440,339
|
IssuesEvent
|
2017-08-22 21:31:38
|
SemanticWebBuilder/SWBP
|
https://api.github.com/repos/SemanticWebBuilder/SWBP
|
opened
|
ProcessTaskInbox resource throws an exception when processing JSP editRessource.jsp
|
bug COMPONENT:ProcessEngine
|
2017-08-22 16:27:28,651 ERROR - Error to process JSP.../swbadmin/jsp/editResource.jsp
org.apache.jasper.JasperException: An exception occurred processing JSP page /swbadmin/jsp/editResource.jsp at line 30
27:
28: SWBHttpServletResponseWrapper resp=new SWBHttpServletResponseWrapper(response);
29:
30: sem.render(request, resp, req);
31:
32: out.print(resp.toString());
33:
Stacktrace:
at org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:588)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:481)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:385)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:329)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:710)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:580)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:516)
at org.semanticwb.portal.resources.JSPResource.doView(JSPResource.java:91)
at org.semanticwb.portal.api.GenericResource.processRequest(GenericResource.java:141)
at org.semanticwb.portal.api.GenericResource.render(GenericResource.java:124)
at org.semanticwb.portal.api.SWBResourceWindowRender.render(SWBResourceWindowRender.java:80)
at org.semanticwb.portal.api.SWBResourceTraceMgr.renderTraced(SWBResourceTraceMgr.java:167)
at org.semanticwb.portal.TemplateImp.build(TemplateImp.java:1098)
at org.semanticwb.portal.TemplateImp.build(TemplateImp.java:934)
at org.semanticwb.servlet.internal.Distributor._doProcess(Distributor.java:453)
at org.semanticwb.servlet.internal.Distributor.doProcess(Distributor.java:101)
at org.semanticwb.servlet.SWBVirtualHostFilter.processInternalServlet(SWBVirtualHostFilter.java:392)
at org.semanticwb.servlet.SWBVirtualHostFilter.doFilter(SWBVirtualHostFilter.java:319)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:475)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:498)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:796)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1368)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.semanticwb.portal.SWBFormMgr.init(SWBFormMgr.java:179)
at org.semanticwb.portal.SWBFormMgr.<init>(SWBFormMgr.java:143)
at org.semanticwb.process.resources.taskinbox.UserTaskInboxResource.doAdmin(UserTaskInboxResource.java:550)
at org.semanticwb.portal.api.GenericResource.processRequest(GenericResource.java:150)
at org.semanticwb.process.resources.taskinbox.UserTaskInboxResource.processRequest(UserTaskInboxResource.java:254)
at org.semanticwb.portal.api.GenericResource.render(GenericResource.java:124)
at org.apache.jsp.swbadmin.jsp.editResource_jsp._jspService(editResource_jsp.java:152)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:443)
... 38 more
|
1.0
|
ProcessTaskInbox resource throws an exception when processing JSP editRessource.jsp - 2017-08-22 16:27:28,651 ERROR - Error to process JSP.../swbadmin/jsp/editResource.jsp
org.apache.jasper.JasperException: An exception occurred processing JSP page /swbadmin/jsp/editResource.jsp at line 30
27:
28: SWBHttpServletResponseWrapper resp=new SWBHttpServletResponseWrapper(response);
29:
30: sem.render(request, resp, req);
31:
32: out.print(resp.toString());
33:
Stacktrace:
at org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:588)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:481)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:385)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:329)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:710)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:580)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:516)
at org.semanticwb.portal.resources.JSPResource.doView(JSPResource.java:91)
at org.semanticwb.portal.api.GenericResource.processRequest(GenericResource.java:141)
at org.semanticwb.portal.api.GenericResource.render(GenericResource.java:124)
at org.semanticwb.portal.api.SWBResourceWindowRender.render(SWBResourceWindowRender.java:80)
at org.semanticwb.portal.api.SWBResourceTraceMgr.renderTraced(SWBResourceTraceMgr.java:167)
at org.semanticwb.portal.TemplateImp.build(TemplateImp.java:1098)
at org.semanticwb.portal.TemplateImp.build(TemplateImp.java:934)
at org.semanticwb.servlet.internal.Distributor._doProcess(Distributor.java:453)
at org.semanticwb.servlet.internal.Distributor.doProcess(Distributor.java:101)
at org.semanticwb.servlet.SWBVirtualHostFilter.processInternalServlet(SWBVirtualHostFilter.java:392)
at org.semanticwb.servlet.SWBVirtualHostFilter.doFilter(SWBVirtualHostFilter.java:319)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:475)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:498)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:796)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1368)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.semanticwb.portal.SWBFormMgr.init(SWBFormMgr.java:179)
at org.semanticwb.portal.SWBFormMgr.<init>(SWBFormMgr.java:143)
at org.semanticwb.process.resources.taskinbox.UserTaskInboxResource.doAdmin(UserTaskInboxResource.java:550)
at org.semanticwb.portal.api.GenericResource.processRequest(GenericResource.java:150)
at org.semanticwb.process.resources.taskinbox.UserTaskInboxResource.processRequest(UserTaskInboxResource.java:254)
at org.semanticwb.portal.api.GenericResource.render(GenericResource.java:124)
at org.apache.jsp.swbadmin.jsp.editResource_jsp._jspService(editResource_jsp.java:152)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:443)
... 38 more
|
process
|
recurso processtaskinbox genera exepción al procesar jsp editressource jsp error error to process jsp swbadmin jsp editresource jsp org apache jasper jasperexception an exception occurred processing jsp page swbadmin jsp editresource jsp at line swbhttpservletresponsewrapper resp new swbhttpservletresponsewrapper response sem render request resp req out print resp tostring stacktrace at org apache jasper servlet jspservletwrapper handlejspexception jspservletwrapper java at org apache jasper servlet jspservletwrapper service jspservletwrapper java at org apache jasper servlet jspservlet servicejspfile jspservlet java at org apache jasper servlet jspservlet service jspservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core applicationdispatcher invoke applicationdispatcher java at org apache catalina core applicationdispatcher doinclude applicationdispatcher java at org apache catalina core applicationdispatcher include applicationdispatcher java at org semanticwb portal resources jspresource doview jspresource java at org semanticwb portal api genericresource processrequest genericresource java at org semanticwb portal api genericresource render genericresource java at org semanticwb portal api swbresourcewindowrender render swbresourcewindowrender java at org semanticwb portal api swbresourcetracemgr rendertraced swbresourcetracemgr java at org semanticwb portal templateimp build templateimp java at org semanticwb portal templateimp build templateimp java at org semanticwb servlet internal distributor doprocess distributor java at org semanticwb servlet internal distributor doprocess distributor java at org semanticwb servlet swbvirtualhostfilter processinternalservlet swbvirtualhostfilter java at org semanticwb servlet swbvirtualhostfilter 
dofilter swbvirtualhostfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java caused by java lang nullpointerexception at org semanticwb portal swbformmgr init swbformmgr java at org semanticwb portal swbformmgr swbformmgr java at org semanticwb process resources taskinbox usertaskinboxresource doadmin usertaskinboxresource java at org semanticwb portal api genericresource processrequest genericresource java at org semanticwb process resources taskinbox usertaskinboxresource processrequest usertaskinboxresource java at org semanticwb portal api genericresource render 
genericresource java at org apache jsp swbadmin jsp editresource jsp jspservice editresource jsp java at org apache jasper runtime httpjspbase service httpjspbase java at javax servlet http httpservlet service httpservlet java at org apache jasper servlet jspservletwrapper service jspservletwrapper java more
| 1
|
5,985
| 8,805,374,090
|
IssuesEvent
|
2018-12-26 19:13:49
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Allow XSDs to use grammar cache when Xerces override is available.
|
feature preprocess stale
|
The following pull request turned off XSD from the grammar cache. https://github.com/dita-ot/dita-ot/pull/1401
From the changes made as part of that pull request, I'm hoping we can add some checks to see whether Xerces is returning the proper info when encountering a system ID before bypassing the grammar cache. The part that's unclear is what we would need to look for to determine whether or not the grammar cache is working as expected.
I'm working on getting the Xerces patch included in DITA OT.
Eric
|
1.0
|
Allow XSDs to use grammar cache when Xerces override is available. - The following pull request turned off XSD from the grammar cache. https://github.com/dita-ot/dita-ot/pull/1401
From the changes made as part of that pull request, I'm hoping we can add some checks to see whether Xerces is returning the proper info when encountering a system ID before bypassing the grammar cache. The part that's unclear is what we would need to look for to determine whether or not the grammar cache is working as expected.
I'm working on getting the Xerces patch included in DITA OT.
Eric
|
process
|
allow xsds to use grammar cache when xerces override is available the following pull request turned off xsd from the grammar cache from the changes made as part of that above pull request i m hoping that we can add some checks to see whether xerces is returning the proper info when encountering a system id perform bypassing the grammar cache the part that s unsure is what we would need to look for to determine whether or not the grammar cache is working as expected i m working on getting the xerces patch included in dita ot eric
| 1
|
1,468
| 4,048,675,171
|
IssuesEvent
|
2016-05-23 11:18:45
|
haskell-distributed/distributed-process-simplelocalnet
|
https://api.github.com/repos/haskell-distributed/distributed-process-simplelocalnet
|
closed
|
multicast does not work on Windows
|
bug distributed-process-simplelocalnet
|
_From @edsko on October 23, 2012 8:30_
"bind: failed (Cannot assign requested address (WSAEADDRNOTAVAIL))
_Copied from original issue: haskell-distributed/distributed-process#54_
|
1.0
|
multicast does not work on Windows - _From @edsko on October 23, 2012 8:30_
"bind: failed (Cannot assign requested address (WSAEADDRNOTAVAIL))
_Copied from original issue: haskell-distributed/distributed-process#54_
|
process
|
multicast does not work on windows from edsko on october bind failed cannot assign requested address wsaeaddrnotavail copied from original issue haskell distributed distributed process
| 1
|
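The `WSAEADDRNOTAVAIL` failure above reflects a well-known portability difference: Windows refuses to `bind()` a UDP socket to a multicast group address, while Linux allows it. A common workaround — a sketch of the general pattern, not the simplelocalnet fix itself — is to bind to the wildcard address on Windows and subscribe to the group with `IP_ADD_MEMBERSHIP`:

```python
import socket
import struct
import sys

def multicast_bind_address(group: str, platform: str) -> str:
    """Windows cannot bind to a multicast group address (WSAEADDRNOTAVAIL),
    so bind to the wildcard address there; elsewhere bind to the group."""
    return "" if platform == "win32" else group

def make_multicast_socket(group: str, port: int) -> socket.socket:
    """Create a UDP socket subscribed to the multicast `group`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((multicast_bind_address(group, sys.platform), port))
    # Join the group on the default interface (0.0.0.0).
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

Binding to the wildcard means the socket may also receive unicast datagrams on that port, so receivers typically filter by source or group on Windows.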
16,072
| 20,242,979,962
|
IssuesEvent
|
2022-02-14 11:01:58
|
Project60/org.project60.sepa
|
https://api.github.com/repos/Project60/org.project60.sepa
|
closed
|
payment processor does not work in drupal webform
|
payment processor
|
With CiviCRM 5.29 and CiviSepa 1.4, I used the extension for processing contributions. Using the extension in CiviCRM contribution pages, everything went well.
For some reasons it might be good to use Drupal webform for online contributions. With that, I get an error:
`Notice: Undefined property: CRM_Core_Payment_SDD::$_paymentForm in CRM_Core_Payment->getForm() (line 555 von [...]/sites/all/modules/civicrm/CRM/Core/Payment.php).`
Looking at CiviCRM, the contribution is recorded, but without a SEPA mandate being created. It seems to "fall back" to a different payment processor with the value of 1, so there may be a problem selecting the correct payment processor when the record is handed from webform to CiviCRM.
Besides, with the `SEPA DD` processor a transaction ID in the correct format was created (but still with the wrong payment processor); with the `SEPA DD (new)` processor this information is missing too.
Anyone else who has experienced this?
|
1.0
|
payment processor does not work in drupal webform - With CiviCRM 5.29 and CiviSepa 1.4, I used the extension for processing contributions. Using the extension in CiviCRM contribution pages, everything went well.
For some reasons it might be good to use Drupal webform for online contributions. With that, I get an error:
`Notice: Undefined property: CRM_Core_Payment_SDD::$_paymentForm in CRM_Core_Payment->getForm() (line 555 von [...]/sites/all/modules/civicrm/CRM/Core/Payment.php).`
Looking at CiviCRM, the contribution is recorded, but without a SEPA mandate being created. It seems to "fall back" to a different payment processor with the value of 1, so there may be a problem selecting the correct payment processor when the record is handed from webform to CiviCRM.
Besides, with the `SEPA DD` processor a transaction ID in the correct format was created (but still with the wrong payment processor); with the `SEPA DD (new)` processor this information is missing too.
Anyone else who has experienced this?
|
process
|
payment processor does not work in drupal webform with civicrm and civisepa used the extension for processing contributions using the extension in civicrm contribution pages everything went well for some reasons it might be good to use drupal webform for online contributions whith that i get an error notice undefined property crm core payment sdd paymentform in crm core payment getform line von sites all modules civicrm crm core payment php looking to civicrm the contribution is recorded as contribution but without creating a sepa mandate it seems to fall back to a different payment processor with the value of so it might have problems to select the correct payment processor while giving the record from webform to civicrm besides with the sepa dd processor a transaction id in correct format was created but still with the wrong payment processor with the sepa dd new this information is missing too anyone who has experienced that too
| 1
|
4,069
| 7,001,608,096
|
IssuesEvent
|
2017-12-18 10:52:05
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
DB role refresh bug in gui
|
priority_normal process_wontfix
|
After creating the first partition with all roles, I tried to create a backend. An error message appeared saying that a DB role should be available on a partition.
After refreshing the page, the DB role was recognized, the error message no longer appeared, and the backend could be created.
|
1.0
|
DB role refresh bug in gui - After creating the first partition with all roles, I tried to create a backend. An error message appeared saying that a DB role should be available on a partition.
After refreshing the page, the DB role was recognized, the error message no longer appeared, and the backend could be created.
|
process
|
db role refresh bug in gui after creation of first partition with all roles i tried to create a backend an error message was visible that a db role should be available on a partition after refreshing the page the db role was recognized the error message did not appear and the backend could be created
| 1
|
16,988
| 22,351,253,584
|
IssuesEvent
|
2022-06-15 12:16:42
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Subscribe to events of created element scopes
|
team/process-automation
|
Subscribe to the events of the created elements scopes:
- subscribe each flow scope that is created as part of #9390 to the messages and timers using the existing logic (i.e. `CatchEventBehavior.subscribeToEvents`).
- flush the side effect queue, to actually subscribe (necessary for both messages as well timer events).
Watch out to not create event subscriptions multiple times when we create a process instance with multiple tokens.
[See spike](https://github.com/camunda/zeebe/commit/b2cb74629694dfc059bb1cdb91ae3a263d0b0050#diff-993ccf60e85a2003a595381edf26e3cad5c69aa2f3d9d89e658911b0851081f5R105-R113) as example.
Blocked by #9390
|
1.0
|
Subscribe to events of created element scopes - Subscribe to the events of the created elements scopes:
- subscribe each flow scope that is created as part of #9390 to the messages and timers using the existing logic (i.e. `CatchEventBehavior.subscribeToEvents`).
- flush the side effect queue, to actually subscribe (necessary for both messages as well timer events).
Watch out to not create event subscriptions multiple times when we create a process instance with multiple tokens.
[See spike](https://github.com/camunda/zeebe/commit/b2cb74629694dfc059bb1cdb91ae3a263d0b0050#diff-993ccf60e85a2003a595381edf26e3cad5c69aa2f3d9d89e658911b0851081f5R105-R113) as example.
Blocked by #9390
|
process
|
subscribe to events of created element scopes subscribe to the events of the created elements scopes subscribe each flow scope that is created as part of to the messages and timers using the existing logic i e catcheventbehavior subscribetoevents flush the side effect queue to actually subscribe necessary for both messages as well timer events watch out to not create event subscriptions multiple times when we create a process instance with multiple tokens as example blocked by
| 1
|
4,555
| 7,388,196,816
|
IssuesEvent
|
2018-03-16 01:07:52
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Add ProcessStartInfo.ArgumentList
|
api-approved area-System.Diagnostics.Process
|
<sup>Note: Updated into an API proposal, with feedback incorporated.</sup>
# Rationale
On Unix platforms, external executables (binaries) receive their arguments as an _array_ of string _literals_, which makes for robust, predictable passing of arguments (see [Linux system function `execvp()`](https://linux.die.net/man/3/execvp), for instance).
Regrettably, in Windows there is no equivalent mechanism: a pseudo shell "command line" must be passed as a _single string_ encoding _all_ arguments, and it is up to the _target executable itself_ to parse that line into individual arguments.
The `ProcessStartInfo` class currently only supports the _Windows_ approach directly, by exposing a `string Arguments` property that expects the whole command line.
**On Unix platforms, this means that even if you start out with an array of arguments, you must currently artificially assemble its elements into a single pseudo shell command line, only to have CoreFX split that back into an array of individual arguments behind the scenes** so as to be able to invoke the platform-native process-creation function, which takes an _array_ of arguments.
Not only is this an unnecessary burden on the user and inefficient, it is error-prone. It is easy to accidentally assemble a command-line string that does _not_ round-trip as expected.
As a real-world use case, consider the [ongoing quoting woes PowerShell experiences](https://github.com/PowerShell/PowerShell/issues/1995).
At least on Unix platforms, PowerShell should be able to simply pass the arguments that are the result of _its_ parsing _as-is_ to external utilities.
Having the ability to pass an _array_ of arguments would be of benefit on Windows too, as it is fair to assume that the more typical use case is to build the desired arguments as a list / array rather than to piece together a single-string command line with intricate quoting.
(The latter should only be needed if you're passing a preexisting string through or if you're invoking an executable that has custom argument-parsing rules.)
# Proposed API
**Add a `IReadOnlyList<String> ArgumentList` property (conceptually, an array of argument string literals) to the `ProcessStartInfo` class**, to complement the existing `string Arguments` (pseudo shell command line) property, and let each update the other lazily, on demand, when accessed:
* If `.Arguments` was (last) assigned to, do the following when `.ArgumentList` is accessed:
Call [`ParseArgumentsIntoList()`](https://github.com/dotnet/corefx/blob/edecb1207cbe6194cb9a9f1bc862ef8d1ebb3ef4/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs#L481), which splits the string into individual arguments based on [the rules for MS C++ programs](https://docs.microsoft.com/en-us/cpp/cpp/parsing-cpp-command-line-arguments) and return the resulting list.
* If `.ArgumentList` was (last) assigned to, do the following when `.Arguments` is accessed:
Synthesize the pseudo shell command line from the individual arguments using the above rules _in reverse_ (enclose in `"..."` if a given argument has embedded whitespace, ..., as already implemented for internal use [in `System.PasteArguments.Paste()`](https://github.com/dotnet/corefx/blob/5d3e42f831ec3d4c5964040e235824f779d5db53/src/Common/src/System/PasteArguments.cs#L16)) and return the resulting `string`.
* As @TSlivede proposes, it's worth extending the conversion algorithm to also double-quote arguments that contain `'` (single quotes) lest they be interpreted as having _syntactic_ function, which to some programs they do (e.g., Ruby, Cygwin).
That way, both `.Arguments` and `.ArgumentList` can be assigned to, and the respective other property contains the equivalent in terms of the official (MS C++) parsing rules.
A `ProcessStartInfo` instance constructed this way can therefore be used on all supported platforms:
* On Windows, pass the `.Arguments` property value to the `CreateProcess()` / `ShellExecuteEx()` Windows API functions, as before.
* On Unix platforms, pass the `.ArgumentList` property value via `.ToArray()` to [`ForkAndExecProcess()`](https://github.com/dotnet/corefx/blob/edecb1207cbe6194cb9a9f1bc862ef8d1ebb3ef4/src/Common/src/Interop/Unix/System.Native/Interop.ForkAndExecProcess.cs#L15).
Additionally, to complement the suggested behind-the-scenes conversion between the array form and the single-string form, **public utility methods should be implemented** that perform these conversions explicitly, on demand.
@AtsushiKan proposes the following signatures:
```csharp
public static IReadOnlyList<String> SplitArguments(String commandLine) { ... }
public static String CombineArguments(IReadOnlyList<String> argumentList) { ... }
```
# Open Questions
* How exactly should the existing, currently (effectively) private utility methods referenced above be surfaced publicly (namespace, class, signature)? Currently, they're in different classes, and one of them is `internal` (`System.PasteArguments`); @AtsushiKan suggests making them public methods of the `System.Diagnostics.Process` class.
# Usage
```csharp
// The array of arguments to pass.
string[] args = { "hello", "sweet world" };
// ---- New ProcessStartInfo.ArgumentList property.
// Assign it to the new .ArgumentList property of a ProcessStartInfo instance.
var psi = new ProcessStartInfo();
psi.ArgumentList = args;
// Accessing .Arguments now returns the pseudo shell command line that is the
// equivalent of the array of arguments:
// @"hello ""sweet world"""
string pseudoShellCommandLine = psi.Arguments;
// ---- New utility methods for conversion on demand.
// EXACT NAMES AND SIGNATURES TBD.
// Arguments array (list) -> pseudo shell command line
string[] args = { "hello", "sweet world" };
string pseudoShellCommandLine = System.Diagnostics.Process.CombineArguments(args);
// Pseudo shell command line -> arguments array (list)
IReadOnlyList<string> argList = System.Diagnostics.Process.SplitArguments(@"hello ""sweet world""");
```
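The list-to-string direction of the round trip described above can be sketched in a few lines. This is an illustrative Python rendering of the MS C++ quoting rules (quote arguments containing whitespace or quotes; backslashes are only special when they precede a quote); the function name `combine_arguments` is chosen for this sketch and is not the proposed .NET API surface:

```python
def combine_arguments(args):
    """Join arguments into a single Windows-style command line,
    following the MS C++ runtime quoting rules."""
    parts = []
    for arg in args:
        if arg and not any(c in arg for c in ' \t"'):
            parts.append(arg)  # no whitespace or quotes: pass through as-is
            continue
        quoted = '"'
        backslashes = 0
        for c in arg:
            if c == '\\':
                backslashes += 1          # defer: meaning depends on what follows
            elif c == '"':
                # backslashes before a quote must be doubled,
                # then the quote itself is escaped
                quoted += '\\' * (backslashes * 2) + '\\"'
                backslashes = 0
            else:
                quoted += '\\' * backslashes + c
                backslashes = 0
        # backslashes before the closing quote must also be doubled
        quoted += '\\' * (backslashes * 2) + '"'
        parts.append(quoted)
    return ' '.join(parts)
```

For example, `combine_arguments(["hello", "sweet world"])` yields `hello "sweet world"`, matching the pseudo shell command line shown in the usage section above.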
|
1.0
|
Add ProcessStartInfo.ArgumentList - <sup>Note: Updated into an API proposal, with feedback incorporated.</sup>
# Rationale
On Unix platforms, external executables (binaries) receive their arguments as an _array_ of string _literals_, which makes for robust, predictable passing of arguments (see [Linux system function `execvp()`](https://linux.die.net/man/3/execvp), for instance).
Regrettably, in Windows there is no equivalent mechanism: a pseudo shell "command line" must be passed as a _single string_ encoding _all_ arguments, and it is up to the _target executable itself_ to parse that line into individual arguments.
The `ProcessStartInfo` class currently only supports the _Windows_ approach directly, by exposing a `string Arguments` property that expects the whole command line.
**On Unix platforms, this means that even if you start out with an array of arguments, you must currently artificially assemble its elements into a single pseudo shell command line, only to have CoreFX split that back into an array of individual arguments behind the scenes** so as to be able to invoke the platform-native process-creation function, which takes an _array_ of arguments.
Not only is this an unnecessary burden on the user and inefficient, it is error-prone. It is easy to accidentally assemble a command-line string that does _not_ round-trip as expected.
As a real-world use case, consider the [ongoing quoting woes PowerShell experiences](https://github.com/PowerShell/PowerShell/issues/1995).
At least on Unix platforms, PowerShell should be able to simply pass the arguments that are the result of _its_ parsing _as-is_ to external utilities.
Having the ability to pass an _array_ of arguments would be of benefit on Windows too, as it is fair to assume that the more typical use case is to build the desired arguments as a list / array rather than to piece together a single-string command line with intricate quoting.
(The latter should only be needed if you're passing a preexisting string through or if you're invoking an executable that has custom argument-parsing rules.)
# Proposed API
**Add a `IReadOnlyList<String> ArgumentList` property (conceptually, an array of argument string literals) to the `ProcessStartInfo` class**, to complement the existing `string Arguments` (pseudo shell command line) property, and let each update the other lazily, on demand, when accessed:
* If `.Arguments` was (last) assigned to, do the following when `.ArgumentList` is accessed:
Call [`ParseArgumentsIntoList()`](https://github.com/dotnet/corefx/blob/edecb1207cbe6194cb9a9f1bc862ef8d1ebb3ef4/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs#L481), which splits the string into individual arguments based on [the rules for MS C++ programs](https://docs.microsoft.com/en-us/cpp/cpp/parsing-cpp-command-line-arguments) and return the resulting list.
* If `.ArgumentList` was (last) assigned to, do the following when `.Arguments` is accessed:
Synthesize the pseudo shell command line from the individual arguments using the above rules _in reverse_ (enclose in `"..."` if a given argument has embedded whitespace, ..., as already implemented for internal use [in `System.PasteArguments.Paste()`](https://github.com/dotnet/corefx/blob/5d3e42f831ec3d4c5964040e235824f779d5db53/src/Common/src/System/PasteArguments.cs#L16)) and return the resulting `string`.
* As @TSlivede proposes, it's worth extending the conversion algorithm to also double-quote arguments that contain `'` (single quotes) lest they be interpreted as having _syntactic_ function, which to some programs they do (e.g., Ruby, Cygwin).
That way, both `.Arguments` and `.ArgumentList` can be assigned to, and the respective other property contains the equivalent in terms of the official (MS C++) parsing rules.
A `ProcessStartInfo` instance constructed this way can therefore be used on all supported platforms:
* On Windows, pass the `.Arguments` property value to the `CreateProcess()` / `ShellExecuteEx()` Windows API functions, as before.
* On Unix platforms, pass the `.ArgumentList` property value via `.ToArray()` to [`ForkAndExecProcess()`](https://github.com/dotnet/corefx/blob/edecb1207cbe6194cb9a9f1bc862ef8d1ebb3ef4/src/Common/src/Interop/Unix/System.Native/Interop.ForkAndExecProcess.cs#L15).
Additionally, to complement the suggested behind-the-scenes conversion between the array form and the single-string form, **public utility methods should be implemented** that perform these conversions explicitly, on demand.
@AtsushiKan proposes the following signatures:
```csharp
public static IReadOnlyList<String> SplitArguments(String commandLine) { ... }
public static String CombineArguments(IReadOnlyList<String> argumentList) { ... }
```
# Open Questions
* How exactly should the existing, currently (effectively) private utility methods referenced above be surfaced publicly (namespace, class, signature)? Currently, they're in different classes, and one of them is `internal` (`System.PasteArguments`); @AtsushiKan suggests making them public methods of the `System.Diagnostics.Process` class.
# Usage
```csharp
// The array of arguments to pass.
string[] args = { "hello", "sweet world" };
// ---- New ProcessStartInfo.ArgumentList property.
// Assign it to the new .ArgumentList property of a ProcessStartInfo instance.
var psi = new ProcessStartInfo();
psi.ArgumentList = args;
// Accessing .Arguments now returns the pseudo shell command line that is the
// equivalent of the array of arguments:
// @"hello ""sweet world"""
string pseudoShellCommandLine = psi.Arguments;
// ---- New utility methods for conversion on demand.
// EXACT NAMES AND SIGNATURES TBD.
// Arguments array (list) -> pseudo shell command line
string[] args = { "hello", "sweet world" };
string pseudoShellCommandLine = System.Diagnostics.Process.CombineArguments(args);
// Pseudo shell command line -> arguments array (list)
IReadOnlyList<string> argList = System.Diagnostics.Process.SplitArguments(@"hello ""sweet world""");
```
|
process
|
add processstartinfo argumentlist note updated into an api proposal with feedback incorporated rationale on unix platforms external executables binaries receive their arguments as an array of string literals which makes for robust predictable passing of arguments see for instance regrettably in windows there is no equivalent mechanism a pseudo shell command line must be passed as a single string encoding all arguments and it is up to the target executable itself to parse that line into individual arguments the processstartinfo class currently only supports the windows approach directly by exposing a string arguments property that expects the whole command line on unix platforms this means that even if you start out with an array of arguments you must currently artificially assemble its elements into a single pseudo shell command line only to have corefx split that back into an array of individual arguments behind the scenes so as to be able to invoke the platform native process creation function which takes an array of arguments not only is this an unnecessary burden on the user and inefficient it is error prone it is easy to accidentally assemble a command line string that does not round trip as expected as a real world use case consider the at least on unix platforms powershell should be able to simply pass the arguments that are the result of its parsing as is to external utilities having the ability to pass an array of arguments would be of benefit on windows too as it is fair to assume that the more typical use case is to build the desired arguments as a list array rather than to piece together a single string command line with intricate quoting the latter should only be needed if you re passing an preexisting string through or if you re invoking an executable that has custom argument parsing rules proposed api add a ireadonlylist argumentlist property conceptually an array of argument string literals to the processstartinfo class to complement the existing 
string arguments pseudo shell command line property and let each update the other lazily on demand when accessed if arguments was last assigned to do the following when argumentlist is accessed call which splits the string into individual arguments based on and return the resulting list if argumentlist was last assigned to do the following when arguments is accessed synthesize the pseudo shell command line from the individual arguments and assign the result to using the above rules in reverse enclose in if a given argument has embedded whitespace as already implemented for internal use and return the resulting string as tslivede proposes it s worth extending the conversion algorithm to also double quote arguments that contain single quotes lest they be interpreted as having syntactic function which to some programs they do e g ruby cygwin that way both arguments and argumentlist can be assigned to and the respective other property contains the equivalent in terms of the official ms c parsing rules a processstartinfo instance constructed this way can therefore be used on all supported platforms on windows pass the arguments property value to the createprocess shellexecuteex windows api functions as before on unix platforms pass the argumentlist property value via toarray to additionally to complement the suggested behind the scenes conversion between the array form and the single string form public utility methods should be implemented that perform these conversions explicitly on demand atsushikan proposes the following signatures csharp public static ireadonlylist splitarguments string commandline public static string combinearguments ireadonlylist argumentlist open questions how exactly should the existing currently effectively private utility methods referenced above be surfaced publicly namespace class signature currently they re in different classes and one of them is internal system pastearguments atsushikan suggests making them public methods of the system 
diagnostics process class usage csharp the array of arguments to pass string args hello sweet world new processstartinfo argumentlist property assign it to the new argumentlist property of a processstartinfo instance var psi new processstartinfo psi argumentlist args accessing arguments now returns the pseudo shell command line that is the equivalent of the array of arguments hello sweet world string pseudoshellcommandline psi arguments new utility methods for conversion on demand exact names and signatures tbd arguments array list pseudo shell command line string args hello sweet world string pseudoshellcommandline system diagnostics process combinearguments args pseudo shell command line arguments array list ireadonlylist arglist system diagnostics process splitarguments hello sweet world
| 1
|
62,589
| 17,085,055,139
|
IssuesEvent
|
2021-07-08 10:41:35
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
R2DBC execution should wrap R2dbcException in DataAccessException
|
C: Functionality E: All Editions P: High T: Defect T: Incompatible change
|
All jOOQ query executions produce a `org.jooq.exception.DataAccessException` in case of failures, but this isn't true for R2DBC execution, which leaks R2DBC exceptions, such as `io.r2dbc.spi.R2dbcBadGrammarException`.
This was an oversight in the jOOQ 3.15.0 release, and will be changed incompatibly. While we currently use R2DBC to execute things reactively via `ResultQuery.subscribe()` etc., there's no such guarantee for this to be the preferred and/or only option in the future. From a forwards compatibility perspective, we should have unified behaviour with respect to exceptions for JDBC and reactive execution: Everything wrapped in `DataAccessException`
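The unification argued for here — every driver-level failure surfacing to callers as one wrapper type — follows a common pattern that can be sketched generically. The class and function names below are illustrative stand-ins, not jOOQ's or R2DBC's actual types:

```python
class DataAccessException(Exception):
    """Single exception type that callers are expected to handle."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause


class R2dbcBadGrammarException(Exception):
    """Stand-in for a driver-level exception that would otherwise leak."""


def wrap_driver_errors(fn):
    """Decorator: re-raise any driver exception as DataAccessException,
    keeping the original as the cause for diagnostics."""
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except R2dbcBadGrammarException as exc:
            raise DataAccessException(f"SQL exception: {exc}", cause=exc) from exc
    return wrapper


@wrap_driver_errors
def run_query(sql):
    # Simulate the driver rejecting malformed SQL.
    raise R2dbcBadGrammarException(f"syntax error in {sql!r}")
```

Callers then catch the single wrapper type regardless of whether execution was blocking or reactive, which is the forwards-compatibility point the issue makes.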
|
1.0
|
R2DBC execution should wrap R2dbcException in DataAccessException - All jOOQ query executions produce a `org.jooq.exception.DataAccessException` in case of failures, but this isn't true for R2DBC execution, which leaks R2DBC exceptions, such as `io.r2dbc.spi.R2dbcBadGrammarException`.
This was an oversight in the jOOQ 3.15.0 release, and will be changed incompatibly. While we currently use R2DBC to execute things reactively via `ResultQuery.subscribe()` etc., there's no such guarantee for this to be the preferred and/or only option in the future. From a forwards compatibility perspective, we should have unified behaviour with respect to exceptions for JDBC and reactive execution: Everything wrapped in `DataAccessException`
|
non_process
|
execution should wrap in dataaccessexception all jooq query executions produce a org jooq exception dataaccessexception in case of failures but this isn t true for execution which leaks exceptions such as io spi this was an oversight in the jooq release and will be changed incompatibly while we currently use to execute things reactively via resultquery subscribe etc there s no such guarantee for this to be the preferred and or only option in the future from a forwards compatibility perspective we should have unified behaviour with respect to exceptions for jdbc and reactive execution everything wrapped in dataaccessexception
| 0
|
32,726
| 6,105,088,899
|
IssuesEvent
|
2017-06-20 22:37:14
|
telegramdesktop/tdesktop
|
https://api.github.com/repos/telegramdesktop/tdesktop
|
reopened
|
https instead of git for Qt repo link
|
documentation easy pick
|
`init-repository` fails if `git` is used for origin remote.
<details><summary>What happens if I try to init repo:</summary>
<pre>
=> perl ./init-repository --module-subset=qtbase,qtimageformats
+ git submodule init qtbase qtimageformats
Submodule 'qtbase' (git://code.qt.io/qt/qtbase.git) registered for path 'qtbase'
Submodule 'qtimageformats' (git://code.qt.io/qt/qtimageformats.git) registered for path 'qtimageformats'
+ git config commit.template /Users/stek/qt5/.commit-template
+ git clone --no-checkout git://code.qt.io/qt/qt5/../qtbase.git qtbase
Cloning into 'qtbase'...
fatal: remote error: no such repository: /qt/qt5/../qtbase.git
git clone --no-checkout git://code.qt.io/qt/qt5/../qtbase.git qtbase exited with status 32768 at ./init-repository line 198.
Qt::InitRepository::exe(Qt::InitRepository=HASH(0x7f97af02bf40), "git", "clone", "--no-checkout", "git://code.qt.io/qt/qt5/../qtbase.git", "qtbase") called at ./init-repository line 534
Qt::InitRepository::git_clone_one_submodule(Qt::InitRepository=HASH(0x7f97af02bf40), "qtbase", "qt5/../qtbase.git", 0) called at ./init-repository line 408
Qt::InitRepository::git_clone_all_submodules(Qt::InitRepository=HASH(0x7f97af02bf40), "qt5", 0, "qtbase", "qtimageformats") called at ./init-repository line 644
Qt::InitRepository::run(Qt::InitRepository=HASH(0x7f97af02bf40)) called at ./init-repository line 655
</pre>
</details>
However, if I clone repo via `https://code.qt.io/qt/qt5.git` / change `origin` remote to it, everything works perfectly fine.
I'm not so confident, but I think `..` isn't supported for `git://`
|
1.0
|
https instead of git for Qt repo link - `init-repository` fails if `git` is used for origin remote.
<details><summary>What happens if I try to init repo:</summary>
<pre>
=> perl ./init-repository --module-subset=qtbase,qtimageformats
+ git submodule init qtbase qtimageformats
Submodule 'qtbase' (git://code.qt.io/qt/qtbase.git) registered for path 'qtbase'
Submodule 'qtimageformats' (git://code.qt.io/qt/qtimageformats.git) registered for path 'qtimageformats'
+ git config commit.template /Users/stek/qt5/.commit-template
+ git clone --no-checkout git://code.qt.io/qt/qt5/../qtbase.git qtbase
Cloning into 'qtbase'...
fatal: remote error: no such repository: /qt/qt5/../qtbase.git
git clone --no-checkout git://code.qt.io/qt/qt5/../qtbase.git qtbase exited with status 32768 at ./init-repository line 198.
Qt::InitRepository::exe(Qt::InitRepository=HASH(0x7f97af02bf40), "git", "clone", "--no-checkout", "git://code.qt.io/qt/qt5/../qtbase.git", "qtbase") called at ./init-repository line 534
Qt::InitRepository::git_clone_one_submodule(Qt::InitRepository=HASH(0x7f97af02bf40), "qtbase", "qt5/../qtbase.git", 0) called at ./init-repository line 408
Qt::InitRepository::git_clone_all_submodules(Qt::InitRepository=HASH(0x7f97af02bf40), "qt5", 0, "qtbase", "qtimageformats") called at ./init-repository line 644
Qt::InitRepository::run(Qt::InitRepository=HASH(0x7f97af02bf40)) called at ./init-repository line 655
</pre>
</details>
However, if I clone repo via `https://code.qt.io/qt/qt5.git` / change `origin` remote to it, everything works perfectly fine.
I'm not so confident, but I think `..` isn't supported for `git://`
|
non_process
|
https instead of git for qt repo link init repository fails if git is used for origin remote what happens if i try to init repo perl init repository module subset qtbase qtimageformats git submodule init qtbase qtimageformats submodule qtbase git code qt io qt qtbase git registered for path qtbase submodule qtimageformats git code qt io qt qtimageformats git registered for path qtimageformats git config commit template users stek commit template git clone no checkout git code qt io qt qtbase git qtbase cloning into qtbase fatal remote error no such repository qt qtbase git git clone no checkout git code qt io qt qtbase git qtbase exited with status at init repository line qt initrepository exe qt initrepository hash git clone no checkout git code qt io qt qtbase git qtbase called at init repository line qt initrepository git clone one submodule qt initrepository hash qtbase qtbase git called at init repository line qt initrepository git clone all submodules qt initrepository hash qtbase qtimageformats called at init repository line qt initrepository run qt initrepository hash called at init repository line however if i clone repo via change origin remote to it everything works perfectly fine i m not so confident but i think isn t supported for git
| 0
|
13,961
| 8,415,721,629
|
IssuesEvent
|
2018-10-13 17:33:45
|
cswinter/LocustDB
|
https://api.github.com/repos/cswinter/LocustDB
|
opened
|
Parallelize merge
|
performance
|
Final pass for merging query results is done by a single thread. Parallelizing this would give large speed up for queries with high cardinality group bys.
|
True
|
Parallelize merge - Final pass for merging query results is done by a single thread. Parallelizing this would give large speed up for queries with high cardinality group bys.
|
non_process
|
parallelize merge final pass for merging query results is done by a single thread parallelizing this would give large speed up for queries with high cardinality group bys
| 0
|
82,437
| 10,281,947,611
|
IssuesEvent
|
2019-08-26 09:47:00
|
tomkerkhove/promitor
|
https://api.github.com/repos/tomkerkhove/promitor
|
closed
|
Provide documentation how runtime YAML can be overridden via environment variable
|
documentation
|
Provide documentation how runtime YAML can be overridden via environment variable.
This is possible given environment variables are used in favor of YAML config, if they are specified.
Relates to #431
|
1.0
|
Provide documentation how runtime YAML can be overridden via environment variable - Provide documentation how runtime YAML can be overridden via environment variable.
This is possible given environment variables are used in favor of YAML config, if they are specified.
Relates to #431
|
non_process
|
provide documentation how runtime yaml can be overriden via environment variable provide documentation how runtime yaml can be overriden via environment variable this is possible given environment variables are used in favor of yaml config if they are specified relates to
| 0
|
16,948
| 22,303,067,547
|
IssuesEvent
|
2022-06-13 10:29:06
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
opened
|
Remove deprecated `CanProcessMessage<>` with message context
|
good first issue area:message-processing breaking-change
|
**Is your feature request related to a problem? Please describe.**
Previously, we were only filtering messages based on message context but since we added message body filtering and custom message deserialization, we added a dedicated `CanProcessMessageBasedOnContext<>`.
**Describe the solution you'd like**
Remove the deprecated `CanProcessMessage` method from the `MessageHandler` in the `Arcus.Messaging.Abstractions` project.
|
1.0
|
Remove deprecated `CanProcessMessage<>` with message context - **Is your feature request related to a problem? Please describe.**
Previously, we were only filtering messages based on message context but since we added message body filtering and custom message deserialization, we added a dedicated `CanProcessMessageBasedOnContext<>`.
**Describe the solution you'd like**
Remove the deprecated `CanProcessMessage` method from the `MessageHandler` in the `Arcus.Messaging.Abstractions` project.
|
process
|
remove deprecated canprocessmessage with message context is your feature request related to a problem please describe previously we were only filtering messages based on message context but since we added message body filtering and custom message deserialization we added a dedicated canprocessmessagebasedoncontext describe the solution you d like remove the deprecated canprocessmessage method from the messagehandler in the arcus messaging abstractions project
| 1
|
32,628
| 6,877,707,776
|
IssuesEvent
|
2017-11-20 09:11:19
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
closed
|
File::createIfNotExists is never used
|
defect
|
In Contao 4, the second argument for `File::__construct()` has been dropped. Thus files will not be created by default any more, when using `new File(…)`.
However in Contao 3 when the file was created, its directory was also created if it does not exist yet. This was done in `File::createIfNotExists` (see [File.php#L447-L451](https://github.com/contao/core/blob/3.5.30/system/modules/core/library/Contao/File.php#L447-L451)). In Contao 4 this function is never called anywhere any more though (but still present: [File.php#L408-L438](https://github.com/contao/core-bundle/blob/4.4.7/src/Resources/contao/library/Contao/File.php#L408-L438)), thus this functionality is not there any more in Contao 4.
Now the directory for a new file has to be created manually, before writing a file. I am not sure if this is intentional?
|
1.0
|
File::createIfNotExists is never used - In Contao 4, the second argument for `File::__construct()` has been dropped. Thus files will not be created by default any more, when using `new File(…)`.
However in Contao 3 when the file was created, its directory was also created if it does not exist yet. This was done in `File::createIfNotExists` (see [File.php#L447-L451](https://github.com/contao/core/blob/3.5.30/system/modules/core/library/Contao/File.php#L447-L451)). In Contao 4 this function is never called anywhere any more though (but still present: [File.php#L408-L438](https://github.com/contao/core-bundle/blob/4.4.7/src/Resources/contao/library/Contao/File.php#L408-L438)), thus this functionality is not there any more in Contao 4.
Now the directory for a new file has to be created manually, before writing a file. I am not sure if this is intentional?
|
non_process
|
file createifnotexists is never used in contao the second argument for file construct has been dropped thus files will not be created by default any more when using new file … however in contao when the file was created its directory was also created if it does not exist yet this was done in file createifnotexists see in contao this function is never called anywhere any more though but still present thus this functionality is not there any more in contao now the directory for a new file has to be created manually before writing a file i am not sure if this is intentional
| 0
|
10,635
| 8,658,221,290
|
IssuesEvent
|
2018-11-28 00:00:27
|
flutter/website
|
https://api.github.com/repos/flutter/website
|
closed
|
External link checking in builds is flakey
|
infrastructure team: flakes
|
_From @devoncarew on June 1, 2018 18:31_
See:
```
https://travis-ci.org/flutter/website/builds/386776246#L977
```
They flake when trying to validate external links, which can occasionally 404.
_Copied from original issue: flutter/flutter#18118_
|
1.0
|
External link checking in builds is flakey - _From @devoncarew on June 1, 2018 18:31_
See:
```
https://travis-ci.org/flutter/website/builds/386776246#L977
```
They flake when trying to validate external links, which can occasionally 404.
_Copied from original issue: flutter/flutter#18118_
|
non_process
|
external link checking in builds is flakey from devoncarew on june see they flake when trying to validate external links which can occasionally copied from original issue flutter flutter
| 0
|
6,810
| 9,956,228,340
|
IssuesEvent
|
2019-07-05 13:23:18
|
EthVM/EthVM
|
https://api.github.com/repos/EthVM/EthVM
|
closed
|
Parametrize window period on Kafka Processing
|
bug enhancement project:processing
|
* **I'm submitting a ...**
- [X] feature request
* **Feature Request**
- In processing MainNet we obtained the following exception:
```
[2019-06-03 07:02:48,998] WARN WorkerSinkTask{id=postgres-block-sink-0} Commit of offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
```
Which means that if for some reason we wait too long to obtain new data, the windows are expired. We need to parametrize that in our Kafka processing with an Env variable (so different networks can have different settings).
I'll mark this as an enhancement and as a bug, as we need this change for processing properly MainNet.
|
1.0
|
Parametrize window period on Kafka Processing - * **I'm submitting a ...**
- [X] feature request
* **Feature Request**
- In processing MainNet we obtained the following exception:
```
[2019-06-03 07:02:48,998] WARN WorkerSinkTask{id=postgres-block-sink-0} Commit of offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
```
Which means that if for some reason we wait too long to obtain new data, the windows are expired. We need to parametrize that in our Kafka processing with an Env variable (so different networks can have different settings).
I'll mark this as an enhancement and as a bug, as we need this change for processing properly MainNet.
|
process
|
parametrize window period on kafka processing i m submitting a feature request feature request in processing mainnet we obtained the following exception warn workersinktask id postgres block sink commit of offsets timed out org apache kafka connect runtime workersinktask which means that if for some reason we wait too long to obtain new data the windows are expired we need to parametrize that in our kafka processing with an env variable so different networks can have different settings i ll mark this as an enhancement and as a bug as we need this change for processing properly mainnet
| 1
|
16,379
| 21,101,001,527
|
IssuesEvent
|
2022-04-04 14:29:44
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] Resources > HTML breakage is observed when resources configured with Rich text editor is shared
|
Bug P1 Android Process: Tested dev
|
**Steps:**
1. Install the Android app
2. Signup/login and enroll into study
3. Navigate to resources
4. Open any resource with RTE configured
5. Click on share and choose email
6. Observe the file attached
**AR:** HTML breakage is observed when resources configured with Rich text editor is shared
**ER:** HTML breakage should not be observed when resources configured with Rich text editor is shared

|
1.0
|
[Android] Resources > HTML breakage is observed when resources configured with Rich text editor is shared - **Steps:**
1. Install the Android app
2. Signup/login and enroll into study
3. Navigate to resources
4. Open any resource with RTE configured
5. Click on share and choose email
6. Observe the file attached
**AR:** HTML breakage is observed when resources configured with Rich text editor is shared
**ER:** HTML breakage should not be observed when resources configured with Rich text editor is shared

|
process
|
resources html breakage is observed when resources configured with rich text editor is shared steps install the android app signup login and enroll into study navigate to resources open any resource with rte configured click on share and choose email observe the file attached ar html breakage is observed when resources configured with rich text editor is shared er html breakage should not be observed when resources configured with rich text editor is shared
| 1
|
3,162
| 5,539,419,378
|
IssuesEvent
|
2017-03-22 06:28:51
|
nus-mtp/e-tutorial
|
https://api.github.com/repos/nus-mtp/e-tutorial
|
closed
|
Remove userid from lobby url
|
question requirement
|
Appears to be unnecessary now that auth is implemented? Or is it necessary?
|
1.0
|
Remove userid from lobby url - Appears to be unnecessary now that auth is implemented? Or is it necessary?
|
non_process
|
remove userid from lobby url appears to be unnecessary now that auth is implemented or is it necessary
| 0
|
296,055
| 25,524,654,843
|
IssuesEvent
|
2022-11-29 00:29:28
|
durgapal/gh-actions-npm-audit
|
https://api.github.com/repos/durgapal/gh-actions-npm-audit
|
opened
|
npm audit found vulnerabilities
|
vulnerability test
|
```
# npm audit report
async 2.0.0 - 2.6.3
Severity: high
Prototype Pollution in async - https://github.com/advisories/GHSA-fwr7-v2mv-hh25
Depends on vulnerable versions of lodash
fix available via `npm audit fix`
node_modules/async
mongoose 0.0.3 - 0.0.6 || 1.7.2 - 5.13.8 || 6.0.0-rc0 - 6.0.3
Depends on vulnerable versions of async
Depends on vulnerable versions of bson
Depends on vulnerable versions of mongodb
Depends on vulnerable versions of mpath
Depends on vulnerable versions of mquery
node_modules/mongoose
base64url <3.0.0
Severity: moderate
Out-of-bounds Read in base64url - https://github.com/advisories/GHSA-rvg8-pwq2-xj7q
fix available via `npm audit fix`
node_modules/base64url
ecdsa-sig-formatter 1.0.9
Depends on vulnerable versions of base64url
node_modules/ecdsa-sig-formatter
jwa <=1.1.5
Depends on vulnerable versions of base64url
Depends on vulnerable versions of ecdsa-sig-formatter
node_modules/jwa
jws <=3.1.4
Depends on vulnerable versions of base64url
Depends on vulnerable versions of jwa
node_modules/jws
jsonwebtoken <=4.2.2
Depends on vulnerable versions of jws
node_modules/jsonwebtoken
bson <=1.1.3
Severity: high
Deserialization of Untrusted Data in bson - https://github.com/advisories/GHSA-4jwp-vfvf-657p
Deserialization of Untrusted Data in bson - https://github.com/advisories/GHSA-v8w9-2789-6hhr
fix available via `npm audit fix`
node_modules/bson
mongodb-core <=3.1.1
Depends on vulnerable versions of bson
node_modules/mongodb-core
mongodb <=3.1.12
Depends on vulnerable versions of mongodb-core
node_modules/mongodb
clean-css <4.1.11
Regular Expression Denial of Service in clean-css - https://github.com/advisories/GHSA-wxhq-pm8v-cw75
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/clean-css
jade >=0.30.0
Depends on vulnerable versions of clean-css
Depends on vulnerable versions of constantinople
Depends on vulnerable versions of mkdirp
Depends on vulnerable versions of transformers
node_modules/jade
constantinople <3.1.1
Severity: critical
Sandbox Bypass Leading to Arbitrary Code Execution in constantinople - https://github.com/advisories/GHSA-4vmm-mhcq-4x9j
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/constantinople
dicer *
Severity: high
Crash in HeaderParser in dicer - https://github.com/advisories/GHSA-wm7h-9275-46v2
No fix available
node_modules/dicer
busboy <=0.3.1
Depends on vulnerable versions of dicer
node_modules/busboy
express-fileupload <=1.3.1
Depends on vulnerable versions of busboy
node_modules/express-fileupload
multer <=2.0.0-rc.3
Depends on vulnerable versions of busboy
Depends on vulnerable versions of mkdirp
node_modules/multer
helmet-csp 1.2.2 - 2.9.0
Severity: moderate
Configuration Override in helmet-csp - https://github.com/advisories/GHSA-c3m8-x3cg-qm2c
fix available via `npm audit fix`
node_modules/helmet-csp
helmet 2.1.2 - 3.20.1
Depends on vulnerable versions of helmet-csp
node_modules/helmet
js-yaml <=3.13.0
Severity: high
Denial of Service in js-yaml - https://github.com/advisories/GHSA-2pr6-76vf-7546
Code Injection in js-yaml - https://github.com/advisories/GHSA-8j8c-7jfh-h6hx
fix available via `npm audit fix`
node_modules/js-yaml
lodash <=4.17.20
Severity: critical
Prototype Pollution in lodash - https://github.com/advisories/GHSA-jf85-cpcp-j695
Regular Expression Denial of Service (ReDoS) in lodash - https://github.com/advisories/GHSA-x5rq-j2xg-h7qm
Prototype Pollution in lodash - https://github.com/advisories/GHSA-p6mc-m468-83gw
Prototype Pollution in lodash - https://github.com/advisories/GHSA-4xc9-xhrj-v574
Command Injection in lodash - https://github.com/advisories/GHSA-35jh-r3h4-6jhm
Regular Expression Denial of Service (ReDoS) in lodash - https://github.com/advisories/GHSA-29mw-wpgm-hmr9
fix available via `npm audit fix`
node_modules/lodash
express-validator 0.2.0 - 6.4.1
Depends on vulnerable versions of lodash
Depends on vulnerable versions of validator
node_modules/express-validator
mime <1.4.1
Severity: moderate
mime Regular Expression Denial of Service when mime lookup performed on untrusted user input - https://github.com/advisories/GHSA-wrvr-8mpx-r7pp
fix available via `npm audit fix --force`
Will install express@4.18.2, which is outside the stated dependency range
node_modules/mime
send <=0.15.6
Depends on vulnerable versions of mime
node_modules/send
express 3.0.0-alpha1 - 4.15.5 || 5.0.0-alpha.1 - 5.0.0-alpha.6
Depends on vulnerable versions of send
Depends on vulnerable versions of serve-static
node_modules/express
serve-static <=1.12.6
Depends on vulnerable versions of send
node_modules/serve-static
minimatch <3.0.5
Severity: high
minimatch ReDoS vulnerability - https://github.com/advisories/GHSA-f8q6-p94x-37v3
fix available via `npm audit fix`
node_modules/minimatch
glob 3.0.0 - 5.0.14
Depends on vulnerable versions of minimatch
node_modules/glob
minimist <=1.2.5
Severity: critical
Prototype Pollution in minimist - https://github.com/advisories/GHSA-xvch-5gv4-984h
Prototype Pollution in minimist - https://github.com/advisories/GHSA-vh95-rmgr-6w4m
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/minimist
mkdirp 0.4.1 - 0.5.1
Depends on vulnerable versions of minimist
node_modules/mkdirp
mv
Depends on vulnerable versions of mkdirp
node_modules/mv
moment <=2.29.3
Severity: high
Path Traversal: 'dir/../../filename' in moment.locale - https://github.com/advisories/GHSA-8hfj-j24r-96c4
Moment.js vulnerable to Inefficient Regular Expression Complexity - https://github.com/advisories/GHSA-wc69-rhjr-hc9g
fix available via `npm audit fix`
node_modules/moment
bunyan
Depends on vulnerable versions of moment
node_modules/bunyan
morgan <1.9.1
Severity: moderate
Code Injection in morgan - https://github.com/advisories/GHSA-gwg9-rgvj-4h5j
fix available via `npm audit fix`
node_modules/morgan
mpath <=0.8.3
Severity: critical
Type confusion in mpath - https://github.com/advisories/GHSA-p92x-r36w-9395
Prototype Pollution in mpath - https://github.com/advisories/GHSA-h466-j336-74wx
fix available via `npm audit fix`
node_modules/mpath
mquery <3.2.3
Severity: moderate
Code Injection in mquery - https://github.com/advisories/GHSA-45q2-34rf-mr94
fix available via `npm audit fix`
node_modules/mquery
node-serialize *
Severity: critical
Code Execution through IIFE in node-serialize - https://github.com/advisories/GHSA-q4v7-4rhw-9hqm
No fix available
node_modules/node-serialize
uglify-js <=2.5.0
Severity: critical
Incorrect Handling of Non-Boolean Comparisons During Minification in uglify-js - https://github.com/advisories/GHSA-34r7-q49f-h37c
Regular Expression Denial of Service in uglify-js - https://github.com/advisories/GHSA-c9f4-xj24-8jqx
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/transformers/node_modules/uglify-js
transformers 2.0.0 - 3.0.1
Depends on vulnerable versions of uglify-js
node_modules/transformers
validator <13.7.0
Severity: moderate
Inefficient Regular Expression Complexity in validator.js - https://github.com/advisories/GHSA-qgmg-gppg-76g5
fix available via `npm audit fix`
node_modules/validator
40 vulnerabilities (1 low, 18 moderate, 11 high, 10 critical)
To address issues that do not require attention, run:
npm audit fix
To address all issues possible (including breaking changes), run:
npm audit fix --force
Some issues need review, and may require choosing
a different dependency.
```
|
1.0
|
npm audit found vulnerabilities - ```
# npm audit report
async 2.0.0 - 2.6.3
Severity: high
Prototype Pollution in async - https://github.com/advisories/GHSA-fwr7-v2mv-hh25
Depends on vulnerable versions of lodash
fix available via `npm audit fix`
node_modules/async
mongoose 0.0.3 - 0.0.6 || 1.7.2 - 5.13.8 || 6.0.0-rc0 - 6.0.3
Depends on vulnerable versions of async
Depends on vulnerable versions of bson
Depends on vulnerable versions of mongodb
Depends on vulnerable versions of mpath
Depends on vulnerable versions of mquery
node_modules/mongoose
base64url <3.0.0
Severity: moderate
Out-of-bounds Read in base64url - https://github.com/advisories/GHSA-rvg8-pwq2-xj7q
fix available via `npm audit fix`
node_modules/base64url
ecdsa-sig-formatter 1.0.9
Depends on vulnerable versions of base64url
node_modules/ecdsa-sig-formatter
jwa <=1.1.5
Depends on vulnerable versions of base64url
Depends on vulnerable versions of ecdsa-sig-formatter
node_modules/jwa
jws <=3.1.4
Depends on vulnerable versions of base64url
Depends on vulnerable versions of jwa
node_modules/jws
jsonwebtoken <=4.2.2
Depends on vulnerable versions of jws
node_modules/jsonwebtoken
bson <=1.1.3
Severity: high
Deserialization of Untrusted Data in bson - https://github.com/advisories/GHSA-4jwp-vfvf-657p
Deserialization of Untrusted Data in bson - https://github.com/advisories/GHSA-v8w9-2789-6hhr
fix available via `npm audit fix`
node_modules/bson
mongodb-core <=3.1.1
Depends on vulnerable versions of bson
node_modules/mongodb-core
mongodb <=3.1.12
Depends on vulnerable versions of mongodb-core
node_modules/mongodb
clean-css <4.1.11
Regular Expression Denial of Service in clean-css - https://github.com/advisories/GHSA-wxhq-pm8v-cw75
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/clean-css
jade >=0.30.0
Depends on vulnerable versions of clean-css
Depends on vulnerable versions of constantinople
Depends on vulnerable versions of mkdirp
Depends on vulnerable versions of transformers
node_modules/jade
constantinople <3.1.1
Severity: critical
Sandbox Bypass Leading to Arbitrary Code Execution in constantinople - https://github.com/advisories/GHSA-4vmm-mhcq-4x9j
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/constantinople
dicer *
Severity: high
Crash in HeaderParser in dicer - https://github.com/advisories/GHSA-wm7h-9275-46v2
No fix available
node_modules/dicer
busboy <=0.3.1
Depends on vulnerable versions of dicer
node_modules/busboy
express-fileupload <=1.3.1
Depends on vulnerable versions of busboy
node_modules/express-fileupload
multer <=2.0.0-rc.3
Depends on vulnerable versions of busboy
Depends on vulnerable versions of mkdirp
node_modules/multer
helmet-csp 1.2.2 - 2.9.0
Severity: moderate
Configuration Override in helmet-csp - https://github.com/advisories/GHSA-c3m8-x3cg-qm2c
fix available via `npm audit fix`
node_modules/helmet-csp
helmet 2.1.2 - 3.20.1
Depends on vulnerable versions of helmet-csp
node_modules/helmet
js-yaml <=3.13.0
Severity: high
Denial of Service in js-yaml - https://github.com/advisories/GHSA-2pr6-76vf-7546
Code Injection in js-yaml - https://github.com/advisories/GHSA-8j8c-7jfh-h6hx
fix available via `npm audit fix`
node_modules/js-yaml
lodash <=4.17.20
Severity: critical
Prototype Pollution in lodash - https://github.com/advisories/GHSA-jf85-cpcp-j695
Regular Expression Denial of Service (ReDoS) in lodash - https://github.com/advisories/GHSA-x5rq-j2xg-h7qm
Prototype Pollution in lodash - https://github.com/advisories/GHSA-p6mc-m468-83gw
Prototype Pollution in lodash - https://github.com/advisories/GHSA-4xc9-xhrj-v574
Command Injection in lodash - https://github.com/advisories/GHSA-35jh-r3h4-6jhm
Regular Expression Denial of Service (ReDoS) in lodash - https://github.com/advisories/GHSA-29mw-wpgm-hmr9
fix available via `npm audit fix`
node_modules/lodash
express-validator 0.2.0 - 6.4.1
Depends on vulnerable versions of lodash
Depends on vulnerable versions of validator
node_modules/express-validator
mime <1.4.1
Severity: moderate
mime Regular Expression Denial of Service when mime lookup performed on untrusted user input - https://github.com/advisories/GHSA-wrvr-8mpx-r7pp
fix available via `npm audit fix --force`
Will install express@4.18.2, which is outside the stated dependency range
node_modules/mime
send <=0.15.6
Depends on vulnerable versions of mime
node_modules/send
express 3.0.0-alpha1 - 4.15.5 || 5.0.0-alpha.1 - 5.0.0-alpha.6
Depends on vulnerable versions of send
Depends on vulnerable versions of serve-static
node_modules/express
serve-static <=1.12.6
Depends on vulnerable versions of send
node_modules/serve-static
minimatch <3.0.5
Severity: high
minimatch ReDoS vulnerability - https://github.com/advisories/GHSA-f8q6-p94x-37v3
fix available via `npm audit fix`
node_modules/minimatch
glob 3.0.0 - 5.0.14
Depends on vulnerable versions of minimatch
node_modules/glob
minimist <=1.2.5
Severity: critical
Prototype Pollution in minimist - https://github.com/advisories/GHSA-xvch-5gv4-984h
Prototype Pollution in minimist - https://github.com/advisories/GHSA-vh95-rmgr-6w4m
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/minimist
mkdirp 0.4.1 - 0.5.1
Depends on vulnerable versions of minimist
node_modules/mkdirp
mv
Depends on vulnerable versions of mkdirp
node_modules/mv
moment <=2.29.3
Severity: high
Path Traversal: 'dir/../../filename' in moment.locale - https://github.com/advisories/GHSA-8hfj-j24r-96c4
Moment.js vulnerable to Inefficient Regular Expression Complexity - https://github.com/advisories/GHSA-wc69-rhjr-hc9g
fix available via `npm audit fix`
node_modules/moment
bunyan
Depends on vulnerable versions of moment
node_modules/bunyan
morgan <1.9.1
Severity: moderate
Code Injection in morgan - https://github.com/advisories/GHSA-gwg9-rgvj-4h5j
fix available via `npm audit fix`
node_modules/morgan
mpath <=0.8.3
Severity: critical
Type confusion in mpath - https://github.com/advisories/GHSA-p92x-r36w-9395
Prototype Pollution in mpath - https://github.com/advisories/GHSA-h466-j336-74wx
fix available via `npm audit fix`
node_modules/mpath
mquery <3.2.3
Severity: moderate
Code Injection in mquery - https://github.com/advisories/GHSA-45q2-34rf-mr94
fix available via `npm audit fix`
node_modules/mquery
node-serialize *
Severity: critical
Code Execution through IIFE in node-serialize - https://github.com/advisories/GHSA-q4v7-4rhw-9hqm
No fix available
node_modules/node-serialize
uglify-js <=2.5.0
Severity: critical
Incorrect Handling of Non-Boolean Comparisons During Minification in uglify-js - https://github.com/advisories/GHSA-34r7-q49f-h37c
Regular Expression Denial of Service in uglify-js - https://github.com/advisories/GHSA-c9f4-xj24-8jqx
fix available via `npm audit fix --force`
Will install jade@1.9.2, which is a breaking change
node_modules/transformers/node_modules/uglify-js
transformers 2.0.0 - 3.0.1
Depends on vulnerable versions of uglify-js
node_modules/transformers
validator <13.7.0
Severity: moderate
Inefficient Regular Expression Complexity in validator.js - https://github.com/advisories/GHSA-qgmg-gppg-76g5
fix available via `npm audit fix`
node_modules/validator
40 vulnerabilities (1 low, 18 moderate, 11 high, 10 critical)
To address issues that do not require attention, run:
npm audit fix
To address all issues possible (including breaking changes), run:
npm audit fix --force
Some issues need review, and may require choosing
a different dependency.
```
|
non_process
|
npm audit found vulnerabilities npm audit report async severity high prototype pollution in async depends on vulnerable versions of lodash fix available via npm audit fix node modules async mongoose depends on vulnerable versions of async depends on vulnerable versions of bson depends on vulnerable versions of mongodb depends on vulnerable versions of mpath depends on vulnerable versions of mquery node modules mongoose severity moderate out of bounds read in fix available via npm audit fix node modules ecdsa sig formatter depends on vulnerable versions of node modules ecdsa sig formatter jwa depends on vulnerable versions of depends on vulnerable versions of ecdsa sig formatter node modules jwa jws depends on vulnerable versions of depends on vulnerable versions of jwa node modules jws jsonwebtoken depends on vulnerable versions of jws node modules jsonwebtoken bson severity high deserialization of untrusted data in bson deserialization of untrusted data in bson fix available via npm audit fix node modules bson mongodb core depends on vulnerable versions of bson node modules mongodb core mongodb depends on vulnerable versions of mongodb core node modules mongodb clean css regular expression denial of service in clean css fix available via npm audit fix force will install jade which is a breaking change node modules clean css jade depends on vulnerable versions of clean css depends on vulnerable versions of constantinople depends on vulnerable versions of mkdirp depends on vulnerable versions of transformers node modules jade constantinople severity critical sandbox bypass leading to arbitrary code execution in constantinople fix available via npm audit fix force will install jade which is a breaking change node modules constantinople dicer severity high crash in headerparser in dicer no fix available node modules dicer busboy depends on vulnerable versions of dicer node modules busboy express fileupload depends on vulnerable versions of busboy node modules express fileupload multer rc depends on vulnerable versions of busboy depends on vulnerable versions of mkdirp node modules multer helmet csp severity moderate configuration override in helmet csp fix available via npm audit fix node modules helmet csp helmet depends on vulnerable versions of helmet csp node modules helmet js yaml severity high denial of service in js yaml code injection in js yaml fix available via npm audit fix node modules js yaml lodash severity critical prototype pollution in lodash regular expression denial of service redos in lodash prototype pollution in lodash prototype pollution in lodash command injection in lodash regular expression denial of service redos in lodash fix available via npm audit fix node modules lodash express validator depends on vulnerable versions of lodash depends on vulnerable versions of validator node modules express validator mime severity moderate mime regular expression denial of service when mime lookup performed on untrusted user input fix available via npm audit fix force will install express which is outside the stated dependency range node modules mime send depends on vulnerable versions of mime node modules send express alpha alpha depends on vulnerable versions of send depends on vulnerable versions of serve static node modules express serve static depends on vulnerable versions of send node modules serve static minimatch severity high minimatch redos vulnerability fix available via npm audit fix node modules minimatch glob depends on vulnerable versions of minimatch node modules glob minimist severity critical prototype pollution in minimist prototype pollution in minimist fix available via npm audit fix force will install jade which is a breaking change node modules minimist mkdirp depends on vulnerable versions of minimist node modules mkdirp mv depends on vulnerable versions of mkdirp node modules mv moment severity high path traversal dir filename in moment locale moment js vulnerable to inefficient regular expression complexity fix available via npm audit fix node modules moment bunyan depends on vulnerable versions of moment node modules bunyan morgan severity moderate code injection in morgan fix available via npm audit fix node modules morgan mpath severity critical type confusion in mpath prototype pollution in mpath fix available via npm audit fix node modules mpath mquery severity moderate code injection in mquery fix available via npm audit fix node modules mquery node serialize severity critical code execution through iife in node serialize no fix available node modules node serialize uglify js severity critical incorrect handling of non boolean comparisons during minification in uglify js regular expression denial of service in uglify js fix available via npm audit fix force will install jade which is a breaking change node modules transformers node modules uglify js transformers depends on vulnerable versions of uglify js node modules transformers validator severity moderate inefficient regular expression complexity in validator js fix available via npm audit fix node modules validator vulnerabilities low moderate high critical to address issues that do not require attention run npm audit fix to address all issues possible including breaking changes run npm audit fix force some issues need review and may require choosing a different dependency
| 0
|
10,304
| 13,155,204,200
|
IssuesEvent
|
2020-08-10 08:26:20
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Support service workers
|
STATE: Need response SYSTEM: URL processing SYSTEM: client side processing SYSTEM: script processing TYPE: enhancement
|
<!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
<!-- Describe what you'd like to test. -->
I have an app using msw, a mock service worker which provides seamless mocking between node & browser using a service worker. In the browser, it does this by mocking the fetch calls in a service worker.
When I run my unit tests (in jest) or when I run my app in the browser, they both can share the same mock endpoints. I am now trying to run the tests in testcafe but running into what i believe could be a conflict between `transport-worker` and `mockServiceWorker` which is causing this problem. The details of the bug can be found here https://github.com/mswjs/msw/issues/259 (I posted it in both projects just in case)
### What is the Current behavior?
<!-- Describe the behavior you see and consider invalid. -->
mockServiceWorker is not able to intercept fetches.
### What is the Expected behavior?
<!-- Describe what you expected to happen. -->
mockServiceWorker is able to intercept fetches.
### What is your web application and your TestCafe test code?
<!-- Share a public accessible link to your application or provide a simple app which we can run. -->
https://github.com/benmonro/msw-server/tree/feature/testcafe
<details>
<summary>Your website URL (or attach your complete example):</summary>
just running locally in above repo.
<!-- Provide your website URL or attach a sample. Note: if your website requires any additional access procedures like authentication, please ask the website owner to send us a written confirmation at [support@devexpress.com](mailto:support@devexpress.com) in a free text form. It will allow the DevExpress staff to remotely access the website and its internal resources for research, testing, and debugging purposes. -->
</details>
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
```
</details>
<details>
<summary>Your complete configuration file (if any):</summary>
<!-- Paste your complete test config file here (even if it is huge): -->
```
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
```
</details>
<details>
<summary>Screenshots:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: <!-- run `testcafe -v` -->
* node.js version: <!-- run `node -v` -->
* command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" -->
* browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. -->
* platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
* other: <!-- any notes you consider important -->
|
3.0
|
Support service workers - <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
<!-- Describe what you'd like to test. -->
I have an app using msw, a mock service worker which provides seamless mocking between node & browser using a service worker. In the browser, it does this by mocking the fetch calls in a service worker.
When I run my unit tests (in jest) or when I run my app in the browser, they both can share the same mock endpoints. I am now trying to run the tests in testcafe but running into what i believe could be a conflict between `transport-worker` and `mockServiceWorker` which is causing this problem. The details of the bug can be found here https://github.com/mswjs/msw/issues/259 (I posted it in both projects just in case)
### What is the Current behavior?
<!-- Describe the behavior you see and consider invalid. -->
mockServiceWorker is not able to intercept fetches.
### What is the Expected behavior?
<!-- Describe what you expected to happen. -->
mockServiceWorker is able to intercept fetches.
### What is your web application and your TestCafe test code?
<!-- Share a public accessible link to your application or provide a simple app which we can run. -->
https://github.com/benmonro/msw-server/tree/feature/testcafe
<details>
<summary>Your website URL (or attach your complete example):</summary>
just running locally in above repo.
<!-- Provide your website URL or attach a sample. Note: if your website requires any additional access procedures like authentication, please ask the website owner to send us a written confirmation at [support@devexpress.com](mailto:support@devexpress.com) in a free text form. It will allow the DevExpress staff to remotely access the website and its internal resources for research, testing, and debugging purposes. -->
</details>
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
```
</details>
<details>
<summary>Your complete configuration file (if any):</summary>
<!-- Paste your complete test config file here (even if it is huge): -->
```
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
```
</details>
<details>
<summary>Screenshots:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: <!-- run `testcafe -v` -->
* node.js version: <!-- run `node -v` -->
* command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" -->
* browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. -->
* platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
* other: <!-- any notes you consider important -->
|
process
|
support service workers if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario i have an app using msw a mock service worker which provides seamless mocking between node browser using a service worker in the browser it does this by mocking the fetch calls in a service worker when i run my unit tests in jest or when i run my app in the browser they both can share the same mock endpoints i am now trying to run the tests in testcafe but running into what i believe could be a conflict between transport worker and mockserviceworker which is causing this problem the details of the bug can be found here i posted it in both projects just in case what is the current behavior mockserviceworker is not able to intercept fetches what is the expected behavior mockserviceworker is able to intercept fetches what is your web application and your testcafe test code your website url or attach your complete example just running locally in above repo your complete test code or attach your test files js your complete configuration file if any your complete test report screenshots steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments browser name and version platform and version other
| 1
|
432,758
| 12,497,812,227
|
IssuesEvent
|
2020-06-01 17:09:10
|
ChainSafe/gossamer
|
https://api.github.com/repos/ChainSafe/gossamer
|
opened
|
update "Custom Services" document
|
Priority: 3 - Medium docs
|
<!---
PLEASE READ CAREFULLY
-->
## Possible Solution
<!---
Not obligatory, but this is the place to suggest the underlying cause and
possible fix for the bug, if you have one, or ideas on how to implement the
fix. We'll be sure to credit your ideas in the commit log, or better yet,
submit a PR and you'll get credit for the whole thing.
-->
- update “Custom Services” document to include an introduction to custom node services
- see existing page at https://chainsafe.github.io/gossamer/custom-services/
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CONTRIBUTING](CONTRIBUTING.md) and [CODE_OF_CONDUCT](CODE_OF_CONDUCT.md)
- [x] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
<!--- Modified from trufflesuite/ganache -->
|
1.0
|
update "Custom Services" document - <!---
PLEASE READ CAREFULLY
-->
## Possible Solution
<!---
Not obligatory, but this is the place to suggest the underlying cause and
possible fix for the bug, if you have one, or ideas on how to implement the
fix. We'll be sure to credit your ideas in the commit log, or better yet,
submit a PR and you'll get credit for the whole thing.
-->
- update “Custom Services” document to include an introduction to custom node services
- see existing page at https://chainsafe.github.io/gossamer/custom-services/
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CONTRIBUTING](CONTRIBUTING.md) and [CODE_OF_CONDUCT](CODE_OF_CONDUCT.md)
- [x] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
<!--- Modified from trufflesuite/ganache -->
|
non_process
|
update custom services document please read carefully possible solution not obligatory but this is the place to suggest the underlying cause and possible fix for the bug if you have one or ideas on how to implement the fix we ll be sure to credit your ideas in the commit log or better yet submit a pr and you ll get credit for the whole thing update “custom services” document to include an introduction to custom node services see existing page at checklist each empty square brackets below is a checkbox replace with to check the box after completing the task i have read contributing md and code of conduct md i have provided as much information as possible and necessary i am planning to submit a pull request to fix this issue myself
| 0
|
267,803
| 20,246,367,702
|
IssuesEvent
|
2022-02-14 14:06:43
|
FDSN/StationXML
|
https://api.github.com/repos/FDSN/StationXML
|
opened
|
warn of potential deprecations and schema updates
|
documentation
|
Before releasing the documentation updates, would be good to review all issues that are "schema change" issues to create a list of future changes. Perhaps add warnings to any elements that are likely to change or be removed in the 2.0 revision.
|
1.0
|
warn of potential deprecations and schema updates - Before releasing the documentation updates, would be good to review all issues that are "schema change" issues to create a list of future changes. Perhaps add warnings to any elements that are likely to change or be removed in the 2.0 revision.
|
non_process
|
warn of potential deprecations and schema updates before releasing the documentation updates would be good to review all issues that are schema change issues to create a list of future changes perhaps add warnings to any elements that are likely to change or be removed in the revision
| 0
|
207,556
| 7,131,119,803
|
IssuesEvent
|
2018-01-22 09:50:46
|
Jumpscale/go-raml
|
https://api.github.com/repos/Jumpscale/go-raml
|
closed
|
Include commit hash and real version in binary
|
priority_critical state_inprogress
|

something need to be bumped, so we know doc in line with tool
|
1.0
|
Include commit hash and real version in binary - 
something need to be bumped, so we know doc in line with tool
|
non_process
|
include commit hash and real version in binary something need to be bumped so we know doc in line with tool
| 0
|
28,079
| 6,942,642,849
|
IssuesEvent
|
2017-12-05 01:05:52
|
18F/doi-extractives-data
|
https://api.github.com/repos/18F/doi-extractives-data
|
closed
|
Training: updating site content
|
training type: non-code task workflow:backlog workflow:blocked
|
As a member of the ONRR team, I want to be able to update site content (like case studies) as-needed.
|
1.0
|
Training: updating site content - As a member of the ONRR team, I want to be able to update site content (like case studies) as-needed.
|
non_process
|
training updating site content as a member of the onrr team i want to be able to update site content like case studies as needed
| 0
|
18,336
| 24,457,074,468
|
IssuesEvent
|
2022-10-07 07:48:46
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
Remove support for Python 3.7
|
process + tools upstream
|
[NumPy 1.22](https://numpy.org/devdocs/release/1.22.0-notes.html) no longer supports Python 3.7, so we should consider removing support for it in the next release of sgkit.
This would be consistent with https://numpy.org/neps/nep-0029-deprecation_policy.html, which says Python 3.7 can be dropped from Dec 26, 2021.
|
1.0
|
Remove support for Python 3.7 - [NumPy 1.22](https://numpy.org/devdocs/release/1.22.0-notes.html) no longer supports Python 3.7, so we should consider removing support for it in the next release of sgkit.
This would be consistent with https://numpy.org/neps/nep-0029-deprecation_policy.html, which says Python 3.7 can be dropped from Dec 26, 2021.
|
process
|
remove support for python no longer supports python so we should consider removing support for it in the next release of sgkit this would be consistent with which says python can be dropped from dec
| 1
|
577,089
| 17,103,171,729
|
IssuesEvent
|
2021-07-09 14:06:00
|
Automattic/woocommerce-payments
|
https://api.github.com/repos/Automattic/woocommerce-payments
|
closed
|
WP 5.6 - Test with core JQuery update
|
good first issue priority: high size: small
|
WordPress is rolling the version of jQuery bundled with it. Let's test compatibility. https://make.wordpress.org/core/2020/06/29/updating-jquery-version-shipped-with-wordpress/
Thankfully, WCPay barely uses jQuery, so this should be quick.
|
1.0
|
WP 5.6 - Test with core JQuery update - WordPress is rolling the version of jQuery bundled with it. Let's test compatibility. https://make.wordpress.org/core/2020/06/29/updating-jquery-version-shipped-with-wordpress/
Thankfully, WCPay barely uses jQuery, so this should be quick.
|
non_process
|
wp test with core jquery update wordpress is rolling the version of jquery bundled with it let s test compatibility thankfully wcpay barely uses jquery so this should be quick
| 0
|
9,619
| 12,555,967,230
|
IssuesEvent
|
2020-06-07 08:05:50
|
ForumPostAssistant/FPA
|
https://api.github.com/repos/ForumPostAssistant/FPA
|
closed
|
Change ordinal date format to cardinal days and abbreviated month format.
|
In Process
|
The current FPA report displays the date as an ordinal number with the full month name

Most style guides for writers recommend the use of cardinal numbers in date formats.
Because different localities use different date representation formats (dd/mm/yyyy, mm/dd/yy, yyyy/mm/dd, etc.) a purely numerical representation of the month should be avoided.
I suggest the date format j-M-Y at line 4722 instead
In the illustrated example above, this would display as "2-Jun-2020". Small saving in total character count; less likely to be misinterpreted by the reader.
|
1.0
|
Change ordinal date format to cardinal days and abbreviated month format. - The current FPA report displays the date as an ordinal number with the full month name

Most style guides for writers recommend the use of cardinal numbers in date formats.
Because different localities use different date representation formats (dd/mm/yyyy, mm/dd/yy, yyyy/mm/dd, etc.) a purely numerical representation of the month should be avoided.
I suggest the date format j-M-Y at line 4722 instead
In the illustrated example above, this would display as "2-Jun-2020". Small saving in total character count; less likely to be misinterpreted by the reader.
|
process
|
change ordinal date format to cardinal days and abbreviated month format the current fpa report displays the date as an ordinal number with the full month name most style guides for writers recommend the use of cardinal numbers in date formats because different localities use different date representation formats dd mm yyyy mm dd yy yyyy mm dd etc a purely numerical representation of the month should be avoided i suggest the date format j m y at line instead in the illustrated example above this would display as jun small saving in total character count less likely to be misinterpreted by the reader
| 1
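The record above proposes the PHP date format `j-M-Y` (cardinal day, abbreviated month, full year). As a minimal Python sketch of the same rendering — the function name is illustrative, not part of the FPA codebase, and `%b` assumes an English (C) locale:

```python
from datetime import date

def format_cardinal(d: date) -> str:
    """Render a date as cardinal day, abbreviated month, full year, e.g. 2-Jun-2020.

    %-d is not portable (it fails on Windows), so the day component is
    built from d.day instead of strftime.
    """
    return f"{d.day}-{d.strftime('%b')}-{d.year}"
```

For the example in the record, `format_cardinal(date(2020, 6, 2))` yields `"2-Jun-2020"` — a cardinal day with no locale-ambiguous numeric month.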
|
544,015
| 15,888,752,272
|
IssuesEvent
|
2021-04-10 08:49:40
|
zeoflow/flow-kit
|
https://api.github.com/repos/zeoflow/flow-kit
|
closed
|
RecyclerView Extensions
|
@feature @invalid @priority-medium
|
A recycler view with useful utils such as logger and view holder incorporated.
We also happily accept [pull requests](https://github.com/zeoflow/flow-kit/pulls).
|
1.0
|
RecyclerView Extensions - A recycler view with useful utils such as logger and view holder incorporated.
We also happily accept [pull requests](https://github.com/zeoflow/flow-kit/pulls).
|
non_process
|
recyclerview extensions a recycler view with useful utils such as logger and view holder incorporated we also happily accept
| 0
|
17,775
| 23,702,472,062
|
IssuesEvent
|
2022-08-29 20:25:27
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
Revert the temporarily adjustment of a failing system test
|
api: bigquery type: process testing
|
Once #61 is fixed, the changes from #67 **must** be reverted.
|
1.0
|
Revert the temporarily adjustment of a failing system test - Once #61 is fixed, the changes from #67 **must** be reverted.
|
process
|
revert the temporarily adjustment of a failing system test once is fixed the changes from must be reverted
| 1
|
8,942
| 12,056,761,468
|
IssuesEvent
|
2020-04-15 14:54:06
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
opened
|
Separate deleting blobs process from deleting from index process
|
BUG :bug: EPIC - Auto Batch Process :oncoming_automobile: Enhancement 💫
|
## User want
As a _technical user_
I want _to properly understand error states_
So that _I am able to debug where edge cases are failing_
## Acceptance Criteria
- [ ] New workflow for deletes:
- Receive message to delete document
- Look up in index
- Delete blob
- If successful, dispatch message to delete in index
- Receive message to delete index
- Delete index
- Mark as done
### Customer acceptance criteria
- [ ] _If a blob doesn't exist but a search index entry does, the blob not being found should trigger a 404._
## Data - Potential impact
**Size**
S
**Value**
S
**Effort**
S
### Exit Criteria met
- [x] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
1.0
|
Separate deleting blobs process from deleting from index process - ## User want
As a _technical user_
I want _to properly understand error states_
So that _I am able to debug where edge cases are failing_
## Acceptance Criteria
- [ ] New workflow for deletes:
- Receive message to delete document
- Look up in index
- Delete blob
- If successful, dispatch message to delete in index
- Receive message to delete index
- Delete index
- Mark as done
### Customer acceptance criteria
- [ ] _If a blob doesn't exist but a search index entry does, the blob not being found should trigger a 404._
## Data - Potential impact
**Size**
S
**Value**
S
**Effort**
S
### Exit Criteria met
- [x] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
process
|
separate deleting blobs process from deleting from index process user want as a technical user i want to properly understand error states so that i am able to debug where edge cases are failing acceptance criteria new workflow for deletes receive message to delete document look up in index delete blob if successful dispatch message to delete in index receive message to delete index delete index mark as done customer acceptance criteria if a blob doesn t exist but a search index entry does the blob not being found should trigger a data potential impact size s value s effort s exit criteria met backlog discovery duxd development quality assurance release and validate
| 1
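The workflow in the record above (delete the blob first, then dispatch a second message to delete the index entry, and raise a 404 when the index references a missing blob) can be sketched as follows. All names and the in-memory dict/list stand-ins for the blob store, search index, and message queue are assumptions for illustration, not the MHRA/products implementation:

```python
class NotFoundError(Exception):
    """Raised when a document referenced by the index has no backing blob (404 case)."""

def handle_delete_document(doc_id, index, blob_store, queue):
    """Step 1: delete the blob; on success, dispatch a message to delete the index entry."""
    if index.get(doc_id) is None:
        return  # nothing to delete
    if doc_id not in blob_store:
        # Index entry exists but the blob does not: surface as a 404-style error.
        raise NotFoundError(doc_id)
    del blob_store[doc_id]
    queue.append(("delete_index", doc_id))  # dispatch follow-up message

def handle_delete_index(doc_id, index):
    """Step 2: delete the index entry and mark the document as done."""
    index.pop(doc_id, None)
```

Splitting the two deletes across separate messages is what makes the error states debuggable: a failure between the steps leaves an index entry without a blob, which the 404 path then makes visible instead of silently swallowing.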
|
638,550
| 20,730,364,978
|
IssuesEvent
|
2022-03-14 08:52:13
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Dropping of unknown configuration keys can lead to inconsistent config state
|
type/bug topic/config priority/quality-of-life
|
### Describe the bug
When an unknown key is encountered in the configuration, it is simply dropped instead of going through the config migrations. This can lead to an inconsistent configuration that is not recoverable without manually editing it.
### Steps to reproduce
Steps to reproduce the behavior:
1. `pip install aiida-core==1.4.1`
2. Set up a new profile. The config will contain
```json
"CONFIG_VERSION": {
"CURRENT": 4,
"OLDEST_COMPATIBLE": 3
}
```
and something like
```json
"tmpfoo": {
"PROFILE_UUID": "d925405ed3f14935b8fc04dba127ba18",
"AIIDADB_ENGINE": "postgresql_psycopg2",
"AIIDADB_BACKEND": "django",
"AIIDADB_NAME": "tmpfoo",
"AIIDADB_PORT": 5432,
"AIIDADB_HOST": "localhost",
"AIIDADB_USER": "tmpfoo",
"AIIDADB_PASS": "1234",
"broker_protocol": "amqp",
"broker_username": "guest",
"broker_password": "guest",
"broker_host": "127.0.0.1",
"broker_port": 5672,
"broker_virtual_host": "",
"AIIDADB_REPOSITORY_URI": "file:///home/a-dogres/.aiida/repository/tmpfoo",
"options": {},
"default_user_email": ""
}
```
3. `pip install aiida-core==1.3.1`
4. Run a command, e.g. plain `verdi`. The config will _still_ contain
```json
"CONFIG_VERSION": {
"CURRENT": 4,
"OLDEST_COMPATIBLE": 3
}
```
but now with
```json
"tmpfoo": {
"PROFILE_UUID": "d925405ed3f14935b8fc04dba127ba18",
"AIIDADB_ENGINE": "postgresql_psycopg2",
"AIIDADB_BACKEND": "django",
"AIIDADB_NAME": "tmpfoo",
"AIIDADB_PORT": 5432,
"AIIDADB_HOST": "localhost",
"AIIDADB_USER": "tmpfoo",
"AIIDADB_PASS": "1234",
"AIIDADB_REPOSITORY_URI": "file:///home/a-dogres/.aiida/repository/tmpfoo",
"options": {},
"default_user_email": ""
}
```
5. `pip install aiida-core==1.4.1`
6. Try running something
```python
In [1]: from aiida.engine import workfunction
In [2]: @workfunction
...: def foo():
...: pass
...:
In [3]: foo()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-c19b6d9633cf> in <module>
----> 1 foo()
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/engine/processes/functions.py in decorated_function(*args, **kwargs)
179 def decorated_function(*args, **kwargs):
180 """This wrapper function is the actual function that is called."""
--> 181 result, _ = run_get_node(*args, **kwargs)
182 return result
183
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/engine/processes/functions.py in run_get_node(*args, **kwargs)
118 """
119 manager = get_manager()
--> 120 runner = manager.create_runner(with_persistence=False)
121 inputs = process_class.create_inputs(*args, **kwargs)
122
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/manager.py in create_runner(self, with_persistence, **kwargs)
283 if 'communicator' not in settings:
284 # Only call get_communicator if we have to as it will lazily create
--> 285 settings['communicator'] = self.get_communicator()
286
287 if with_persistence and 'persister' not in settings:
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/manager.py in get_communicator(self)
161 """
162 if self._communicator is None:
--> 163 self._communicator = self.create_communicator()
164
165 return self._communicator
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/manager.py in create_communicator(self, task_prefetch_count, with_orm)
183 task_prefetch_count = self.get_config().get_option('daemon.worker_process_slots', profile.name)
184
--> 185 url = profile.get_rmq_url()
186 prefix = profile.rmq_prefix
187
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/configuration/profile.py in get_rmq_url(self)
342 from aiida.manage.external.rmq import get_rmq_url
343 return get_rmq_url(
--> 344 protocol=self.broker_protocol,
345 username=self.broker_username,
346 password=self.broker_password,
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/configuration/profile.py in broker_protocol(self)
192 @property
193 def broker_protocol(self):
--> 194 return self._attributes[self.KEY_BROKER_PROTOCOL]
195
196 @broker_protocol.setter
KeyError: 'broker_protocol'
```
### Expected behavior
The unknown keys should either remain in place, or the `CURRENT_VERSION` of the config needs to be changed to match its content.
### Some thoughts
The _original_ intent of the `CURRENT_VERSION` and `OLDEST_COMPATIBLE` in the config is that an older version of AiiDA can happily run on a newer config version, as long as all the keys _it_ expects are still there and unchanged. As such, the config doesn't need migrating, and once we go back to the newer AiiDA version no migration is necessary.
Since there were quite some changes to all that since way-back-when, I'm not sure this is still the case. Thinking of it, adding a _new_ profile while on the old AiiDA version (or doing any other kind of change) is also bound to confuse the newer AiiDA version. Maybe then the "compatible versions" concepts needs to be dropped, but we should always make sure to go through migrations (in both directions)?
|
1.0
|
Dropping of unknown configuration keys can lead to inconsistent config state - ### Describe the bug
When an unknown key is encountered in the configuration, it is simply dropped instead of going through the config migrations. This can lead to an inconsistent configuration that is not recoverable without manually editing it.
### Steps to reproduce
Steps to reproduce the behavior:
1. `pip install aiida-core==1.4.1`
2. Set up a new profile. The config will contain
```json
"CONFIG_VERSION": {
"CURRENT": 4,
"OLDEST_COMPATIBLE": 3
}
```
and something like
```json
"tmpfoo": {
"PROFILE_UUID": "d925405ed3f14935b8fc04dba127ba18",
"AIIDADB_ENGINE": "postgresql_psycopg2",
"AIIDADB_BACKEND": "django",
"AIIDADB_NAME": "tmpfoo",
"AIIDADB_PORT": 5432,
"AIIDADB_HOST": "localhost",
"AIIDADB_USER": "tmpfoo",
"AIIDADB_PASS": "1234",
"broker_protocol": "amqp",
"broker_username": "guest",
"broker_password": "guest",
"broker_host": "127.0.0.1",
"broker_port": 5672,
"broker_virtual_host": "",
"AIIDADB_REPOSITORY_URI": "file:///home/a-dogres/.aiida/repository/tmpfoo",
"options": {},
"default_user_email": ""
}
```
3. `pip install aiida-core==1.3.1`
4. Run a command, e.g. plain `verdi`. The config will _still_ contain
```json
"CONFIG_VERSION": {
"CURRENT": 4,
"OLDEST_COMPATIBLE": 3
}
```
but now with
```json
"tmpfoo": {
"PROFILE_UUID": "d925405ed3f14935b8fc04dba127ba18",
"AIIDADB_ENGINE": "postgresql_psycopg2",
"AIIDADB_BACKEND": "django",
"AIIDADB_NAME": "tmpfoo",
"AIIDADB_PORT": 5432,
"AIIDADB_HOST": "localhost",
"AIIDADB_USER": "tmpfoo",
"AIIDADB_PASS": "1234",
"AIIDADB_REPOSITORY_URI": "file:///home/a-dogres/.aiida/repository/tmpfoo",
"options": {},
"default_user_email": ""
}
```
5. `pip install aiida-core==1.4.1`
6. Try running something
```python
In [1]: from aiida.engine import workfunction
In [2]: @workfunction
...: def foo():
...: pass
...:
In [3]: foo()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-c19b6d9633cf> in <module>
----> 1 foo()
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/engine/processes/functions.py in decorated_function(*args, **kwargs)
179 def decorated_function(*args, **kwargs):
180 """This wrapper function is the actual function that is called."""
--> 181 result, _ = run_get_node(*args, **kwargs)
182 return result
183
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/engine/processes/functions.py in run_get_node(*args, **kwargs)
118 """
119 manager = get_manager()
--> 120 runner = manager.create_runner(with_persistence=False)
121 inputs = process_class.create_inputs(*args, **kwargs)
122
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/manager.py in create_runner(self, with_persistence, **kwargs)
283 if 'communicator' not in settings:
284 # Only call get_communicator if we have to as it will lazily create
--> 285 settings['communicator'] = self.get_communicator()
286
287 if with_persistence and 'persister' not in settings:
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/manager.py in get_communicator(self)
161 """
162 if self._communicator is None:
--> 163 self._communicator = self.create_communicator()
164
165 return self._communicator
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/manager.py in create_communicator(self, task_prefetch_count, with_orm)
183 task_prefetch_count = self.get_config().get_option('daemon.worker_process_slots', profile.name)
184
--> 185 url = profile.get_rmq_url()
186 prefix = profile.rmq_prefix
187
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/configuration/profile.py in get_rmq_url(self)
342 from aiida.manage.external.rmq import get_rmq_url
343 return get_rmq_url(
--> 344 protocol=self.broker_protocol,
345 username=self.broker_username,
346 password=self.broker_password,
~/.virtualenvs/tmp-59f8558ca0453c0/lib/python3.7/site-packages/aiida/manage/configuration/profile.py in broker_protocol(self)
192 @property
193 def broker_protocol(self):
--> 194 return self._attributes[self.KEY_BROKER_PROTOCOL]
195
196 @broker_protocol.setter
KeyError: 'broker_protocol'
```
### Expected behavior
The unknown keys should either remain in place, or the `CURRENT_VERSION` of the config needs to be changed to match its content.
### Some thoughts
The _original_ intent of the `CURRENT_VERSION` and `OLDEST_COMPATIBLE` in the config is that an older version of AiiDA can happily run on a newer config version, as long as all the keys _it_ expects are still there and unchanged. As such, the config doesn't need migrating, and once we go back to the newer AiiDA version no migration is necessary.
Since there were quite some changes to all that since way-back-when, I'm not sure this is still the case. Thinking of it, adding a _new_ profile while on the old AiiDA version (or doing any other kind of change) is also bound to confuse the newer AiiDA version. Maybe then the "compatible versions" concepts needs to be dropped, but we should always make sure to go through migrations (in both directions)?
|
non_process
|
dropping of unknown configuration keys can lead to inconsistent config state describe the bug when an unknown key is encountered in the configuration it is simply dropped instead of going through the config migrations this can lead to an inconsistent configuration that is not recoverable without manually editing it steps to reproduce steps to reproduce the behavior pip install aiida core set up a new profile the config will contain json config version current oldest compatible and something like json tmpfoo profile uuid aiidadb engine postgresql aiidadb backend django aiidadb name tmpfoo aiidadb port aiidadb host localhost aiidadb user tmpfoo aiidadb pass broker protocol amqp broker username guest broker password guest broker host broker port broker virtual host aiidadb repository uri file home a dogres aiida repository tmpfoo options default user email pip install aiida core run a command e g plain verdi the config will still contain json config version current oldest compatible but now with json tmpfoo profile uuid aiidadb engine postgresql aiidadb backend django aiidadb name tmpfoo aiidadb port aiidadb host localhost aiidadb user tmpfoo aiidadb pass aiidadb repository uri file home a dogres aiida repository tmpfoo options default user email pip install aiida core try running something python in from aiida engine import workfunction in workfunction def foo pass in foo keyerror traceback most recent call last in foo virtualenvs tmp lib site packages aiida engine processes functions py in decorated function args kwargs def decorated function args kwargs this wrapper function is the actual function that is called result run get node args kwargs return result virtualenvs tmp lib site packages aiida engine processes functions py in run get node args kwargs manager get manager runner manager create runner with persistence false inputs process class create inputs args kwargs virtualenvs tmp lib site packages aiida manage manager py in create runner self with persistence kwargs if communicator not in settings only call get communicator if we have to as it will lazily create settings self get communicator if with persistence and persister not in settings virtualenvs tmp lib site packages aiida manage manager py in get communicator self if self communicator is none self communicator self create communicator return self communicator virtualenvs tmp lib site packages aiida manage manager py in create communicator self task prefetch count with orm task prefetch count self get config get option daemon worker process slots profile name url profile get rmq url prefix profile rmq prefix virtualenvs tmp lib site packages aiida manage configuration profile py in get rmq url self from aiida manage external rmq import get rmq url return get rmq url protocol self broker protocol username self broker username password self broker password virtualenvs tmp lib site packages aiida manage configuration profile py in broker protocol self property def broker protocol self return self attributes broker protocol setter keyerror broker protocol expected behavior the unknown keys should either remain in place or the current version of the config needs to be changed to match its content some thoughts the original intent of the current version and oldest compatible in the config is that an older version of aiida can happily run on a newer config version as long as all the keys it expects are still there and unchanged as such the config doesn t need migrating and once we go back to the newer aiida version no migration is necessary since there were quite some changes to all that since way back when i m not sure this is still the case thinking of it adding a new profile while on the old aiida version or doing any other kind of change is also bound to confuse the newer aiida version maybe then the compatible versions concepts needs to be dropped but we should always make sure to go through migrations in both directions
| 0
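The record above describes the `CURRENT` / `OLDEST_COMPATIBLE` scheme: an older AiiDA may run on a newer config as long as the config's `OLDEST_COMPATIBLE` does not exceed the code's own version. A minimal sketch of that check — the function and exception names are illustrative, not the actual aiida-core logic:

```python
class IncompatibleConfigError(Exception):
    """Raised when the stored config is too new for the running code."""

def check_compatibility(config: dict, code_version: int) -> None:
    """Refuse to load a config whose OLDEST_COMPATIBLE exceeds the code version.

    This is the guard the CURRENT/OLDEST_COMPATIBLE pair implies: newer
    configs remain usable by older code only while backward compatible.
    """
    versions = config["CONFIG_VERSION"]
    if versions["OLDEST_COMPATIBLE"] > code_version:
        raise IncompatibleConfigError(
            f"config requires at least version {versions['OLDEST_COMPATIBLE']}, "
            f"code supports {code_version}"
        )
```

In the reported bug this guard was insufficient on its own: the 1.3.1 code passed the check (`OLDEST_COMPATIBLE` was 3) but then silently dropped the unknown `broker_*` keys, leaving a version-4 config missing version-4 fields.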
|
1,022
| 3,481,013,651
|
IssuesEvent
|
2015-12-29 13:04:57
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
closed
|
periodically check disconnected hosts
|
component:data processing enhancement priority: normal
|
disconnected hosts should be regularly checked, if they are available again, they should be connected again
|
1.0
|
periodically check disconnected hosts - disconnected hosts should be regularly checked, if they are available again, they should be connected again
|
process
|
periodically check disconnected hosts disconnected hosts should be regularly checked if they are available again they should be connected again
| 1
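The record above asks for a periodic retry of disconnected hosts. A minimal sketch of such a loop, assuming a caller-supplied `try_connect` probe and a simple dict of host sets — these names are illustrative, not the kerub implementation:

```python
import time

def reconnect_loop(hosts, try_connect, interval_seconds=60, max_rounds=None):
    """Periodically retry hosts marked disconnected; move responsive ones
    back to the connected set. max_rounds bounds the loop for testing."""
    rounds = 0
    while hosts["disconnected"] and (max_rounds is None or rounds < max_rounds):
        for host in list(hosts["disconnected"]):  # copy: we mutate the set
            if try_connect(host):
                hosts["disconnected"].remove(host)
                hosts["connected"].add(host)
        rounds += 1
        if hosts["disconnected"] and (max_rounds is None or rounds < max_rounds):
            time.sleep(interval_seconds)  # wait before the next sweep
```

In a real service this would run on a scheduler thread rather than blocking with `time.sleep`, but the retry-until-reachable behaviour is the same.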
|
118,522
| 15,304,702,773
|
IssuesEvent
|
2021-02-24 17:12:40
|
codeforboston/safe-water
|
https://api.github.com/repos/codeforboston/safe-water
|
closed
|
Create v1 of the operation dashboard
|
Design
|
Create a first version of the dashboard. Should:
- then be embedded on https://www.crwa.org/water-quality-data.html
- use #116 as map layer
- use #114 (or #110) as data layer
- follow the design proposed in #113 (read also the issue for details)
- be inspired by current dashboards (http://arcg.is/1DGqG5 and http://arcg.is/0v1z5S)
|
1.0
|
Create v1 of the operation dashboard - Create a first version of the dashboard. Should:
- then be embedded on https://www.crwa.org/water-quality-data.html
- use #116 as map layer
- use #114 (or #110) as data layer
- follow the design proposed in #113 (read also the issue for details)
- be inspired by current dashboards (http://arcg.is/1DGqG5 and http://arcg.is/0v1z5S)
|
non_process
|
create of the operation dashboard create a first version of the dashboard should then be embedded on use as map layer use or as data layer follow the design proposed in read also the issue for details be inspired by current dashboards and
| 0
|
529
| 2,999,832,849
|
IssuesEvent
|
2015-07-23 21:06:13
|
zhengj2007/BFO-test
|
https://api.github.com/repos/zhengj2007/BFO-test
|
opened
|
Have bfo-owl-devel and bfo-devel teams review reference document, flag consensus/contentious
|
imported Priority-Critical Type-BFO2-Process
|
_From [alanruttenberg@gmail.com](https://code.google.com/u/alanruttenberg@gmail.com/) on May 08, 2012 00:45:17_
We need to have people know what the current reference says and to critically review it. Then whoever has an opinion needs to say which parts they are comfortable with, which uncomfortable, and which they have questions about.
This will be used for release notes for the reference document, as well as to plan strated in prerelease bfo
_Original issue: http://code.google.com/p/bfo/issues/detail?id=25_
|
1.0
|
Have bfo-owl-devel and bfo-devel teams review reference document, flag consensus/contentious - _From [alanruttenberg@gmail.com](https://code.google.com/u/alanruttenberg@gmail.com/) on May 08, 2012 00:45:17_
We need to have people know what the current reference says and to critically review it. Then whoever has an opinion needs to say which parts they are comfortable with, which uncomfortable, and which they have questions about.
This will be used for release notes for the reference document, as well as to plan strated in prerelease bfo
_Original issue: http://code.google.com/p/bfo/issues/detail?id=25_
|
process
|
have bfo owl devel and bfo devel teams review reference document flag consensus contentious from on may we need to have people know what the current reference says and to critically review it then whoever has an opinion needs to say which parts they are comfortable with which uncomfortable and which they have questions about this will be used for release notes for the reference document as well as to plan strated in prerelease bfo original issue
| 1
|
12,244
| 14,744,076,653
|
IssuesEvent
|
2021-01-07 14:48:50
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Towne - problems with SA - not urgent, just really annoying
|
anc-process anp-1.5 anp-emergency release ant-bug
|
In GitLab by @kdjstudios on Dec 30, 2019, 13:11
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/10101052
**Server:** External
**Client/Site:** Towne
**Account:** NA
**Issue:**
So, today is billing day and when I go into ‘draft invoice’ two things happen:
I go into the invoice and make an adjustment and – when I click the green check box – I get a window that asks ‘By making this adjustment you may also want to check to ensure if a late fee is necessary. Please click Okay to continue or click cancel to revert your change.’
What the heck?????? This appears even when there is no late fee on the invoice!!
I then click ‘ok’ and SA goes into eternally trying to make it happen. I can stare at the circle going around and around….it never stops.
I can make it stop by clicking on the ‘back’ arrow and it takes me to the prior screen.
I then have to go back into the draft invoice to see if the adjustment applied.
|
1.0
|
Towne - problems with SA - not urgent, just really annoying - In GitLab by @kdjstudios on Dec 30, 2019, 13:11
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/10101052
**Server:** External
**Client/Site:** Towne
**Account:** NA
**Issue:**
So, today is billing day and when I go into ‘draft invoice’ two things happen:
I go into the invoice and make an adjustment and – when I click the green check box – I get a window that asks ‘By making this adjustment you may also want to check to ensure if a late fee is necessary. Please click Okay to continue or click cancel to revert your change.’
What the heck?????? This appears even when there is no late fee on the invoice!!
I then click ‘ok’ and SA goes into eternally trying to make it happen. I can stare at the circle going around and around….it never stops.
I can make it stop by clicking on the ‘back’ arrow and it takes me to the prior screen.
I then have to go back into the draft invoice to see if the adjustment applied.
|
process
|
towne problems with sa not urgent just really annoying in gitlab by kdjstudios on dec submitted by deb crown helpdesk server external client site towne account na issue so today is billing day and when i go into ‘draft invoice’ two things happen i go into the invoice and make an adjustment and – when i click the green check box – i get a window that asks ‘by making this adjustment you may also want to check to ensure if a late fee is necessary please click okay to continue or click cancel to revert your change ’ what the heck this appears even when there is no late fee on the invoice i then click ‘ok’ and sa goes into eternally trying to make it happen i can stare at the circle going around and around… it never stops i can make it stop by clicking on the ‘back’ arrow and it takes me to the prior screen i then have to go back into the draft invoice to see if the adjustment applied
| 1
|
20,872
| 27,659,217,233
|
IssuesEvent
|
2023-03-12 10:28:52
|
calaldees/KaraKara
|
https://api.github.com/repos/calaldees/KaraKara
|
opened
|
Songs with overlapping lyrics are rendered badly
|
feature processmedia2
|
Some songs (at least, one of the songs I added >.>;; ) have lines which overlap by a few milliseconds, which renders _very_ ugly when we try to show the current and next lines together (it shows 4 lines which jump around the screen)
We should at least have a warning so that these can be detected and fixed manually
|
1.0
|
Songs with overlapping lyrics are rendered badly - Some songs (at least, one of the songs I added >.>;; ) have lines which overlap by a few milliseconds, which renders _very_ ugly when we try to show the current and next lines together (it shows 4 lines which jump around the screen)
We should at least have a warning so that these can be detected and fixed manually
|
process
|
songs with overlapping lyrics are rendered badly some songs at least one of the songs i added have lines which overlap by a few milliseconds which renders very ugly when we try to show the current and next lines together it shows lines which jump around the screen we should at least have a warning so that these can be detected and fixed manually
| 1
|
146,563
| 5,624,085,207
|
IssuesEvent
|
2017-04-04 16:14:59
|
minio/minio
|
https://api.github.com/repos/minio/minio
|
closed
|
browser: Fix missing space between colon and number in Minio Browser
|
priority: low
|
On Minio Browser (play.minio.io) on the top bar were the "**Used:**88.88GB" title is, there is no space between colon and the number. There should be one. I provide screenshot for better visualization of the issue.

|
1.0
|
browser: Fix missing space between colon and number in Minio Browser - On Minio Browser (play.minio.io) on the top bar were the "**Used:**88.88GB" title is, there is no space between colon and the number. There should be one. I provide screenshot for better visualization of the issue.

|
non_process
|
browser fix missing space between colon and number in minio browser on minio browser play minio io on the top bar were the used title is there is no space between colon and the number there should be one i provide screenshot for better visualization of the issue
| 0
|
373,941
| 11,052,987,297
|
IssuesEvent
|
2019-12-10 10:27:24
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
drudgereport.com - see bug description
|
browser-fenix engine-gecko priority-important type-trackingprotection
|
<!-- @browser: Firefox Mobile 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://drudgereport.com/
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: images do not show up. also I can't seem to find a simple toggle switch that allows me to display ads one block whatever is preventing the images from showing on this website
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@bga123`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
drudgereport.com - see bug description - <!-- @browser: Firefox Mobile 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://drudgereport.com/
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: images do not show up. also I can't seem to find a simple toggle switch that allows me to display ads one block whatever is preventing the images from showing on this website
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@bga123`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
drudgereport com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description images do not show up also i can t seem to find a simple toggle switch that allows me to display ads one block whatever is preventing the images from showing on this website steps to reproduce browser configuration none submitted in the name of from with ❤️
| 0
|
154,886
| 5,938,707,466
|
IssuesEvent
|
2017-05-25 01:21:49
|
DCLP/dclpxsltbox
|
https://api.github.com/repos/DCLP/dclpxsltbox
|
closed
|
date repetition in some bibliographic HTML
|
in work priority: low (defer) review tweak XSLT
|
refactoring of one aspect of #171, which is now closed.
In some bibliographic citations where the issue number is the same as the date year, the HTML output is redundant from the human point of you. CRAI is one publication series that will produce this sort of problem, for example in the HTML output for TM we read:
> 78209. Jean-Luc FOURNET and Jean GASCOU, "Un lot d'archives inédit de Lycopolis (Égypte) à l'Académie des Inscriptions et Belles-Lettres.," , 2008 (2008), pp. 1041-1074. CRAI.
We could undertake to suppress the display of the date -- e.g., "(2008)" when its value exactly matches the issue number, but view this as a very minor matter of formatting. No one should be confused by the current behavior. Recommend defer as lowest possible priority.
|
1.0
|
date repetition in some bibliographic HTML - refactoring of one aspect of #171, which is now closed.
In some bibliographic citations where the issue number is the same as the date year, the HTML output is redundant from the human point of you. CRAI is one publication series that will produce this sort of problem, for example in the HTML output for TM we read:
> 78209. Jean-Luc FOURNET and Jean GASCOU, "Un lot d'archives inédit de Lycopolis (Égypte) à l'Académie des Inscriptions et Belles-Lettres.," , 2008 (2008), pp. 1041-1074. CRAI.
We could undertake to suppress the display of the date -- e.g., "(2008)" when its value exactly matches the issue number, but view this as a very minor matter of formatting. No one should be confused by the current behavior. Recommend defer as lowest possible priority.
|
non_process
|
date repetition in some bibliographic html refactoring of one aspect of which is now closed in some bibliographic citations where the issue number is the same as the date year the html output is redundant from the human point of you crai is one publication series that will produce this sort of problem for example in the html output for tm we read jean luc fournet and jean gascou un lot d archives inédit de lycopolis égypte à l académie des inscriptions et belles lettres pp crai we could undertake to suppress the display of the date e g when its value exactly matches the issue number but view this as a very minor matter of formatting no one should be confused by the current behavior recommend defer as lowest possible priority
| 0
|
18,190
| 24,238,068,744
|
IssuesEvent
|
2022-09-27 02:34:27
|
gobuffalo/buffalo
|
https://api.github.com/repos/gobuffalo/buffalo
|
closed
|
make the current version v1 and configure branch structure
|
process
|
Make the current version `v1` and configure branch structure for the future development
* [x] check, fix, and merge recent PRs/issues if any
* [x] release `v1.0.0` with the current version
* [x] make a new branch to maintain `v1`
* [x] configure `main` as the active development branch for the next major update
* [x] deprecate `development` branch
* [x] setup branch protection and other automation configurations
---
> > This is the reason I want to mark the current version of the buffalo core library (https://github.com/gobuffalo/buffalo) as v1 so the users can use the current version on their production with the belief of the version will last be stable even though the buffalo grow further. When the developers use v0, they could be more careful and will have concerns about possible breaking changes. Meanwhile, I believe the current version is already ready for production, and making it v1 will help both the users to use it in production and us to develop the next version with no strong concern about breaking changes.
>
> Yes. I think we should do that.
OK, then I will work on it (the core library versioning) this week. (actually, I am more passionate about the library part :-)
_Originally posted by @sio4 in https://github.com/gobuffalo/cli/issues/207#issuecomment-1223494259_
|
1.0
|
make the current version v1 and configure branch structure - Make the current version `v1` and configure branch structure for the future development
* [x] check, fix, and merge recent PRs/issues if any
* [x] release `v1.0.0` with the current version
* [x] make a new branch to maintain `v1`
* [x] configure `main` as the active development branch for the next major update
* [x] deprecate `development` branch
* [x] setup branch protection and other automation configurations
---
> > This is the reason I want to mark the current version of the buffalo core library (https://github.com/gobuffalo/buffalo) as v1 so the users can use the current version on their production with the belief of the version will last be stable even though the buffalo grow further. When the developers use v0, they could be more careful and will have concerns about possible breaking changes. Meanwhile, I believe the current version is already ready for production, and making it v1 will help both the users to use it in production and us to develop the next version with no strong concern about breaking changes.
>
> Yes. I think we should do that.
OK, then I will work on it (the core library versioning) this week. (actually, I am more passionate about the library part :-)
_Originally posted by @sio4 in https://github.com/gobuffalo/cli/issues/207#issuecomment-1223494259_
|
process
|
make the current version and configure branch structure make the current version and configure branch structure for the future development check fix and merge recent prs issues if any release with the current version make a new branch to maintain configure main as the active development branch for the next major update deprecate development branch setup branch protection and other automation configurations this is the reason i want to mark the current version of the buffalo core library as so the users can use the current version on their production with the belief of the version will last be stable even though the buffalo grow further when the developers use they could be more careful and will have concerns about possible breaking changes meanwhile i believe the current version is already ready for production and making it will help both the users to use it in production and us to develop the next version with no strong concern about breaking changes yes i think we should do that ok then i will work on it the core library versioning this week actually i am more passionate about the library part originally posted by in
| 1
|
120,845
| 10,136,795,928
|
IssuesEvent
|
2019-08-02 13:51:04
|
owncloud/client
|
https://api.github.com/repos/owncloud/client
|
closed
|
When a new folder is created it always syncs twice, even on success
|
ReadyToTest bug p4-low
|
Create a new sync folder. Wait for the initial sync to finish. A second sync will be triggered.
The log shows:
```
Compare etag with previous etag: last: "\"foo\"", received: "foo" -> CHANGED
```
That means there's still a place where etag parsing isn't done consistently. This appears now because I fixed a different (more rare) case where etag parsing wasn't done.
|
1.0
|
When a new folder is created it always syncs twice, even on success - Create a new sync folder. Wait for the initial sync to finish. A second sync will be triggered.
The log shows:
```
Compare etag with previous etag: last: "\"foo\"", received: "foo" -> CHANGED
```
That means there's still a place where etag parsing isn't done consistently. This appears now because I fixed a different (more rare) case where etag parsing wasn't done.
|
non_process
|
when a new folder is created it always syncs twice even on success create a new sync folder wait for the initial sync to finish a second sync will be triggered the log shows compare etag with previous etag last foo received foo changed that means there s still a place where etag parsing isn t done consistently this appears now because i fixed a different more rare case where etag parsing wasn t done
| 0
|
17,446
| 23,268,009,900
|
IssuesEvent
|
2022-08-04 19:27:01
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in google-cloud-ai_platform/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-certificate_manager-v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-certificate_manager/.repo-metadata.json
* api_shortname 'dns' invalid in google-cloud-dns/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-domains-v1/.repo-metadata.json
* api_shortname field missing from google-cloud-location/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-notebooks-v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-optimization-v1/.repo-metadata.json
* api_shortname field missing from google-cloud-optimization-v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-optimization/.repo-metadata.json
* api_shortname field missing from google-cloud-resource_manager/.repo-metadata.json
* api_shortname field missing from grafeas-v1/.repo-metadata.json
* api_shortname field missing from grafeas/.repo-metadata.json
* must have required property 'library_type' in stackdriver/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in google-cloud-ai_platform/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-certificate_manager-v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-certificate_manager/.repo-metadata.json
* api_shortname 'dns' invalid in google-cloud-dns/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-domains-v1/.repo-metadata.json
* api_shortname field missing from google-cloud-location/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-notebooks-v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-optimization-v1/.repo-metadata.json
* api_shortname field missing from google-cloud-optimization-v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in google-cloud-optimization/.repo-metadata.json
* api_shortname field missing from google-cloud-resource_manager/.repo-metadata.json
* api_shortname field missing from grafeas-v1/.repo-metadata.json
* api_shortname field missing from grafeas/.repo-metadata.json
* must have required property 'library_type' in stackdriver/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 release level must be equal to one of the allowed values in google cloud ai platform repo metadata json release level must be equal to one of the allowed values in google cloud certificate manager repo metadata json release level must be equal to one of the allowed values in google cloud certificate manager repo metadata json api shortname dns invalid in google cloud dns repo metadata json release level must be equal to one of the allowed values in google cloud domains repo metadata json api shortname field missing from google cloud location repo metadata json release level must be equal to one of the allowed values in google cloud notebooks repo metadata json release level must be equal to one of the allowed values in google cloud optimization repo metadata json api shortname field missing from google cloud optimization repo metadata json release level must be equal to one of the allowed values in google cloud optimization repo metadata json api shortname field missing from google cloud resource manager repo metadata json api shortname field missing from grafeas repo metadata json api shortname field missing from grafeas repo metadata json must have required property library type in stackdriver repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
11,710
| 14,546,352,274
|
IssuesEvent
|
2020-12-15 21:05:48
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Rules for incrementing $(Rev:r) could be described more clearly
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
For example, consider this paragraph:
> Use $(Rev:r) to ensure that every completed build has a unique name. When a build is completed, if nothing else in the build number has changed, the Rev integer value is incremented by one.
, which implies that the value is incremented _after_ a build is completed, which is not accurate because the incremented value is visible in the pipeline run as soon as the pipeline run is started, so the increment must happen _before_ the build started.
Moreover, this line ties `$(Rev:r)` to a date, as do many examples on the page:
> 2 (The third run on this day will be 3, and so on.)
, while the algorithm behind appears to just count the number of times the part of the build number without `$(Rev:r)` was used. For example, if I use `ABC-$(Rev:r)`, then my build number would be `ABC-1`, `ABC-2`, etc. In more practical terms, if I format build number towards the upcoming version `2.1.0` as `2-1-0-$(Rev:r)`, then I get automatically incremented build numbers towards that version, which is quite useful in naming artifacts.
What is great about it, which docs don't mention, is that after I release `2.1.0`, I can change build number to `2-2-0-$(Rev:r)` on _the same branch_ and it will start counting from `1` again. Notice, that the version in this case goes into the build number, not like the following example from the page shows.
I also noticed that if build number does not include `$(Rev:r)`, it will still be incremented behind the scenes, which also would be worth to mention. For example, using `name: ABC` in the pipeline and running it `2` times will yield `ABC` in both cases. If after this I change it to `name: ABC-$(Rev:r)`, it will yield `ABC-3`, which indicates that `ABC` was counted the first two times, even though it was not in the build number string.
Lastly, the example at the bottom of the page seems to be using the build number to form a semantic version string:
> MyRunNumber: '1.0.0-CI-$(Build.BuildNumber)'
If that's the case, the syntax above makes the pre-release version name to include everything after the first dash, which means that the pre-release does not have a stable name, like `beta` or `CI`. Again, if this is intended to resemble a semantic version, a plus sign would work better to indicate build metadata, like this - `1.0.0-CI+$(Build.BuildNumber)`
https://semver.org/#spec-item-9
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Rules for incrementing $(Rev:r) could be described more clearly - For example, consider this paragraph:
> Use $(Rev:r) to ensure that every completed build has a unique name. When a build is completed, if nothing else in the build number has changed, the Rev integer value is incremented by one.
, which implies that the value is incremented _after_ a build is completed, which is not accurate because the incremented value is visible in the pipeline run as soon as the pipeline run is started, so the increment must happen _before_ the build started.
Moreover, this line ties `$(Rev:r)` to a date, as do many examples on the page:
> 2 (The third run on this day will be 3, and so on.)
, while the algorithm behind appears to just count the number of times the part of the build number without `$(Rev:r)` was used. For example, if I use `ABC-$(Rev:r)`, then my build number would be `ABC-1`, `ABC-2`, etc. In more practical terms, if I format build number towards the upcoming version `2.1.0` as `2-1-0-$(Rev:r)`, then I get automatically incremented build numbers towards that version, which is quite useful in naming artifacts.
What is great about it, which docs don't mention, is that after I release `2.1.0`, I can change build number to `2-2-0-$(Rev:r)` on _the same branch_ and it will start counting from `1` again. Notice, that the version in this case goes into the build number, not like the following example from the page shows.
I also noticed that if build number does not include `$(Rev:r)`, it will still be incremented behind the scenes, which also would be worth to mention. For example, using `name: ABC` in the pipeline and running it `2` times will yield `ABC` in both cases. If after this I change it to `name: ABC-$(Rev:r)`, it will yield `ABC-3`, which indicates that `ABC` was counted the first two times, even though it was not in the build number string.
Lastly, the example at the bottom of the page seems to be using the build number to form a semantic version string:
> MyRunNumber: '1.0.0-CI-$(Build.BuildNumber)'
If that's the case, the syntax above makes the pre-release version name to include everything after the first dash, which means that the pre-release does not have a stable name, like `beta` or `CI`. Again, if this is intended to resemble a semantic version, a plus sign would work better to indicate build metadata, like this - `1.0.0-CI+$(Build.BuildNumber)`
https://semver.org/#spec-item-9
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
rules for incrementing rev r could be described more clearly for example consider this paragraph use rev r to ensure that every completed build has a unique name when a build is completed if nothing else in the build number has changed the rev integer value is incremented by one which implies that the value is incremented after a build is completed which is not accurate because the incremented value is visible in the pipeline run as soon as the pipeline run is started so the increment must happen before the build started moreover this line ties rev r to a date as do many examples on the page the third run on this day will be and so on while the algorithm behind appears to just count the number of times the part of the build number without rev r was used for example if i use abc rev r then my build number would be abc abc etc in more practical terms if i format build number towards the upcoming version as rev r then i get automatically incremented build numbers towards that version which is quite useful in naming artifacts what is great about it which docs don t mention is that after i release i can change build number to rev r on the same branch and it will start counting from again notice that the version in this case goes into the build number not like the following example from the page shows i also noticed that if build number does not include rev r it will still be incremented behind the scenes which also would be worth to mention for example using name abc in the pipeline and running it times will yield abc in both cases if after this i change it to name abc rev r it will yield abc which indicates that abc was counted the first two times even though it was not in the build number string lastly the example at the bottom of the page seems to be using the build number to form a semantic version string myrunnumber ci build buildnumber if that s the case the syntax above makes the pre release version name to include everything after the first dash which means that 
the pre release does not have a stable name like beta or ci again if this is intended to resemble a semantic version a plus sign would work better to indicate build metadata like this ci build buildnumber document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
13,750
| 16,502,892,037
|
IssuesEvent
|
2021-05-25 15:57:23
|
googleapis/google-auth-library-python
|
https://api.github.com/repos/googleapis/google-auth-library-python
|
closed
|
'testing/constraints-*.txt' not used
|
priority: p2 type: process
|
From #757.
In [`noxfile.py`](https://github.com/googleapis/google-auth-library-python/blob/a9234423cb2b69068fc0d30a5a0ee86a599ab8b7/noxfile.py#L87-L89), the constraints are not being used, which is why the one populated file, [`testing/constraints-3.6.txt](https://github.com/googleapis/google-auth-library-python/blob/master/testing/constraints-3.6.txt) is in an invalid state (multiple entries for `rsa`, maybe more issues).
|
1.0
|
'testing/constraints-*.txt' not used - From #757.
In [`noxfile.py`](https://github.com/googleapis/google-auth-library-python/blob/a9234423cb2b69068fc0d30a5a0ee86a599ab8b7/noxfile.py#L87-L89), the constraints are not being used, which is why the one populated file, [`testing/constraints-3.6.txt](https://github.com/googleapis/google-auth-library-python/blob/master/testing/constraints-3.6.txt) is in an invalid state (multiple entries for `rsa`, maybe more issues).
|
process
|
testing constraints txt not used from in the constraints are not being used which is why the one populated file is in an invalid state multiple entries for rsa maybe more issues
| 1
|
9,750
| 11,801,803,610
|
IssuesEvent
|
2020-03-18 20:12:54
|
digitalcreations/MaxTo
|
https://api.github.com/repos/digitalcreations/MaxTo
|
closed
|
Breaks Assetto Corsa Launcher
|
compatibility
|
**Describe the bug**
Interferes with Assetto Corsa Launcher, which tries to create a full screen window, but fails (gets resized), then repositions and the application crashes
**To Reproduce**
Steps to reproduce the behavior:
1. Install Assetto Corsa
2. Start the game
3. Sometimes a race needs to be started and exited
4. See error
**Expected behavior**
Game window allowed to enter full screen dimensions
**Screenshots**
did not make any yet
**System information:**
- Windows version: 10
- MaxTo version 2.0.1
**Additional context**
n/a
|
True
|
Breaks Assetto Corsa Launcher - **Describe the bug**
Interferes with Assetto Corsa Launcher, which tries to create a full screen window, but fails (gets resized), then repositions and the application crashes
**To Reproduce**
Steps to reproduce the behavior:
1. Install Assetto Corsa
2. Start the game
3. Sometimes a race needs to be started and exited
4. See error
**Expected behavior**
Game window allowed to enter full screen dimensions
**Screenshots**
did not make any yet
**System information:**
- Windows version: 10
- MaxTo version 2.0.1
**Additional context**
n/a
|
non_process
|
breaks assetto corsa launcher describe the bug interferes with assetto corsa launcher which tries to create a full screen window but fails gets resized then repositions and the application crashes to reproduce steps to reproduce the behavior install assetto corsa start the game sometimes a race needs to be started and exited see error expected behavior game window allowed to enter full screen dimensions screenshots did not make any yet system information windows version maxto version additional context n a
| 0
|
256,501
| 8,127,747,943
|
IssuesEvent
|
2018-08-17 09:12:11
|
aowen87/BAR
|
https://api.github.com/repos/aowen87/BAR
|
closed
|
XDMF reader confusing attributes with same dim as mesh as arrays
|
Bug Likelihood: 2 - Rare Priority: Normal Severity: 4 - Crash / Wrong Results
|
Hi all,
I have been working on implementing HDF5 with XDMF as the new I/O for the code I use. I believe I have come across a bug in the VisIt XDMF Plug-in, but I could be wrong.
Whenever a dimension of the data in the attribute matches the last dimension of the mesh in the grid it is on, VisIt views the attribute as an array. As an example, I have recreated the issue using explicit values given in XML, however this is the same issue as when I point to HDF5 datasets. In the below example (attached as well) the Attribute "U" is seen as an array in VisIt, since the last dimension of the mesh grid it is on, 9, is also a dimension of the Attribute. When this dimension matching does not occur however, everything works as expected, as in for the "V" attribute.
I believe this is an issue in the VisIt plug-in because I could not cause this issue in p*r*view. I have used VisIt 2.9 and VisIt 2.7 and the same issue was present. If anyone has encountered this issue before or has an idea how to fix it, it would be greatly appreciated. Below is the example XDMF file that causes the error and the file information from VisIt.
Troublesome XDMF File
<pre>
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
<Domain>
<Grid Name="ucell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="9 10"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
</Geometry>
<Attribute Name="U" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="8 9">
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
</DataItem>
</Attribute>
</Grid>
<Grid Name="vcell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="10 9"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
</Geometry>
<Attribute Name="V" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="9 8">
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
</DataItem>
</Attribute>
</Grid>
</Domain>
</Xdmf>
</pre>
VisIt File Information
Scalars:
Name = "vcell/V"
Mesh is = "vcell"
Centering = zone centered.
The extents are not set.
This variable does not contain enumerated values.
Arrays:
Name = "ucell/U"
Mesh is = "ucell"
Centering = zone centered.
The extents are not set.
Number of variables = 8
Components are: ucell/U-1, ucell/U-2, ucell/U-3, ucell/U-4, ucell/U-5, ucell/U-6, ucell/U-7, ucell/U-8
Expressions:
ucell/U-1 (scalar): array_decompose(ucell/U,0)
ucell/U-2 (scalar): array_decompose(ucell/U,1)
ucell/U-3 (scalar): array_decompose(ucell/U,2)
ucell/U-4 (scalar): array_decompose(ucell/U,3)
ucell/U-5 (scalar): array_decompose(ucell/U,4)
ucell/U-6 (scalar): array_decompose(ucell/U,5)
ucell/U-7 (scalar): array_decompose(ucell/U,6)
ucell/U-8 (scalar): array_decompose(ucell/U,7)
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2355
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: XDMF reader confusing attributes with same dim as mesh as arrays
Assigned to: Kathleen Biagas
Category:
Target version: 2.12.3
Author: Mark Miller
Start: 08/12/2015
Due date:
% Done: 100
Estimated time: 1.0
Created: 08/12/2015 02:08 pm
Updated: 06/13/2017 02:01 pm
Likelihood: 2 - Rare
Severity: 4 - Crash / Wrong Results
Found in version: 2.8.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Hi all,
I have been working on implementing HDF5 with XDMF as the new I/O for the code I use. I believe I have come across a bug in the VisIt XDMF Plug-in, but I could be wrong.
Whenever a dimension of the data in the attribute matches the last dimension of the mesh in the grid it is on, VisIt views the attribute as an array. As an example, I have recreated the issue using explicit values given in XML, however this is the same issue as when I point to HDF5 datasets. In the below example (attached as well) the Attribute "U" is seen as an array in VisIt, since the last dimension of the mesh grid it is on, 9, is also a dimension of the Attribute. When this dimension matching does not occur however, everything works as expected, as in for the "V" attribute.
I believe this is an issue in the VisIt plug-in because I could not cause this issue in p*r*view. I have used VisIt 2.9 and VisIt 2.7 and the same issue was present. If anyone has encountered this issue before or has an idea how to fix it, it would be greatly appreciated. Below is the example XDMF file that causes the error and the file information from VisIt.
Troublesome XDMF File
<pre>
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
<Domain>
<Grid Name="ucell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="9 10"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
</Geometry>
<Attribute Name="U" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="8 9">
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
</DataItem>
</Attribute>
</Grid>
<Grid Name="vcell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="10 9"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
</Geometry>
<Attribute Name="V" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="9 8">
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
</DataItem>
</Attribute>
</Grid>
</Domain>
</Xdmf>
</pre>
VisIt File Information
Scalars:
Name = "vcell/V"
Mesh is = "vcell"
Centering = zone centered.
The extents are not set.
This variable does not contain enumerated values.
Arrays:
Name = "ucell/U"
Mesh is = "ucell"
Centering = zone centered.
The extents are not set.
Number of variables = 8
Components are: ucell/U-1, ucell/U-2, ucell/U-3, ucell/U-4, ucell/U-5, ucell/U-6, ucell/U-7, ucell/U-8
Expressions:
ucell/U-1 (scalar): array_decompose(ucell/U,0)
ucell/U-2 (scalar): array_decompose(ucell/U,1)
ucell/U-3 (scalar): array_decompose(ucell/U,2)
ucell/U-4 (scalar): array_decompose(ucell/U,3)
ucell/U-5 (scalar): array_decompose(ucell/U,4)
ucell/U-6 (scalar): array_decompose(ucell/U,5)
ucell/U-7 (scalar): array_decompose(ucell/U,6)
ucell/U-8 (scalar): array_decompose(ucell/U,7)
Comments:
I fixed the reader's logic for computing the number of components for cell-centered variables. After the change, reading this xmf file gives File Information: Scalars: Name = "ucell/U", Mesh is = "ucell", Centering = zone centered. The extents are not set. This variable does not contain enumerated values.
|
1.0
|
XDMF reader confusing attributes with same dim as mesh as arrays - Hi all,
I have been working on implementing HDF5 with XDMF as the new I/O for the code I use. I believe I have come across a bug in the VisIt XDMF Plug-in, but I could be wrong.
Whenever a dimension of the data in the attribute matches the last dimension of the mesh in the grid it is on, VisIt views the attribute as an array. As an example, I have recreated the issue using explicit values given in XML, however this is the same issue as when I point to HDF5 datasets. In the below example (attached as well) the Attribute "U" is seen as an array in VisIt, since the last dimension of the mesh grid it is on, 9, is also a dimension of the Attribute. When this dimension matching does not occur however, everything works as expected, as in for the "V" attribute.
I believe this is an issue in the VisIt plug-in because I could not cause this issue in p*r*view. I have used VisIt 2.9 and VisIt 2.7 and the same issue was present. If anyone has encountered this issue before or has an idea how to fix it, it would be greatly appreciated. Below is the example XDMF file that causes the error and the file information from VisIt.
Troublesome XDMF File
<pre>
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
<Domain>
<Grid Name="ucell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="9 10"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
</Geometry>
<Attribute Name="U" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="8 9">
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
</DataItem>
</Attribute>
</Grid>
<Grid Name="vcell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="10 9"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
</Geometry>
<Attribute Name="V" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="9 8">
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
</DataItem>
</Attribute>
</Grid>
</Domain>
</Xdmf>
</pre>
VisIt File Information
Scalars:
Name = "vcell/V"
Mesh is = "vcell"
Centering = zone centered.
The extents are not set.
This variable does not contain enumerated values.
Arrays:
Name = "ucell/U"
Mesh is = "ucell"
Centering = zone centered.
The extents are not set.
Number of variables = 8
Components are: ucell/U-1, ucell/U-2, ucell/U-3, ucell/U-4, ucell/U-5, ucell/U-6, ucell/U-7, ucell/U-8
Expressions:
ucell/U-1 (scalar): array_decompose(ucell/U,0)
ucell/U-2 (scalar): array_decompose(ucell/U,1)
ucell/U-3 (scalar): array_decompose(ucell/U,2)
ucell/U-4 (scalar): array_decompose(ucell/U,3)
ucell/U-5 (scalar): array_decompose(ucell/U,4)
ucell/U-6 (scalar): array_decompose(ucell/U,5)
ucell/U-7 (scalar): array_decompose(ucell/U,6)
ucell/U-8 (scalar): array_decompose(ucell/U,7)
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2355
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: XDMF reader confusing attributes with same dim as mesh as arrays
Assigned to: Kathleen Biagas
Category:
Target version: 2.12.3
Author: Mark Miller
Start: 08/12/2015
Due date:
% Done: 100
Estimated time: 1.0
Created: 08/12/2015 02:08 pm
Updated: 06/13/2017 02:01 pm
Likelihood: 2 - Rare
Severity: 4 - Crash / Wrong Results
Found in version: 2.8.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Hi all,
I have been working on implementing HDF5 with XDMF as the new I/O for the code I use. I believe I have come across a bug in the VisIt XDMF Plug-in, but I could be wrong.
Whenever a dimension of the data in the attribute matches the last dimension of the mesh in the grid it is on, VisIt views the attribute as an array. As an example, I have recreated the issue using explicit values given in XML, however this is the same issue as when I point to HDF5 datasets. In the below example (attached as well) the Attribute "U" is seen as an array in VisIt, since the last dimension of the mesh grid it is on, 9, is also a dimension of the Attribute. When this dimension matching does not occur however, everything works as expected, as in for the "V" attribute.
I believe this is an issue in the VisIt plug-in because I could not cause this issue in p*r*view. I have used VisIt 2.9 and VisIt 2.7 and the same issue was present. If anyone has encountered this issue before or has an idea how to fix it, it would be greatly appreciated. Below is the example XDMF file that causes the error and the file information from VisIt.
Troublesome XDMF File
<pre>
<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
<Domain>
<Grid Name="ucell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="9 10"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
</Geometry>
<Attribute Name="U" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="8 9">
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
10 20 30 40 50 60 70 80 90
</DataItem>
</Attribute>
</Grid>
<Grid Name="vcell" GridType="Uniform">
<Time TimeType="Single" Value="0.00000000E+00"/>
<Topology TopologyType="2DRectMesh" NumberOfElements="10 9"/>
<Geometry GeometryType="VXVY">
<DataItem Dimensions="9" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8
</DataItem>
<DataItem Dimensions="10" NumberType="Float" Precision="8" Format="XML">
0 1 2 3 4 5 6 7 8 9
</DataItem>
</Geometry>
<Attribute Name="V" AttributeType="Scalar" Center="Cell">
<DataItem Format="XML" Dimensions="9 8">
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
10 20 30 40 50 60 70 80
</DataItem>
</Attribute>
</Grid>
</Domain>
</Xdmf>
</pre>
VisIt File Information
Scalars:
Name = "vcell/V"
Mesh is = "vcell"
Centering = zone centered.
The extents are not set.
This variable does not contain enumerated values.
Arrays:
Name = "ucell/U"
Mesh is = "ucell"
Centering = zone centered.
The extents are not set.
Number of variables = 8
Components are: ucell/U-1, ucell/U-2, ucell/U-3, ucell/U-4, ucell/U-5, ucell/U-6, ucell/U-7, ucell/U-8
Expressions:
ucell/U-1 (scalar): array_decompose(ucell/U,0)
ucell/U-2 (scalar): array_decompose(ucell/U,1)
ucell/U-3 (scalar): array_decompose(ucell/U,2)
ucell/U-4 (scalar): array_decompose(ucell/U,3)
ucell/U-5 (scalar): array_decompose(ucell/U,4)
ucell/U-6 (scalar): array_decompose(ucell/U,5)
ucell/U-7 (scalar): array_decompose(ucell/U,6)
ucell/U-8 (scalar): array_decompose(ucell/U,7)
Comments:
I fixed the reader's logic for computing the number of components for cell-centered variables. After the change, reading this xmf file gives File Information: Scalars: Name = "ucell/U", Mesh is = "ucell", Centering = zone centered. The extents are not set. This variable does not contain enumerated values.
|
non_process
|
xdmf reader confusing attributes with same dim as mesh as arrays quote modify hi all i have been working on implementing with xdmf as the new i o for the code i use i believe i have came across a bug in the visit xdmf plug in but i could be wrong whenever a dimension of the data in the attribute matches the last dimension of the mesh in the grid it is on visit views the attribute as an array as an example i have recreated the issue using explicit values given in xml however this is the same issue as when i point to datasets in the below example attached as well the attribute u is seen as an array in visit since the last dimension of the mesh grid it is on is also a dimension of the attribute when this dimension matching does not occur however everything works as expected as in for the v attribute i believe this is an issue in the visit plug in because i could not cause this issue in p r view i have used visit and visit and the same issue was present if anyone has encountered this issue before or has an idea how to fix it it would be greatly appreciated below is the example xdmf file that causes the error and the file information from visit troublesome xdmf file visit file information scalars name vcell v mesh is vcell centering zone centered the extents are not set this variable does not contain enumerated values arrays name ucell u mesh is ucell centering zone centered the extents are not set number of variables components are ucell u ucell u ucell u ucell u ucell u ucell u ucell u ucell u expressions ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the 
original redmine ticket ticket number status resolved project visit tracker bug priority normal subject xdmf reader confusing attributes with same dim as mesh as arrays assigned to kathleen biagas category target version author mark miller start due date done estimated time created pm updated pm likelihood rare severity crash wrong results found in version impact expected use os all support group any description quote modify hi all i have been working on implementing with xdmf as the new i o for the code i use i believe i have came across a bug in the visit xdmf plug in but i could be wrong whenever a dimension of the data in the attribute matches the last dimension of the mesh in the grid it is on visit views the attribute as an array as an example i have recreated the issue using explicit values given in xml however this is the same issue as when i point to datasets in the below example attached as well the attribute u is seen as an array in visit since the last dimension of the mesh grid it is on is also a dimension of the attribute when this dimension matching does not occur however everything works as expected as in for the v attribute i believe this is an issue in the visit plug in because i could not cause this issue in p r view i have used visit and visit and the same issue was present if anyone has encountered this issue before or has an idea how to fix it it would be greatly appreciated below is the example xdmf file that causes the error and the file information from visit troublesome xdmf file visit file information scalars name vcell v mesh is vcell centering zone centered the extents are not set this variable does not contain enumerated values arrays name ucell u mesh is ucell centering zone centered the extents are not set number of variables components are ucell u ucell u ucell u ucell u ucell u ucell u ucell u ucell u expressions ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u 
ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u ucell u scalar array decompose ucell u comments i fix the reader s logic for computing number of components for cell centered variables after the change reading this xmf file file information scalars name ucell u mesh is ucell centering zone centered the extents are not set this variable does not contain enumerated values
| 0
|
738,977
| 25,575,001,342
|
IssuesEvent
|
2022-11-30 21:13:31
|
AspectOfJerry/Discord-JerryBot
|
https://api.github.com/repos/AspectOfJerry/Discord-JerryBot
|
closed
|
Create a module containing other modules
|
enhancement priority
|
Create a module that includes every other custom module. This would simplify module importing.
```
const {..., ..., ...} = require('./JerryUtils')
```
instead of individually importing them
|
1.0
|
Create a module containing other modules - Create a module that includes every other custom module. This would simplify module importing.
```
const {..., ..., ...} = require('./JerryUtils')
```
instead of individually importing them
|
non_process
|
create a module containing other modules create a module that includes every other custom modules this would simplify module importing const require jerryutils instead of individually importing them
| 0
|
14,303
| 17,289,724,712
|
IssuesEvent
|
2021-07-24 13:29:10
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
IIS content type response is making http client to fail
|
annoying inputs outputs processors
|
When the client calls a legacy API that returns a Content-Type such as **application/json; charset=utf-8,application/json**, MIME verification fails, and with it the whole operation, no matter whether the response will be used or not.
The issue occurs at https://github.com/Jeffail/benthos/blob/master/lib/util/http/client/type.go#L430, but curiously the MediaType is correctly returned; I think this is some workaround in the Golang library.
Anyway, as an enhancement, I can think of the following solutions:
1- Trim everything after **;**; it is normal for ASPNet services to return the ContentType plus a charset, which for Go is garbage
2- Special-case mediaType != nil and ignore the error
3- Default to application/json on error and try to read and parse the body
|
1.0
|
IIS content type response is making http client to fail - When the client calls a legacy API that returns a Content-Type such as **application/json; charset=utf-8,application/json**, MIME verification fails, and with it the whole operation, no matter whether the response will be used or not.
The issue occurs at https://github.com/Jeffail/benthos/blob/master/lib/util/http/client/type.go#L430, but curiously the MediaType is correctly returned; I think this is some workaround in the Golang library.
Anyway, as an enhancement, I can think of the following solutions:
1- Trim everything after **;**; it is normal for ASPNet services to return the ContentType plus a charset, which for Go is garbage
2- Special-case mediaType != nil and ignore the error
3- Default to application/json on error and try to read and parse the body
|
process
|
iis content type response is making http client to fail when the client is calling legacy api that returns content type as for example application json charset utf application json mime verification fails and hence all the operation no matter if the response will be used or not the issue occurs on but curiously mediatype is correctly returned i think is some workaround in golang library anyway as an enhancement i think of the following solutions trim all after is normal that aspnet service return contenttype codification and for go is garbage special case for mediatype nil and ignoring the error defaulting to application json in case of error and try to read and parse body
| 1
|
9,630
| 12,576,526,815
|
IssuesEvent
|
2020-06-09 08:02:53
|
kubeflow/manifests
|
https://api.github.com/repos/kubeflow/manifests
|
closed
|
All top level directories in kubeflow/manifests should have OWNERs file
|
area/kustomize kind/feature kind/process lifecycle/stale priority/p1
|
Every top level directory in kubeflow/manifests should have an OWNERs file that specifies appropriate OWNERs for that package.
Reviews should rarely hit the root OWNERs file
https://github.com/kubeflow/manifests/blob/master/OWNERS
Once we have top level OWNERs files in all directories we should probably significantly reduce the number of global approvers in the root file
https://github.com/kubeflow/manifests/blob/master/OWNERS
/cc @krishnadurai
|
1.0
|
All top level directories in kubeflow/manifests should have OWNERs file - Every top level directory in kubeflow/manifests should have an OWNERs file that specifies appropriate OWNERs for that package.
Reviews should rarely hit the root OWNERs file
https://github.com/kubeflow/manifests/blob/master/OWNERS
Once we have top level OWNERs files in all directories we should probably significantly reduce the number of global approvers in the root file
https://github.com/kubeflow/manifests/blob/master/OWNERS
/cc @krishnadurai
|
process
|
all top level directories in kubeflow manifests should have owners file every top level directory in kubeflow manifests should have an owners file that specifies appropriate owners for that package reviews should rarely hit the root owners file once we have top level owners files in all directories we should probably significantly reduce the number of global approvers in the root file cc krishnadurai
| 1
|
434,024
| 12,512,955,861
|
IssuesEvent
|
2020-06-03 00:23:26
|
eclipse-ee4j/glassfish
|
https://api.github.com/repos/eclipse-ee4j/glassfish
|
closed
|
ejb lookup faild when resource adapter with remote interface deployed
|
Component: ejb_container ERR: Assignee Priority: Minor Stale Type: Bug
|
I'm using Sun Java System Application Server 9.1_02 (build b04-fcs)
I also reproduced this problem using GlassFish v3 (build 44)
Scenario:
I have two Enterprise Applications (EAR) TEST-MODULE (EAR_1) and SMS-ACTIVE (EAR_2)
one Web Application (WAR_1) and one Connector Module (RAR_1)
EAR_1 : TEST-MODULE have one ejb (EJB_1):
@Stateless(mappedName = "ejb/sms/active/module/EJBCActivityTest")
@Remote(
{ActivityProcessableRemote.class}
)
public class EJBCActivityTest implements ActivityProcessableRemote {
public void doInit()
{ Logger.getLogger(this.getClass().getName()).info("doInit"); }
}
EAR_2 : SMS-ACTIVE have one ejb (EJB_2):
@Stateless(mappedName = "ejb/sms/active/SMSMainBean")
@TransactionAttribute(value = TransactionAttributeType.NOT_SUPPORTED)
public class SMSMainBean implements SMSBeanRemote {
public void onTextMessage(String message) {
InitialContext ctx;
try
{ ctx = new javax.naming.InitialContext(); Object lookup = ctx .lookup("ejb/sms/active/module/EJBCActivityTest"); ActivityProcessableRemote activity = (ActivityProcessableRemote) lookup; activity.doInit(); } catch (NamingException e) { e.printStackTrace(); }
}
}
WAR_1 : Servlet (SERVLET_1) with <load-on-startup>1</load-on-startup> in
web.xml (this servlet is packaged in SMS-ACTIVE.ear)
public final class LifeCycleServlet extends javax.servlet.http.HttpServlet
implements javax.servlet.Servlet {
private transient java.util.logging.Logger logger;
@Override
public void init() throws ServletException {
// start first part of code
getLogger().info("--------- part 1");
InitialContext ctx;
try { ctx = new javax.naming.InitialContext(); Object lookup = ctx .lookup("ejb/sms/active/module/EJBCActivityTest"); ActivityProcessableRemote activity = (ActivityProcessableRemote) lookup; activity.doInit(); }
catch (NamingException e)
{ e.printStackTrace(); }
// end first part of code
// start second part of code
getLogger().info("--------- part 2");
try { ctx = new javax.naming.InitialContext(); Object lookup = ctx .lookup("ejb/sms/active/SMSMainBean"); SMSBeanRemote activity = (SMSBeanRemote) lookup; activity.onTextMessage("message"); } catch (NamingException e) { e.printStackTrace(); }
// end second part of code
super.init();
}
public java.util.logging.Logger getLogger()
{ return logger == null ? logger = Logger .getLogger(LifeCycleServlet.class.getName()) : logger; }
}
So, in the first part of this code (see the servlet code listing) the servlet calls EJB_1
from SERVLET_1, and in the second part it calls EJB_2, which calls EJB_1.
The first part of this code always succeeds.
But in some cases the second part of this code crashed with exception:
javax.naming.NamingException: ejb ref resolution error for remote business
interfacepl.smi.sms.active.process.ActivityProcessableRemote [Root exception is
java.lang.IllegalArgumentException: object is not an instance of declaring class]
at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:425)
(...)
Caused by: java.lang.IllegalArgumentException: object is not an instance of
declaring class
(...)
I described the cases when this code crashed here:
Case 1.
Step 1\. deploy RAR_1 with ActivityProcessableRemote.class (remote interface of
EJB_1) on glassfish
Step 2\. deploy EAR_1
Step 3\. deploy EAR_2
Step 4\. deploy WAR_1 – init method of SERVLET_1 is called
OUTCOME:
First part of code: OK
Second part of code: Exception.
Case 2.
Step 1\. deploy EAR_1
Step 2\. deploy EAR_2
Step 3\. deploy RAR_1 with ActivityProcessableRemote.class (remote interface of
EJB_1) on glassfish
Step 4\. deploy WAR_1 – init method of SERVLET_1 is called
OUTCOME:
First part of code: OK
Second part of code: OK.
Case 3.
Step 1\. deploy EAR_1
Step 2\. deploy EAR_2
Step 3\. deploy WAR_1 – init method of SERVLET_1 is called
(RAR_1 is not on the glassfish)
OUTCOME:
First part of code: OK
Second part of code: OK.
Case 4.
Step 1\. deploy RAR_1 _without_ definition of ActivityProcessableRemote interface
Step 2\. deploy EAR_1
Step 3\. deploy EAR_2
Step 4\. deploy WAR_1 – init method of SERVLET_1 is called
OUTCOME:
First part of code: OK
Second part of code: OK.
Conclusion:
The problem occurred only when the definition of the ejb interface is deployed in RAR_1
before EAR_2 is deployed.
Full exception:
[#|2009-04-22T05:54:52.542+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=16;_ThreadName=httpWorkerThread-4848-0;_RequestID=ecfcfc68-5b91-4c18-97c4-064f5ad358bb;|
javax.naming.NamingException: ejb ref resolution error for remote business
interfacepl.smi.sms.active.process.ActivityProcessableRemote [Root exception is
java.lang.IllegalArgumentException: object is not an instance of declaring class]
at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:425)
at
com.sun.ejb.containers.RemoteBusinessObjectFactory.getObjectInstance(RemoteBusinessObjectFactory.java:74)
at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:403)
at javax.naming.InitialContext.lookup(InitialContext.java:392)
at pl.smi.sms.active.manager.SMSMainBean.onTextMessage(SMSMainBean.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1067)
at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:176)
at
com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2895)
at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:3986)
at
com.sun.ejb.containers.EJBObjectInvocationHandler.invoke(EJBObjectInvocationHandler.java:203)
at
com.sun.ejb.containers.EJBObjectInvocationHandlerDelegate.invoke(EJBObjectInvocationHandlerDelegate.java:77)
at $Proxy23.onTextMessage(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
com.sun.corba.ee.impl.presentation.rmi.ReflectiveTie._invoke(ReflectiveTie.java:154)
at
com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatchToServant(CorbaServerRequestDispatcherImpl.java:687)
at
com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatch(CorbaServerRequestDispatcherImpl.java:227)
at
com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequestRequest(CorbaMessageMediatorImpl.java:1846)
at
com.sun.corba.ee.impl.protocol.SharedCDRClientRequestDispatcherImpl.marshalingComplete(SharedCDRClientRequestDispatcherImpl.java:183)
at
com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.invoke(CorbaClientDelegateImpl.java:219)
at
com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:192)
at
com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.invoke(StubInvocationHandlerImpl.java:152)
at
com.sun.corba.ee.impl.presentation.rmi.bcel.BCELStubBase.invoke(BCELStubBase.java:225)
at
pl.smi.sms.active.manager.__SMSBeanRemote_Remote_DynamicStub.onTextMessage(pl/smi/sms/active/manager/__SMSBeanRemote_Remote_DynamicStub.java)
at
pl.smi.sms.active.manager._SMSBeanRemote_Wrapper.onTextMessage(pl/smi/sms/active/manager/_SMSBeanRemote_Wrapper.java)
at pl.smi.sms.active.LifeCycleServlet.init(LifeCycleServlet.java:48)
at javax.servlet.GenericServlet.init(GenericServlet.java:254)
at
org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1178)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1007)
at
org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4808)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:5196)
at com.sun.enterprise.web.WebModule.start(WebModule.java:326)
at
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:973)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:957)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:688)
at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1584)
at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1222)
at
com.sun.enterprise.server.WebModuleDeployEventListener.moduleDeployed(WebModuleDeployEventListener.java:182)
at
com.sun.enterprise.server.WebModuleDeployEventListener.moduleDeployed(WebModuleDeployEventListener.java:278)
at
com.sun.enterprise.admin.event.AdminEventMulticaster.invokeModuleDeployEventListener(AdminEventMulticaster.java:974)
at
com.sun.enterprise.admin.event.AdminEventMulticaster.handleModuleDeployEvent(AdminEventMulticaster.java:961)
at
com.sun.enterprise.admin.event.AdminEventMulticaster.processEvent(AdminEventMulticaster.java:464)
at
com.sun.enterprise.admin.event.AdminEventMulticaster.multicastEvent(AdminEventMulticaster.java:176)
at
com.sun.enterprise.admin.server.core.DeploymentNotificationHelper.multicastEvent(DeploymentNotificationHelper.java:308)
at
com.sun.enterprise.deployment.phasing.DeploymentServiceUtils.multicastEvent(DeploymentServiceUtils.java:226)
at
com.sun.enterprise.deployment.phasing.ServerDeploymentTarget.sendStartEvent(ServerDeploymentTarget.java:298)
at
com.sun.enterprise.deployment.phasing.ApplicationStartPhase.runPhase(ApplicationStartPhase.java:132)
at
com.sun.enterprise.deployment.phasing.DeploymentPhase.executePhase(DeploymentPhase.java:108)
at
com.sun.enterprise.deployment.phasing.PEDeploymentService.executePhases(PEDeploymentService.java:919)
at
com.sun.enterprise.deployment.phasing.PEDeploymentService.start(PEDeploymentService.java:591)
at
com.sun.enterprise.deployment.phasing.PEDeploymentService.start(PEDeploymentService.java:635)
at
com.sun.enterprise.admin.mbeans.ApplicationsConfigMBean.start(ApplicationsConfigMBean.java:744)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
com.sun.enterprise.admin.MBeanHelper.invokeOperationInBean(MBeanHelper.java:375)
at
com.sun.enterprise.admin.MBeanHelper.invokeOperationInBean(MBeanHelper.java:358)
at
com.sun.enterprise.admin.config.BaseConfigMBean.invoke(BaseConfigMBean.java:464)
at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.sun.enterprise.admin.util.proxy.ProxyClass.invoke(ProxyClass.java:90)
at $Proxy1.invoke(Unknown Source)
at
com.sun.enterprise.admin.server.core.jmx.SunoneInterceptor.invoke(SunoneInterceptor.java:304)
at
com.sun.enterprise.interceptor.DynamicInterceptor.invoke(DynamicInterceptor.java:174)
at
com.sun.enterprise.admin.jmx.remote.server.callers.InvokeCaller.call(InvokeCaller.java:69)
at
com.sun.enterprise.admin.jmx.remote.server.MBeanServerRequestHandler.handle(MBeanServerRequestHandler.java:155)
at
com.sun.enterprise.admin.jmx.remote.server.servlet.RemoteJmxConnectorServlet.processRequest(RemoteJmxConnectorServlet.java:122)
at
com.sun.enterprise.admin.jmx.remote.server.servlet.RemoteJmxConnectorServlet.doPost(RemoteJmxConnectorServlet.java:193)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:738)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
at
org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:411)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:290)
at
org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:271)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:202)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:94)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:206)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
at
org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:571)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1080)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:150)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
at
org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:571)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1080)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:272)
at
com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.invokeAdapter(DefaultProcessorTask.java:637)
at
com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:568)
at
com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:813)
at
com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
at
com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
at
com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
at
com.sun.enterprise.web.connector.grizzly.WorkerThreadImpl.run(WorkerThreadImpl.java:116)
Caused by: java.lang.IllegalArgumentException: object is not an instance of
declaring class
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:372)
... 106 more
|#]
#### Environment
Operating System: Linux
Platform: Linux
#### Affected Versions
[v2.1.1]
at java lang reflect method invoke method java at com sun enterprise admin mbeanhelper invokeoperationinbean mbeanhelper java at com sun enterprise admin mbeanhelper invokeoperationinbean mbeanhelper java at com sun enterprise admin config baseconfigmbean invoke baseconfigmbean java at com sun jmx interceptor defaultmbeanserverinterceptor invoke defaultmbeanserverinterceptor java at com sun jmx mbeanserver jmxmbeanserver invoke jmxmbeanserver java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com sun enterprise admin util proxy proxyclass invoke proxyclass java at invoke unknown source at com sun enterprise admin server core jmx sunoneinterceptor invoke sunoneinterceptor java at com sun enterprise interceptor dynamicinterceptor invoke dynamicinterceptor java at com sun enterprise admin jmx remote server callers invokecaller call invokecaller java at com sun enterprise admin jmx remote server mbeanserverrequesthandler handle mbeanserverrequesthandler java at com sun enterprise admin jmx remote server servlet remotejmxconnectorservlet processrequest remotejmxconnectorservlet java at com sun enterprise admin jmx remote server servlet remotejmxconnectorservlet dopost remotejmxconnectorservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain servletservice applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invokeinternal standardcontextvalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline doinvoke standardpipeline java at com sun enterprise web webpipeline 
invoke webpipeline java at org apache catalina core standardhostvalve invoke sta naming namingexception ejb ref resolution error for remote business interfacepl smi sms active process activityprocessableremote root exception is java lang illegalargumentexception object is not an instance of declaring class at com sun ejb ejbutils ejbutils java at com sun ejb containers remotebusinessobjectfactory getobjectinstance remotebusinessobjectfactory java warning sun javax enterprise system stream err threadid threadname httpworkerthread requestid ndardhostvalve java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline invoke standardpipeline java at org apache catalina core containerbase invoke containerbase java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline doinvoke standardpipeline java at org apache catalina core standardpipeline invoke standardpipeline java at org apache catalina core containerbase invoke containerbase java at org apache coyote coyoteadapter service coyoteadapter java at com sun enterprise web connector grizzly defaultprocessortask invokeadapter defaultprocessortask java at com sun enterprise web connector grizzly defaultprocessortask doprocess defaultprocessortask java at com sun enterprise web connector grizzly defaultprocessortask process defaultprocessortask java at com sun enterprise web connector grizzly defaultreadtask executeprocessortask defaultreadtask java at com sun enterprise web connector grizzly defaultreadtask dotask defaultreadtask java at com sun enterprise web connector grizzly defaultreadtask dotask defaultreadtask java at com sun enterprise web connector grizzly taskbase run taskbase java at com sun enterprise web connector grizzly 
workerthreadimpl run workerthreadimpl java caused by java lang illegalargumentexception object is not an instance of declaring class at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com sun ejb ejbutils ejbutils java more environment operating system linux platform linux affected versions
| 0
|
289
| 2,730,817,789
|
IssuesEvent
|
2015-04-16 16:49:01
|
hammerlab/pileup.js
|
https://api.github.com/repos/hammerlab/pileup.js
|
opened
|
Optimize for partial cache hits in RemoteFile
|
process
|
Currently, if you request these byte ranges from a `RemoteFile`:
1000-2000 (requests 1000-2000)
1500-3000 (requests 1500-3000)
It will perform those exact requests over the network. But really, for the second request, it should pull the first 500 bytes from cache:
1000-2000 (requests 1000-2000)
1500-3000 (requests 2001-3000)
This would reduce the network requirements for `BamFile`'s network access pattern, for example, where adjacent requests will overlap due to reads that span a compression block.
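The range-trimming described above can be sketched in a few lines. This is a hypothetical Python sketch for illustration (pileup.js is JavaScript, and `trim_request` and the inclusive `(start, stop)` tuples are assumptions, not the library's API):

```python
# Hypothetical sketch of trimming a new byte-range request against a
# previously cached range, using the issue's inclusive start/stop ranges.

def trim_request(cached, requested):
    """Return the sub-range still needed from the network, or None on a full hit."""
    c_start, c_stop = cached
    r_start, r_stop = requested
    if c_start <= r_start and r_stop <= c_stop:
        return None  # the cache fully covers the request
    if c_start <= r_start <= c_stop < r_stop:
        return (c_stop + 1, r_stop)  # the prefix is served from cache
    return requested  # overlaps not handled by this sketch go to the network

# The issue's example: 1000-2000 is cached, then 1500-3000 is requested.
assert trim_request((1000, 2000), (1500, 3000)) == (2001, 3000)
```

A fuller implementation would also stitch the cached prefix onto the fetched suffix and handle suffix-only overlap, but the trimming rule above is the core of the saving.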
|
1.0
|
Optimize for partial cache hits in RemoteFile - Currently, if you request these byte ranges from a `RemoteFile`:
1000-2000 (requests 1000-2000)
1500-3000 (requests 1500-3000)
It will perform those exact requests over the network. But really, for the second request, it should pull the first 500 bytes from cache:
1000-2000 (requests 1000-2000)
1500-3000 (requests 2001-3000)
This would reduce the network requirements for `BamFile`'s network access pattern, for example, where adjacent requests will overlap due to reads that span a compression block.
|
process
|
optimize for partial cache hits in remotefile currently if you request these byte ranges from a remotefile requests requests it will perform those exact requests over the network but really for the second request it should pull the first bytes from cache requests requests this would reduce the network requirements for bamfile s network access pattern for example where adjacent requests will overlap due to reads that span a compression block
| 1
|
40,641
| 10,085,146,559
|
IssuesEvent
|
2019-07-25 17:23:06
|
Automattic/wp-calypso
|
https://api.github.com/repos/Automattic/wp-calypso
|
opened
|
FSE: Broken Hero block in Footer on fsedemo.wordpress.com
|
[Goal] Full Site Editing [Type] Defect
|
We have a demo site set up for the FSE plugin at fsedemo.wordpress.com. I noticed that there was an extra Hero block (from Coblocks) in the Footer. I attempted to edit the footer and remove this block, but I found that anytime I try to focus on the Hero block, Gutenberg crashes.

If you're a member of the site, you can head straight to the editor for the Footer template part with this URL: https://wordpress.com/block-editor/edit/wp_template_part/fsedemo.wordpress.com/39?fse_parent_post=2
|
1.0
|
FSE: Broken Hero block in Footer on fsedemo.wordpress.com - We have a demo site set up for the FSE plugin at fsedemo.wordpress.com. I noticed that there was an extra Hero block (from Coblocks) in the Footer. I attempted to edit the footer and remove this block, but I found that anytime I try to focus on the Hero block, Gutenberg crashes.

If you're a member of the site, you can head straight to the editor for the Footer template part with this URL: https://wordpress.com/block-editor/edit/wp_template_part/fsedemo.wordpress.com/39?fse_parent_post=2
|
non_process
|
fse broken hero block in footer on fsedemo wordpress com we have a demo site setup for the fse plugin at fsedemo wordpress com i noticed that there was an extra hero block from coblocks in the footer i attempted to edit the footer and remove this block but i found that anytime i try to focus on the hero block gutenberg crashes if you re a member of the site you can head straight to the editor for the footer template part with this url
| 0
|
9,293
| 11,305,784,002
|
IssuesEvent
|
2020-01-18 08:49:32
|
PG85/OpenTerrainGenerator
|
https://api.github.com/repos/PG85/OpenTerrainGenerator
|
closed
|
Bug: Mekanism + OTG Nether Portal not working
|
Bug - Issue Dimensions Forge Mod Compatibility
|
I have around 39 other mods installed (AE2, better animals+, code chicken, coroutil, corpse, craft tweaker, hwyla, iblis, ice and fire, ichun util, jei, large fluid tank, llibrary, MCA, mekanism, mekanism generators, mekanism tools, mob dismemberment, my chunk loader, NEI, optifine, radixcore, resynth, u team core, useful back packs, voxel map and weather 2) and the nether worked perfectly, but when I installed biome bundle and OTG, when I activated the nether portal, instead of the portal blocks, a purple and black block appeared (as if there was no texture for it) and when entering them, the portal noise was still there, but it wouldn't teleport me to the nether and after a while the portal disappears.
|
True
|
Bug: Mekanism + OTG Nether Portal not working - I have around 39 other mods installed (AE2, better animals+, code chicken, coroutil, corpse, craft tweaker, hwyla, iblis, ice and fire, ichun util, jei, large fluid tank, llibrary, MCA, mekanism, mekanism generators, mekanism tools, mob dismemberment, my chunk loader, NEI, optifine, radixcore, resynth, u team core, useful back packs, voxel map and weather 2) and the nether worked perfectly, but when I installed biome bundle and OTG, when I activated the nether portal, instead of the portal blocks, a purple and black block appeared (as if there was no texture for it) and when entering them, the portal noise was still there, but it wouldn't teleport me to the nether and after a while the portal disappears.
|
non_process
|
bug mekanism otg nether portal not working i have around other mods installed better animals code chicken coroutil corpse craft tweaker hwyla iblis ice and fire ichun util jei large fluid tank llibrary mca mekanism mekanism generators mekanism tools mob dismemberment my chunk loader nei optifine radixcore resynth u team core useful back packs voxel map and weather and the nether worked perfectly but when i installed biome bundle and otg when i activated the nether portal instead of the portal blocks a purple and black block appeared as if there was no texture for it and when entering them the portal noise was still there but it wouldn t teleport me to the nether and after a while the portal disappears
| 0
|
31,849
| 15,106,241,622
|
IssuesEvent
|
2021-02-08 14:04:26
|
yejineee/VanillaBank
|
https://api.github.com/repos/yejineee/VanillaBank
|
opened
|
Caching the previous/next month
|
Feat🍦 performance 🌟
|
## ⚽️ Goal
Try caching the transaction history for the previous and next months. When the user selects the previous or next month, the cached value can be returned immediately, which should provide a better user experience.
## References
[Previous/next page cache - TOAST](https://ui.toast.com/weekly-pick/ko_20201201)
## ✅ CheckPoint
- [ ] The previous and next months are cached even without an explicit request from the user.
- [ ] The implementation is written up in a post.
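The adjacent-month prefetch described in this issue can be sketched as follows. This is a hypothetical Python sketch, not the VanillaBank code; `fetch_month` stands in for whatever API returns a month's transaction history:

```python
# Hypothetical sketch: whenever a month is viewed, eagerly fetch its
# neighbours too, so navigating to the previous/next month is a cache hit.

def shift_month(year, month, delta):
    """Move a (year, month) pair by `delta` months, handling year rollover."""
    index = year * 12 + (month - 1) + delta
    return index // 12, index % 12 + 1

def make_history_cache(fetch_month):
    cache = {}

    def get(year, month):
        # Fetch the requested month plus both neighbours, skipping cached ones.
        for key in (shift_month(year, month, -1), (year, month), shift_month(year, month, 1)):
            if key not in cache:
                cache[key] = fetch_month(*key)
        return cache[(year, month)]

    return get, cache
```

After `get(2021, 2)`, January and March 2021 are already cached, so selecting either neighbour needs no further network round trip.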
|
True
|
Caching the previous/next month - ## ⚽️ Goal
Try caching the transaction history for the previous and next months. When the user selects the previous or next month, the cached value can be returned immediately, which should provide a better user experience.
## References
[Previous/next page cache - TOAST](https://ui.toast.com/weekly-pick/ko_20201201)
## ✅ CheckPoint
- [ ] The previous and next months are cached even without an explicit request from the user.
- [ ] The implementation is written up in a post.
|
non_process
|
caching the previous next month goal try caching the transaction history for the previous and next months when the user selects the previous or next month the cached value can be returned immediately which should provide a better user experience references checkpoint the previous and next months are cached even without an explicit request from the user the implementation is written up in a post
| 0
|
70,952
| 23,388,166,526
|
IssuesEvent
|
2022-08-11 15:21:13
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Empty table header cells in Revisions table in CMS
|
Needs refining ⭐️ Sitewide CMS 508/Accessibility 508-defect-4
|
## Description
On the Revisions screen in the CMS, there are two empty table header cells in the table. Empty table header cells create confusion for users as they may not be able to fully understand the content in the associated table data cells.
## Screenshot

## Accessibility Standard
WCAG version 2.0 A, [Criterion 1.3.1](https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships.html)
## Acceptance Criteria
- [ ] Technical review to determine feasibility of adding content to table headers
- [ ] UX/IA to determine what content should say
- [ ] Change Management consulted
- [ ] Implementation ticket created
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
1.0
|
Empty table header cells in Revisions table in CMS - ## Description
On the Revisions screen in the CMS, there are two empty table header cells in the table. Empty table header cells create confusion for users as they may not be able to fully understand the content in the associated table data cells.
## Screenshot

## Accessibility Standard
WCAG version 2.0 A, [Criterion 1.3.1](https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships.html)
## Acceptance Criteria
- [ ] Technical review to determine feasibility of adding content to table headers
- [ ] UX/IA to determine what content should say
- [ ] Change Management consulted
- [ ] Implementation ticket created
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
non_process
|
empty table header cells in revisions table in cms description on the revisions screen in the cms there are two empty table header cells in the table empty table header cells create confusion for users as they may not be able to fully understand the content in the associated table data cells screenshot accessibility standard wcag version a acceptance criteria technical review to determine feasibility of adding content to table headers ux ia to determine what content should say change management consulted implementation ticket created cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
| 0
|
102,340
| 12,768,787,584
|
IssuesEvent
|
2020-06-30 01:35:54
|
xqrzd/kudu-client-net
|
https://api.github.com/repos/xqrzd/kudu-client-net
|
opened
|
Scanner parser interface
|
Design Epic
|
Figure out how scan data should be exposed. Should scan data be exposed incrementally as it's available, or only once it has been buffered (like the Java/C++ clients)? Exposing incrementally would slightly reduce latency, but it's more complex. Implementations would also have to be able to handle both Kudu formats (row and columnar).
### Current implementation (allows for incremental parsing):
```csharp
interface IKuduScanParser<T> : IDisposable
{
T Output { get; }
void BeginProcessingSidecars(
KuduSchema scanSchema,
ScanResponsePB scanResponse,
KuduSidecarOffsets sidecars);
void ParseSidecarSegment(ref SequenceReader<byte> reader);
}
```
IKuduScanParser should be provided by a factory, as if this parser is passed to a scanner, that scanner cannot be cached and reused, due to the implementation of IKuduScanParser mutating internal state.
I wouldn't expect anyone to implement IKuduScanParser, primary motivation of incremental parsing would be for provided helpers, such as a Dapper style simple object mapper, or a ML.NET IDataView generator. In theory, incremental parsing would allow these types to outperform RowResult, as RowResult must be buffered entirely before it can be used.
### Possible example of exposing buffered data:
```csharp
class KuduSidecars : IDisposable
{
int NumSidecars { get; }
ReadOnlyMemory<byte> GetSidecarMemory(int sidecar);
ReadOnlySpan<byte> GetSidecarSpan(int sidecar);
}
```
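For the buffered alternative, a rough Python analogue of the proposed `KuduSidecars` accessor could look like the sketch below. The offset layout (start offsets into one contiguous buffer) is an assumption for illustration, not the actual Kudu wire format:

```python
# Hypothetical sketch of a buffered sidecar container: the whole response is
# held in one buffer and each sidecar is exposed as a zero-copy slice over it.

class BufferedSidecars:
    def __init__(self, data, offsets):
        self._data = memoryview(data)
        self._offsets = offsets  # start offset of each sidecar, in order

    @property
    def num_sidecars(self):
        return len(self._offsets)

    def get_sidecar(self, i):
        # A sidecar runs from its start offset to the next one (or buffer end).
        start = self._offsets[i]
        end = self._offsets[i + 1] if i + 1 < self.num_sidecars else len(self._data)
        return self._data[start:end]

sidecars = BufferedSidecars(b"aaabbcccc", [0, 3, 5])
assert bytes(sidecars.get_sidecar(1)) == b"bb"
```

Because everything is buffered up front, a parser built on this shape is simpler than the incremental interface, at the cost of holding the full response before any row is usable.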
|
1.0
|
Scanner parser interface - Figure out how scan data should be exposed. Should scan data be exposed incrementally as it's available, or only once it has been buffered (like the Java/C++ clients)? Exposing incrementally would slightly reduce latency, but it's more complex. Implementations would also have to be able to handle both Kudu formats (row and columnar).
### Current implementation (allows for incremental parsing):
```csharp
interface IKuduScanParser<T> : IDisposable
{
T Output { get; }
void BeginProcessingSidecars(
KuduSchema scanSchema,
ScanResponsePB scanResponse,
KuduSidecarOffsets sidecars);
void ParseSidecarSegment(ref SequenceReader<byte> reader);
}
```
IKuduScanParser should be provided by a factory, as if this parser is passed to a scanner, that scanner cannot be cached and reused, due to the implementation of IKuduScanParser mutating internal state.
I wouldn't expect anyone to implement IKuduScanParser, primary motivation of incremental parsing would be for provided helpers, such as a Dapper style simple object mapper, or a ML.NET IDataView generator. In theory, incremental parsing would allow these types to outperform RowResult, as RowResult must be buffered entirely before it can be used.
### Possible example of exposing buffered data:
```csharp
class KuduSidecars : IDisposable
{
int NumSidecars { get; }
ReadOnlyMemory<byte> GetSidecarMemory(int sidecar);
ReadOnlySpan<byte> GetSidecarSpan(int sidecar);
}
```
|
non_process
|
scanner parser interface figure out how scan data should be exposed should scan data be exposed incrementally as it s available or only once it has been buffered like the java c clients exposing incrementally would slightly reduce latency but it s more complex implementations would also have to be able to handle both kudu formats row and columnar current implementation allows for incremental parsing csharp interface ikuduscanparser idisposable t output get void beginprocessingsidecars kuduschema scanschema scanresponsepb scanresponse kudusidecaroffsets sidecars void parsesidecarsegment ref sequencereader reader ikuduscanparser should be provided by a factory as if this parser is passed to a scanner that scanner cannot be cached and reused due to the implementation of ikuduscanparser mutating internal state i wouldn t expect anyone to implement ikuduscanparser primary motivation of incremental parsing would be for provided helpers such as a dapper style simple object mapper or a ml net idataview generator in theory incremental parsing would allow these types to outperform rowresult as rowresult must be buffered entirely before it can be used possible example of exposing buffered data csharp class kudusidecars idisposable int numsidecars get readonlymemory getsidecarmemory int sidecar readonlyspan getsidecarspan int sidecar
| 0
|
113,468
| 9,647,671,137
|
IssuesEvent
|
2019-05-17 14:28:06
|
ManageIQ/manageiq-ui-classic
|
https://api.github.com/repos/ManageIQ/manageiq-ui-classic
|
closed
|
sporadic test failure with ops_controller/settings/common_spec.rb
|
bug/sporadic test failure
|
```
1) OpsController OpsController::Settings::Common #settings_update won't render form buttons after rhn settings submission
Failure/Error:
MiqHashStruct.new(
:registered => username.present?,
:registration_type => db.registration_type,
:user_name => username,
:server => db.registration_server,
:company_name => db.registration_organization_name,
:subscription => rhn_subscription_map[db.registration_type] || 'None',
:update_repo_name => db.update_repo_name,
:version_available => db.cfme_version_available,
:registration_http_proxy_server => db.registration_http_proxy_server
NameError:
uninitialized constant OpsController::Settings::RHN::MiqHashStruct
# ./app/controllers/ops_controller/settings/rhn.rb:92:in `rhn_subscription'
# ./app/controllers/ops_controller/settings/common.rb:1196:in `settings_get_info'
# ./app/controllers/ops_controller/settings/rhn.rb:146:in `rhn_save_subscription'
# ./app/controllers/ops_controller/settings/common.rb:377:in `settings_update_save'
# ./app/controllers/ops_controller/settings/common.rb:164:in `settings_update'
# ./spec/controllers/ops_controller/settings/common_spec.rb:161:in `block (4 levels) in <top (required)>'
```
I've narrowed it down to a particular test+seed:
`bundle exec rspec spec/controllers/ops_controller/settings/common_spec.rb --seed 55112`
I *think* it's a const resolution problem between `OpsController::Settings` and `::Settings` or perhaps between `OpsController::Settings::RHN` and `::RHN` (the model), but I could be wrong. This may need a core change instead of a ui-specific change.
cc @bdunne as this is related to RHN
|
1.0
|
sporadic test failure with ops_controller/settings/common_spec.rb - ```
1) OpsController OpsController::Settings::Common #settings_update won't render form buttons after rhn settings submission
Failure/Error:
MiqHashStruct.new(
:registered => username.present?,
:registration_type => db.registration_type,
:user_name => username,
:server => db.registration_server,
:company_name => db.registration_organization_name,
:subscription => rhn_subscription_map[db.registration_type] || 'None',
:update_repo_name => db.update_repo_name,
:version_available => db.cfme_version_available,
:registration_http_proxy_server => db.registration_http_proxy_server
NameError:
uninitialized constant OpsController::Settings::RHN::MiqHashStruct
# ./app/controllers/ops_controller/settings/rhn.rb:92:in `rhn_subscription'
# ./app/controllers/ops_controller/settings/common.rb:1196:in `settings_get_info'
# ./app/controllers/ops_controller/settings/rhn.rb:146:in `rhn_save_subscription'
# ./app/controllers/ops_controller/settings/common.rb:377:in `settings_update_save'
# ./app/controllers/ops_controller/settings/common.rb:164:in `settings_update'
# ./spec/controllers/ops_controller/settings/common_spec.rb:161:in `block (4 levels) in <top (required)>'
```
I've narrowed it down to a particular test+seed:
`bundle exec rspec spec/controllers/ops_controller/settings/common_spec.rb --seed 55112`
I *think* it's a const resolution problem between `OpsController::Settings` and `::Settings` or perhaps between `OpsController::Settings::RHN` and `::RHN` (the model), but I could be wrong. This may need a core change instead of a ui-specific change.
cc @bdunne as this is related to RHN
|
non_process
|
sporadic test failure with ops controller settings common spec rb opscontroller opscontroller settings common settings update won t render form buttons after rhn settings submission failure error miqhashstruct new registered username present registration type db registration type user name username server db registration server company name db registration organization name subscription rhn subscription map none update repo name db update repo name version available db cfme version available registration http proxy server db registration http proxy server nameerror uninitialized constant opscontroller settings rhn miqhashstruct app controllers ops controller settings rhn rb in rhn subscription app controllers ops controller settings common rb in settings get info app controllers ops controller settings rhn rb in rhn save subscription app controllers ops controller settings common rb in settings update save app controllers ops controller settings common rb in settings update spec controllers ops controller settings common spec rb in block levels in i ve narrowed it down to a particular test seed bundle exec rspec spec controllers ops controller settings common spec rb seed i think it s a const resolution problem between opscontroller settings and settings or perhaps between opscontroller settings rhn and rhn the model but i could be wrong this may need a core change instead of a ui specific change cc bdunne as this is related to rhn
| 0
|
48,503
| 2,998,302,493
|
IssuesEvent
|
2015-07-23 13:26:55
|
jayway/powermock
|
https://api.github.com/repos/jayway/powermock
|
closed
|
Upgrade to EasyMock class extension 2.5
|
enhancement imported Milestone-Release2.0 Priority-High wontfix
|
_From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on May 25, 2009 13:01:12_
Update the EasyMock extension API to use EasyMock 2.5
_Original issue: http://code.google.com/p/powermock/issues/detail?id=109_
|
1.0
|
Upgrade to EasyMock class extension 2.5 - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on May 25, 2009 13:01:12_
Update the EasyMock extension API to use EasyMock 2.5
_Original issue: http://code.google.com/p/powermock/issues/detail?id=109_
|
non_process
|
upgrade to easymock class extension from on may update the easymock extension api to use easymock original issue
| 0
|
9,396
| 12,397,111,504
|
IssuesEvent
|
2020-05-20 21:55:31
|
mattermost/mattermost-developer-documentation
|
https://api.github.com/repos/mattermost/mattermost-developer-documentation
|
opened
|
Process for promoting a contributor to a core committer
|
Process
|
This is a proposed process on how to promote a contributor to a [core committer](https://developers.mattermost.com/contribute/getting-started/core-committers/). Feedback welcome!
1. Identify a core committer
- A core committer is a maintainer on the Mattermost project that has merge access to Mattermost repositories. They are responsible for reviewing pull requests, cultivating the Mattermost developer community, and guiding the technical vision of Mattermost. If you have a question or need some help, these are the people to ask.
- If you feel someone in the community would be interested in such activities, then they may be a great candidate for being promoted to a core committer!
2. Nominate a core committer
- If the nomination is for the Mattermost project, or for a core repository (e.g. mattermost-server, mattermost-webapp), raise the topic in the weekly developers meeting
- If the nomination is for a repository managed by a specific team (e.g. mattermost-plugin-jira), raise the topic with that team
3. Team discussed nomination
- During the meeting, the nomination is discussed. Meeting participants should feel empowered to raise concerns on the nomination if any (e.g. length of involvement in the community, skill sets)
4. If the nomination is agreed on, the person who nominated the contributor reaches out to them
- The promotion should be opt-in, and the contributor should feel empowered to decline the promotion if they so choose to
5. If the contributor accepts the nomination, they are
- given merge access in the respective GitHub repositories
- added to the [Mattermost GitHub organization](https://github.com/orgs/mattermost/people), which shows up in their GitHub profile
- added as reviewers to relevant pull requests, to offer technical guidance and reviews
6. The promotion is announced (details TBD)
7. The new core committer is gifted with lots of :heart: and lots of swag :tada: (details TBD)
|
1.0
|
Process for promoting a contributor to a core committer - This is a proposed process on how to promote a contributor to a [core committer](https://developers.mattermost.com/contribute/getting-started/core-committers/). Feedback welcome!
1. Identify a core committer
- A core committer is a maintainer on the Mattermost project that has merge access to Mattermost repositories. They are responsible for reviewing pull requests, cultivating the Mattermost developer community, and guiding the technical vision of Mattermost. If you have a question or need some help, these are the people to ask.
- If you feel someone in the community would be interested in such activities, then they may be a great candidate for being promoted to a core committer!
2. Nominate a core committer
- If the nomination is for the Mattermost project, or for a core repository (e.g. mattermost-server, mattermost-webapp), raise the topic in the weekly developers meeting
- If the nomination is for a repository managed by a specific team (e.g. mattermost-plugin-jira), raise the topic with that team
3. Team discussed nomination
- During the meeting, the nomination is discussed. Meeting participants should feel empowered to raise concerns on the nomination if any (e.g. length of involvement in the community, skill sets)
4. If the nomination is agreed on, the person who nominated the contributor reaches out to them
- The promotion should be opt-in, and the contributor should feel empowered to decline the promotion if they so choose to
5. If the contributor accepts the nomination, they are
- given merge access in the respective GitHub repositories
- added to the [Mattermost GitHub organization](https://github.com/orgs/mattermost/people), which shows up in their GitHub profile
- added as reviewers to relevant pull requests, to offer technical guidance and reviews
6. The promotion is announced (details TBD)
7. The new core committer is gifted with lots of :heart: and lots of swag :tada: (details TBD)
|
process
|
| 1
|
5,214
| 7,999,829,495
|
IssuesEvent
|
2018-07-22 08:06:44
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
closed
|
password protect iscsi shares
|
component:data processing enhancement priority: normal
|
Just in order to avoid anyone else accidentally connecting to it
|
1.0
|
|
process
|
| 1
|
145,882
| 11,711,600,513
|
IssuesEvent
|
2020-03-09 05:49:57
|
2643/2020-Code
|
https://api.github.com/repos/2643/2020-Code
|
closed
|
feeding power cells from the conveyor belt to the shooter
|
shooter test
|
Do the power cells properly feed on the conveyor belt to the shooter with the current algorithms?
|
1.0
|
|
non_process
|
| 0
|
19,577
| 25,898,525,590
|
IssuesEvent
|
2022-12-15 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Thu, 15 Dec 22
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Event-based YOLO Object Detection: Proof of Concept for Forward Perception System
- **Authors:** Waseem Shariff, Muhammad Ali Farooq, Joe Lemley, Peter Corcoran
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.07181
- **Pdf link:** https://arxiv.org/pdf/2212.07181
- **Abstract**
Neuromorphic vision or event vision is an advanced vision technology, where in contrast to the visible camera that outputs pixels, the event vision generates neuromorphic events every time there is a brightness change which exceeds a specific threshold in the field of view (FOV). This study focuses on leveraging neuromorphic event data for roadside object detection. This is a proof of concept towards building artificial intelligence (AI) based pipelines which can be used for forward perception systems for advanced vehicular applications. The focus is on building efficient state-of-the-art object detection networks with better inference results for fast-moving forward perception using an event camera. In this article, the event-simulated A2D2 dataset is manually annotated and trained on two different YOLOv5 networks (small and large variants). To further assess its robustness, single model testing and ensemble model testing are carried out.
### A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Differential Geometry (math.DG)
- **Arxiv link:** https://arxiv.org/abs/2212.07350
- **Pdf link:** https://arxiv.org/pdf/2212.07350
- **Abstract**
Event cameras are emerging vision sensors and their advantages are suitable for various applications such as autonomous robots. Contrast maximization (CMax), which provides state-of-the-art accuracy on motion estimation using events, may suffer from an overfitting problem called event collapse. Prior works are computationally expensive or cannot alleviate the overfitting, which undermines the benefits of the CMax framework. We propose a novel, computationally efficient regularizer based on geometric principles to mitigate event collapse. The experiments show that the proposed regularizer achieves state-of-the-art accuracy results, while its reduced computational complexity makes it two to four times faster than previous approaches. To the best of our knowledge, our regularizer is the only effective solution for event collapse without trading off runtime. We hope our work opens the door for future applications that unlock the advantages of event cameras.
## Keyword: event camera
### Event-based YOLO Object Detection: Proof of Concept for Forward Perception System
- **Authors:** Waseem Shariff, Muhammad Ali Farooq, Joe Lemley, Peter Corcoran
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.07181
- **Pdf link:** https://arxiv.org/pdf/2212.07181
- **Abstract**
Neuromorphic vision or event vision is an advanced vision technology, where in contrast to the visible camera that outputs pixels, the event vision generates neuromorphic events every time there is a brightness change which exceeds a specific threshold in the field of view (FOV). This study focuses on leveraging neuromorphic event data for roadside object detection. This is a proof of concept towards building artificial intelligence (AI) based pipelines which can be used for forward perception systems for advanced vehicular applications. The focus is on building efficient state-of-the-art object detection networks with better inference results for fast-moving forward perception using an event camera. In this article, the event-simulated A2D2 dataset is manually annotated and trained on two different YOLOv5 networks (small and large variants). To further assess its robustness, single model testing and ensemble model testing are carried out.
### A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Differential Geometry (math.DG)
- **Arxiv link:** https://arxiv.org/abs/2212.07350
- **Pdf link:** https://arxiv.org/pdf/2212.07350
- **Abstract**
Event cameras are emerging vision sensors and their advantages are suitable for various applications such as autonomous robots. Contrast maximization (CMax), which provides state-of-the-art accuracy on motion estimation using events, may suffer from an overfitting problem called event collapse. Prior works are computationally expensive or cannot alleviate the overfitting, which undermines the benefits of the CMax framework. We propose a novel, computationally efficient regularizer based on geometric principles to mitigate event collapse. The experiments show that the proposed regularizer achieves state-of-the-art accuracy results, while its reduced computational complexity makes it two to four times faster than previous approaches. To the best of our knowledge, our regularizer is the only effective solution for event collapse without trading off runtime. We hope our work opens the door for future applications that unlock the advantages of event cameras.
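Editorial aside: the contrast-maximization objective the abstract builds on can be illustrated with a toy sketch — accumulate events warped by a candidate velocity into an image, and score the image by its variance. The event tuple format, the one-dimensional warp, and the candidate grid below are assumptions for illustration, not the paper's method (the paper's contribution, a geometric regularizer against event collapse, is not modeled here):

```python
# Toy contrast-maximization sketch: warp events by a candidate x-velocity,
# accumulate them into a count image, and score the image by its variance.
# Event tuples (x, y, t) and the 1-D warp are illustrative assumptions.

def accumulate(events, vx, width=8, height=8):
    """Warp each event to t=0 with velocity vx and build a count image."""
    img = [[0.0] * width for _ in range(height)]
    for x, y, t in events:
        wx = int(round(x - vx * t))  # warp along x only
        if 0 <= wx < width and 0 <= y < height:
            img[y][wx] += 1.0
    return img

def variance(img):
    """Variance of pixel intensities: the contrast objective."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def best_velocity(events, candidates):
    """Pick the candidate velocity that maximizes image contrast."""
    return max(candidates, key=lambda v: variance(accumulate(events, v)))

# Events generated by an edge moving at vx = 2 pixels per unit time.
events = [(2 + 2 * t, y, t) for t in (0, 1, 2) for y in (1, 2, 3)]
print(best_velocity(events, [0, 1, 2, 3]))  # the true velocity scores highest
```

The candidate matching the true motion concentrates all events into a single column, maximizing variance; other candidates smear them across the image.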
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare
- **Authors:** Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, Josef Sivic
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2212.06870
- **Pdf link:** https://arxiv.org/pdf/2212.06870
- **Abstract**
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### PD-Quant: Post-Training Quantization based on Prediction Difference Metric
- **Authors:** Jiawei Liu, Lin Niu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.07048
- **Pdf link:** https://arxiv.org/pdf/2212.07048
- **Abstract**
As a neural network compression technique, post-training quantization (PTQ) transforms a pre-trained model into a quantized model using a lower-precision data type. However, the prediction accuracy will decrease because of the quantization noise, especially in extremely low-bit settings. How to determine the appropriate quantization parameters (e.g., scaling factors and rounding of weights) is the main problem at hand. Many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization. Using this distance as the metric to optimize the quantization parameters only considers local information. We analyze the problem of minimizing local metrics and indicate that it would not result in optimal quantization parameters. Furthermore, the quantized model suffers from overfitting due to the small number of calibration samples in PTQ. In this paper, we propose PD-Quant to solve these problems. PD-Quant uses the information of differences between network prediction before and after quantization to determine the quantization parameters. To mitigate the overfitting problem, PD-Quant adjusts the distribution of activations in PTQ. Experiments show that PD-Quant leads to better quantization parameters and improves the prediction accuracy of quantized models, especially in low-bit settings. For example, PD-Quant pushes the accuracy of ResNet-18 up to 53.08% and RegNetX-600MF up to 40.92% in weight 2-bit activation 2-bit. The code will be released at https://github.com/hustvl/PD-Quant.
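Editorial aside: the "local metric" the abstract critiques — choosing quantization parameters by minimizing a reconstruction distance — can be sketched in a few lines. The 8-bit symmetric range and the candidate-scale grid are assumptions for illustration; PD-Quant's prediction-difference metric is not implemented here:

```python
# Minimal post-training quantization sketch: pick a symmetric scale for a
# weight list by minimizing reconstruction error (a "local metric", in the
# abstract's terms). The int8 range and the candidate grid are assumptions.

QMAX = 127  # symmetric int8 range

def quantize(weights, scale):
    """Round weights to integers at the given scale, clamped to int8."""
    return [max(-QMAX, min(QMAX, round(w / scale))) for w in weights]

def dequantize(q, scale):
    return [v * scale for v in q]

def recon_error(weights, scale):
    """Sum of squared differences between weights and their reconstruction."""
    deq = dequantize(quantize(weights, scale), scale)
    return sum((w - d) ** 2 for w, d in zip(weights, deq))

def best_scale(weights, candidates):
    """Grid-search the scale that minimizes local reconstruction error."""
    return min(candidates, key=lambda s: recon_error(weights, s))

weights = [0.11, -0.52, 0.33, 0.98, -0.07]
scales = [max(abs(w) for w in weights) / QMAX * k for k in (0.5, 1.0, 2.0)]
s = best_scale(weights, scales)
print(recon_error(weights, s))
```

A scale that is too small clips large weights; one that is too large wastes resolution on small ones — the grid search trades these off, but only against this local distance, which is exactly the limitation the paper targets.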
### Image Compression with Product Quantized Masked Image Modeling
- **Authors:** Alaaeldin El-Nouby, Matthew J. Muckley, Karen Ullrich, Ivan Laptev, Jakob Verbeek, Hervé Jégou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.07372
- **Pdf link:** https://arxiv.org/pdf/2212.07372
- **Abstract**
Recent neural compression methods have been based on the popular hyperprior framework. It relies on Scalar Quantization and offers a very strong compression performance. This contrasts with recent advances in image generation and representation learning, where Vector Quantization is more commonly employed. In this work, we attempt to bring these lines of research closer by revisiting vector quantization for image compression. We build upon the VQ-VAE framework and introduce several modifications. First, we replace the vanilla vector quantizer by a product quantizer. This intermediate solution between vector and scalar quantization allows for a much wider set of rate-distortion points: It implicitly defines high-quality quantizers that would otherwise require intractably large codebooks. Second, inspired by the success of Masked Image Modeling (MIM) in the context of self-supervised learning and generative image models, we propose a novel conditional entropy model which improves entropy coding by modelling the co-dependencies of the quantized latent codes. The resulting PQ-MIM model is surprisingly effective: its compression performance is on par with recent hyperprior methods. It also outperforms HiFiC in terms of FID and KID metrics when optimized with perceptual losses (e.g. adversarial). Finally, since PQ-MIM is compatible with image generation frameworks, we show qualitatively that it can operate under a hybrid mode between compression and generation, with no further training or finetuning. As a result, we explore the extreme compression regime where an image is compressed into 200 bytes, i.e., less than a tweet.
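Editorial aside: the product quantizer the abstract swaps in for a vanilla vector quantizer can be sketched as follows — split a vector into sub-vectors and encode each against its own small codebook. The hand-picked codebooks below are assumptions for illustration; in PQ-MIM they would be learned:

```python
# Product-quantization sketch: split a vector into sub-vectors and encode
# each with its own tiny codebook (nearest centroid). The codebooks here
# are hand-picked assumptions, not learned as in VQ-VAE / PQ-MIM.

def nearest(codebook, sub):
    """Index of the codeword closest to `sub` in squared distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, sub))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

def pq_encode(vec, codebooks):
    """One codeword index per sub-vector."""
    m = len(codebooks)
    d = len(vec) // m
    return [nearest(cb, vec[i * d:(i + 1) * d]) for i, cb in enumerate(codebooks)]

def pq_decode(codes, codebooks):
    """Concatenate the selected codewords back into a full vector."""
    out = []
    for code, cb in zip(codes, codebooks):
        out.extend(cb[code])
    return out

codebooks = [
    [(0.0, 0.0), (1.0, 1.0)],   # codebook for the first 2 dims
    [(0.0, 1.0), (1.0, 0.0)],   # codebook for the last 2 dims
]
codes = pq_encode([0.9, 1.1, 0.1, 0.8], codebooks)
print(codes, pq_decode(codes, codebooks))  # → [1, 0] [1.0, 1.0, 0.0, 1.0]
```

With m codebooks of k entries each, the product quantizer represents k^m distinct reconstructions while storing only m*k codewords — the "wider set of rate-distortion points" the abstract refers to.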
## Keyword: RAW
### NLIP: Noise-robust Language-Image Pre-training
- **Authors:** Runhui Huang, Yanxin Long, Jianhua Han, Hang Xu, Xiwen Liang, Chunjing Xu, Xiaodan Liang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.07086
- **Pdf link:** https://arxiv.org/pdf/2212.07086
- **Abstract**
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous success on a wide range of downstream tasks, e.g., zero-shot classification, retrieval and image captioning. However, their successes highly rely on the scale and quality of web-crawled data that naturally contain incomplete and noisy information (e.g., wrong or irrelevant content). Existing works either design manual rules to clean data or generate pseudo-targets as auxiliary signals for reducing noise impact, which do not explicitly tackle both the incorrect and incomplete challenges simultaneously. In this paper, to automatically mitigate the impact of noise by solely mining over existing data, we propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion. First, in noise-harmonization scheme, NLIP estimates the noise probability of each pair according to the memorization effect of cross-modal transformers, then adopts noise-adaptive regularization to harmonize the cross-modal alignments with varying degrees. Second, in noise-completion scheme, to enrich the missing object information of text, NLIP injects a concept-conditioned cross-modal decoder to obtain semantic-consistent synthetic captions to complete noisy ones, which uses the retrieved visual concepts (i.e., objects' names) for the corresponding image to guide captioning generation. By collaboratively optimizing noise-harmonization and noise-completion schemes, our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way. Extensive experiments show the significant performance improvements of our NLIP using only 26M data over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot classification datasets, MSCOCO image captioning and zero-shot image-text retrieval tasks.
### Interactive Sketching of Mannequin Poses
- **Authors:** Gizem Unlu, Mohamed Sayed, Gabriel Brostow
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2212.07098
- **Pdf link:** https://arxiv.org/pdf/2212.07098
- **Abstract**
It can be easy and even fun to sketch humans in different poses. In contrast, creating those same poses on a 3D graphics "mannequin" is comparatively tedious. Yet 3D body poses are necessary for various downstream applications. We seek to preserve the convenience of 2D sketching while giving users of different skill levels the flexibility to accurately and more quickly pose/refine a 3D mannequin. At the core of the interactive system, we propose a machine-learning model for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a cylinder-person style. Training such a model is challenging because of artist variability, a lack of sketch training data with corresponding ground truth 3D poses, and the high dimensionality of human pose-space. Our unique approach to synthesizing vector graphics training data underpins our integrated ML-and-kinematics system. We validate the system by tightly coupling it with a user interface, and by performing a user study, in addition to quantitative comparisons.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Thu, 15 Dec 22 - ## Keyword: events
### Event-based YOLO Object Detection: Proof of Concept for Forward Perception System
- **Authors:** Waseem Shariff, Muhammad Ali Farooq, Joe Lemley, Peter Corcoran
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.07181
- **Pdf link:** https://arxiv.org/pdf/2212.07181
- **Abstract**
Neuromorphic vision or event vision is an advanced vision technology, where in contrast to the visible camera that outputs pixels, the event vision generates neuromorphic events every time there is a brightness change which exceeds a specific threshold in the field of view (FOV). This study focuses on leveraging neuromorphic event data for roadside object detection. This is a proof of concept towards building artificial intelligence (AI) based pipelines which can be used for forward perception systems for advanced vehicular applications. The focus is on building efficient state-of-the-art object detection networks with better inference results for fast-moving forward perception using an event camera. In this article, the event-simulated A2D2 dataset is manually annotated and trained on two different YOLOv5 networks (small and large variants). To further assess its robustness, single model testing and ensemble model testing are carried out.
### A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Differential Geometry (math.DG)
- **Arxiv link:** https://arxiv.org/abs/2212.07350
- **Pdf link:** https://arxiv.org/pdf/2212.07350
- **Abstract**
Event cameras are emerging vision sensors and their advantages are suitable for various applications such as autonomous robots. Contrast maximization (CMax), which provides state-of-the-art accuracy on motion estimation using events, may suffer from an overfitting problem called event collapse. Prior works are computationally expensive or cannot alleviate the overfitting, which undermines the benefits of the CMax framework. We propose a novel, computationally efficient regularizer based on geometric principles to mitigate event collapse. The experiments show that the proposed regularizer achieves state-of-the-art accuracy results, while its reduced computational complexity makes it two to four times faster than previous approaches. To the best of our knowledge, our regularizer is the only effective solution for event collapse without trading off runtime. We hope our work opens the door for future applications that unlocks the advantages of event cameras.
## Keyword: event camera
### Event-based YOLO Object Detection: Proof of Concept for Forward Perception System
- **Authors:** Waseem Shariff, Muhammad Ali Farooq, Joe Lemley, Peter Corcoran
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.07181
- **Pdf link:** https://arxiv.org/pdf/2212.07181
- **Abstract**
Neuromorphic vision or event vision is an advanced vision technology, where in contrast to the visible camera that outputs pixels, the event vision generates neuromorphic events every time there is a brightness change which exceeds a specific threshold in the field of view (FOV). This study focuses on leveraging neuromorphic event data for roadside object detection. This is a proof of concept towards building artificial intelligence (AI) based pipelines which can be used for forward perception systems for advanced vehicular applications. The focus is on building efficient state-of-the-art object detection networks with better inference results for fast-moving forward perception using an event camera. In this article, the event-simulated A2D2 dataset is manually annotated and trained on two different YOLOv5 networks (small and large variants). To further assess its robustness, single model testing and ensemble model testing are carried out.
### A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Differential Geometry (math.DG)
- **Arxiv link:** https://arxiv.org/abs/2212.07350
- **Pdf link:** https://arxiv.org/pdf/2212.07350
- **Abstract**
Event cameras are emerging vision sensors and their advantages are suitable for various applications such as autonomous robots. Contrast maximization (CMax), which provides state-of-the-art accuracy on motion estimation using events, may suffer from an overfitting problem called event collapse. Prior works are computationally expensive or cannot alleviate the overfitting, which undermines the benefits of the CMax framework. We propose a novel, computationally efficient regularizer based on geometric principles to mitigate event collapse. The experiments show that the proposed regularizer achieves state-of-the-art accuracy results, while its reduced computational complexity makes it two to four times faster than previous approaches. To the best of our knowledge, our regularizer is the only effective solution for event collapse without trading off runtime. We hope our work opens the door for future applications that unlocks the advantages of event cameras.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare
- **Authors:** Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, Josef Sivic
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2212.06870
- **Pdf link:** https://arxiv.org/pdf/2212.06870
- **Abstract**
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### PD-Quant: Post-Training Quantization based on Prediction Difference Metric
- **Authors:** Jiawei Liu, Lin Niu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.07048
- **Pdf link:** https://arxiv.org/pdf/2212.07048
- **Abstract**
As a neural network compression technique, post-training quantization (PTQ) transforms a pre-trained model into a quantized model using a lower-precision data type. However, the prediction accuracy will decrease because of the quantization noise, especially in extremely low-bit settings. How to determine the appropriate quantization parameters (e.g., scaling factors and rounding of weights) is the main problem facing now. Many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization. Using this distance as the metric to optimize the quantization parameters only considers local information. We analyze the problem of minimizing local metrics and indicate that it would not result in optimal quantization parameters. Furthermore, the quantized model suffers from overfitting due to the small number of calibration samples in PTQ. In this paper, we propose PD-Quant to solve the problems. PD-Quant uses the information of differences between network prediction before and after quantization to determine the quantization parameters. To mitigate the overfitting problem, PD-Quant adjusts the distribution of activations in PTQ. Experiments show that PD-Quant leads to better quantization parameters and improves the prediction accuracy of quantized models, especially in low-bit settings. For example, PD-Quant pushes the accuracy of ResNet-18 up to 53.08% and RegNetX-600MF up to 40.92% in weight 2-bit activation 2-bit. The code will be released at https://github.com/hustvl/PD-Quant.
### Image Compression with Product Quantized Masked Image Modeling
- **Authors:** Alaaeldin El-Nouby, Matthew J. Muckley, Karen Ullrich, Ivan Laptev, Jakob Verbeek, Hervé Jégou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.07372
- **Pdf link:** https://arxiv.org/pdf/2212.07372
- **Abstract**
Recent neural compression methods have been based on the popular hyperprior framework. It relies on Scalar Quantization and offers a very strong compression performance. This contrasts from recent advances in image generation and representation learning, where Vector Quantization is more commonly employed. In this work, we attempt to bring these lines of research closer by revisiting vector quantization for image compression. We build upon the VQ-VAE framework and introduce several modifications. First, we replace the vanilla vector quantizer by a product quantizer. This intermediate solution between vector and scalar quantization allows for a much wider set of rate-distortion points: It implicitly defines high-quality quantizers that would otherwise require intractably large codebooks. Second, inspired by the success of Masked Image Modeling (MIM) in the context of self-supervised learning and generative image models, we propose a novel conditional entropy model which improves entropy coding by modelling the co-dependencies of the quantized latent codes. The resulting PQ-MIM model is surprisingly effective: its compression performance on par with recent hyperprior methods. It also outperforms HiFiC in terms of FID and KID metrics when optimized with perceptual losses (e.g. adversarial). Finally, since PQ-MIM is compatible with image generation frameworks, we show qualitatively that it can operate under a hybrid mode between compression and generation, with no further training or finetuning. As a result, we explore the extreme compression regime where an image is compressed into 200 bytes, i.e., less than a tweet.
## Keyword: RAW
### NLIP: Noise-robust Language-Image Pre-training
- **Authors:** Runhui Huang, Yanxin Long, Jianhua Han, Hang Xu, Xiwen Liang, Chunjing Xu, Xiaodan Liang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.07086
- **Pdf link:** https://arxiv.org/pdf/2212.07086
- **Abstract**
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous success on a wide range of downstream tasks, e.g., zero-shot classification, retrieval and image captioning. However, their successes highly rely on the scale and quality of web-crawled data that naturally contain incomplete and noisy information (e.g., wrong or irrelevant content). Existing works either design manual rules to clean data or generate pseudo-targets as auxiliary signals for reducing noise impact, which do not explicitly tackle both the incorrect and incomplete challenges simultaneously. In this paper, to automatically mitigate the impact of noise by solely mining over existing data, we propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion. First, in noise-harmonization scheme, NLIP estimates the noise probability of each pair according to the memorization effect of cross-modal transformers, then adopts noise-adaptive regularization to harmonize the cross-modal alignments with varying degrees. Second, in noise-completion scheme, to enrich the missing object information of text, NLIP injects a concept-conditioned cross-modal decoder to obtain semantic-consistent synthetic captions to complete noisy ones, which uses the retrieved visual concepts (i.e., objects' names) for the corresponding image to guide captioning generation. By collaboratively optimizing noise-harmonization and noise-completion schemes, our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way. Extensive experiments show the significant performance improvements of our NLIP using only 26M data over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot classification datasets, MSCOCO image captioning and zero-shot image-text retrieval tasks.
### Interactive Sketching of Mannequin Poses
- **Authors:** Gizem Unlu, Mohamed Sayed, Gabriel Brostow
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2212.07098
- **Pdf link:** https://arxiv.org/pdf/2212.07098
- **Abstract**
It can be easy and even fun to sketch humans in different poses. In contrast, creating those same poses on a 3D graphics "mannequin" is comparatively tedious. Yet 3D body poses are necessary for various downstream applications. We seek to preserve the convenience of 2D sketching while giving users of different skill levels the flexibility to accurately and more quickly pose / refine a 3D mannequin. At the core of the interactive system, we propose a machine-learning model for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a cylinder-person style. Training such a model is challenging because of artist variability, a lack of sketch training data with corresponding ground truth 3D poses, and the high dimensionality of human pose-space. Our unique approach to synthesizing vector graphics training data underpins our integrated ML-and-kinematics system. We validate the system by tightly coupling it with a user interface, and by performing a user study, in addition to quantitative comparisons.
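The training-data synthesis idea in this abstract can be illustrated with a toy projection step: given known 3D mannequin joint positions, render the 2D keypoints a sketch-style rasterizer could draw, yielding (sketch, pose) training pairs. The function name and camera parameters below are hypothetical, not the paper's setup.

```python
def project_joints(joints_3d, focal=1.0, depth_offset=3.0):
    """Toy pinhole projection: maps 3D mannequin joint positions to 2D
    'sketch' keypoints, loosely in the spirit of synthesizing 2D training
    data from known 3D poses. `focal` and `depth_offset` (camera distance
    along z) are illustrative values, not the paper's configuration."""
    points_2d = []
    for x, y, z in joints_3d:
        depth = z + depth_offset  # camera sits depth_offset units away
        points_2d.append((focal * x / depth, focal * y / depth))
    return points_2d
```

Because the 3D pose is known by construction, every synthesized 2D drawing comes with exact ground truth, sidestepping the lack of annotated sketch data the abstract mentions.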
## Keyword: raw image
There is no result
|
process
|
new submissions for thu dec keyword events event based yolo object detection proof of concept for forward perception system authors waseem shariff muhammad ali farooq joe lemley peter corcoran subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract neuromorphic vision or event vision is an advanced vision technology where in contrast to the visible camera that outputs pixels the event vision generates neuromorphic events every time there is a brightness change which exceeds a specific threshold in the field of view fov this study focuses on leveraging neuromorphic event data for roadside object detection this is a proof of concept towards building artificial intelligence ai based pipelines which can be used for forward perception systems for advanced vehicular applications the focus is on building efficient state of the art object detection networks with better inference results for fast moving forward perception using an event camera in this article the event simulated dataset is manually annotated and trained on two different networks small and large variants to further assess its robustness single model testing and ensemble model testing are carried out a fast geometric regularizer to mitigate event collapse in the contrast maximization framework authors shintaro shiba yoshimitsu aoki guillermo gallego subjects computer vision and pattern recognition cs cv robotics cs ro differential geometry math dg arxiv link pdf link abstract event cameras are emerging vision sensors and their advantages are suitable for various applications such as autonomous robots contrast maximization cmax which provides state of the art accuracy on motion estimation using events may suffer from an overfitting problem called event collapse prior works are computationally expensive or cannot alleviate the overfitting which undermines the benefits of the cmax framework we propose a novel computationally efficient regularizer based on geometric principles to 
mitigate event collapse the experiments show that the proposed regularizer achieves state of the art accuracy results while its reduced computational complexity makes it two to four times faster than previous approaches to the best of our knowledge our regularizer is the only effective solution for event collapse without trading off runtime we hope our work opens the door for future applications that unlocks the advantages of event cameras keyword event camera event based yolo object detection proof of concept for forward perception system authors waseem shariff muhammad ali farooq joe lemley peter corcoran subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract neuromorphic vision or event vision is an advanced vision technology where in contrast to the visible camera that outputs pixels the event vision generates neuromorphic events every time there is a brightness change which exceeds a specific threshold in the field of view fov this study focuses on leveraging neuromorphic event data for roadside object detection this is a proof of concept towards building artificial intelligence ai based pipelines which can be used for forward perception systems for advanced vehicular applications the focus is on building efficient state of the art object detection networks with better inference results for fast moving forward perception using an event camera in this article the event simulated dataset is manually annotated and trained on two different networks small and large variants to further assess its robustness single model testing and ensemble model testing are carried out a fast geometric regularizer to mitigate event collapse in the contrast maximization framework authors shintaro shiba yoshimitsu aoki guillermo gallego subjects computer vision and pattern recognition cs cv robotics cs ro differential geometry math dg arxiv link pdf link abstract event cameras are emerging vision sensors and their advantages are suitable for various 
applications such as autonomous robots contrast maximization cmax which provides state of the art accuracy on motion estimation using events may suffer from an overfitting problem called event collapse prior works are computationally expensive or cannot alleviate the overfitting which undermines the benefits of the cmax framework we propose a novel computationally efficient regularizer based on geometric principles to mitigate event collapse the experiments show that the proposed regularizer achieves state of the art accuracy results while its reduced computational complexity makes it two to four times faster than previous approaches to the best of our knowledge our regularizer is the only effective solution for event collapse without trading off runtime we hope our work opens the door for future applications that unlocks the advantages of event cameras keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp megapose pose estimation of novel objects via render compare authors yann labbé lucas manuelli arsalan mousavian stephen tyree stan birchfield jonathan tremblay justin carpentier mathieu aubry dieter fox josef sivic subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract we introduce megapose a method to estimate the pose of novel objects that is objects unseen during training at inference time the method only assumes knowledge of i a region of interest displaying the object in the image and ii a cad model of the observed object the contributions of this work are threefold first we present a pose refiner based on a render compare strategy which can be applied to novel objects the shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object s cad model second we introduce a novel approach for coarse pose estimation which leverages a 
network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner third we introduce a large scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects we train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks our approach achieves state of the art performance on the modelnet and ycb video datasets an extensive evaluation on the core datasets of the bop challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training code dataset and trained models are available on the project page keyword image signal processing there is no result keyword image signal process there is no result keyword compression pd quant post training quantization based on prediction difference metric authors jiawei liu lin niu zhihang yuan dawei yang xinggang wang wenyu liu subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract as a neural network compression technique post training quantization ptq transforms a pre trained model into a quantized model using a lower precision data type however the prediction accuracy will decrease because of the quantization noise especially in extremely low bit settings how to determine the appropriate quantization parameters e g scaling factors and rounding of weights is the main problem facing now many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization using this distance as the metric to optimize the quantization parameters only considers local information we analyze the problem 
of minimizing local metrics and indicate that it would not result in optimal quantization parameters furthermore the quantized model suffers from overfitting due to the small number of calibration samples in ptq in this paper we propose pd quant to solve the problems pd quant uses the information of differences between network prediction before and after quantization to determine the quantization parameters to mitigate the overfitting problem pd quant adjusts the distribution of activations in ptq experiments show that pd quant leads to better quantization parameters and improves the prediction accuracy of quantized models especially in low bit settings for example pd quant pushes the accuracy of resnet up to and regnetx up to in weight bit activation bit the code will be released at image compression with product quantized masked image modeling authors alaaeldin el nouby matthew j muckley karen ullrich ivan laptev jakob verbeek hervé jégou subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract recent neural compression methods have been based on the popular hyperprior framework it relies on scalar quantization and offers a very strong compression performance this contrasts from recent advances in image generation and representation learning where vector quantization is more commonly employed in this work we attempt to bring these lines of research closer by revisiting vector quantization for image compression we build upon the vq vae framework and introduce several modifications first we replace the vanilla vector quantizer by a product quantizer this intermediate solution between vector and scalar quantization allows for a much wider set of rate distortion points it implicitly defines high quality quantizers that would otherwise require intractably large codebooks second inspired by the success of masked image modeling mim in the context of self supervised learning and generative image models we 
propose a novel conditional entropy model which improves entropy coding by modelling the co dependencies of the quantized latent codes the resulting pq mim model is surprisingly effective its compression performance on par with recent hyperprior methods it also outperforms hific in terms of fid and kid metrics when optimized with perceptual losses e g adversarial finally since pq mim is compatible with image generation frameworks we show qualitatively that it can operate under a hybrid mode between compression and generation with no further training or finetuning as a result we explore the extreme compression regime where an image is compressed into bytes i e less than a tweet keyword raw nlip noise robust language image pre training authors runhui huang yanxin long jianhua han hang xu xiwen liang chunjing xu xiaodan liang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract large scale cross modal pre training paradigms have recently shown ubiquitous success on a wide range of downstream tasks e g zero shot classification retrieval and image captioning however their successes highly rely on the scale and quality of web crawled data that naturally contain incomplete and noisy information e g wrong or irrelevant content existing works either design manual rules to clean data or generate pseudo targets as auxiliary signals for reducing noise impact which do not explicitly tackle both the incorrect and incomplete challenges simultaneously in this paper to automatically mitigate the impact of noise by solely mining over existing data we propose a principled noise robust language image pre training framework nlip to stabilize pre training via two schemes noise harmonization and noise completion first in noise harmonization scheme nlip estimates the noise probability of each pair according to the memorization effect of cross modal transformers then adopts noise adaptive regularization to harmonize the cross modal alignments with varying 
degrees second in noise completion scheme to enrich the missing object information of text nlip injects a concept conditioned cross modal decoder to obtain semantic consistent synthetic captions to complete noisy ones which uses the retrieved visual concepts i e objects names for the corresponding image to guide captioning generation by collaboratively optimizing noise harmonization and noise completion schemes our nlip can alleviate the common noise effects during image text pre training in a more efficient way extensive experiments show the significant performance improvements of our nlip using only data over existing pre trained models e g clip filip and blip on zero shot classification datasets mscoco image captioning and zero shot image text retrieval tasks interactive sketching of mannequin poses authors gizem unlu mohamed sayed gabriel brostow subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract it can be easy and even fun to sketch humans in different poses in contrast creating those same poses on a graphics mannequin is comparatively tedious yet body poses are necessary for various downstream applications we seek to preserve the convenience of sketching while giving users of different skill levels the flexibility to accurately and more quickly pose slash refine a mannequin at the core of the interactive system we propose a machine learning model for inferring the pose of a cg mannequin from sketches of humans drawn in a cylinder person style training such a model is challenging because of artist variability a lack of sketch training data with corresponding ground truth poses and the high dimensionality of human pose space our unique approach to synthesizing vector graphics training data underpins our integrated ml and kinematics system we validate the system by tightly coupling it with a user interface and by performing a user study in addition to quantitative comparisons keyword raw image there is no result
| 1
|
5,278
| 8,067,978,025
|
IssuesEvent
|
2018-08-05 14:52:21
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
closed
|
Release 2.1.0
|
RxJava2.x release process
|
**Release notes**:
- bumped project dependencies - PR #292, Commit: https://github.com/pwittchen/ReactiveNetwork/commit/7e4cd4b7e39931d6cceeb0b674ccb506d38f91e4
- RxJava: 2.1.16 -> 2.2.0
- Mockito Core: 2.19.1 -> 2.21.0
- NullAway: 0.4.7 -> 0.5.1
- Robolectric: 3.1.2 -> 4.0-alpha-3
**Things To Do**:
- [x] update JavaDoc on `gh-pages` (not needed in this release)
- [x] update documentation on `gh-pages` (not needed in this release)
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] update docs on `gh-pages` after updating `README.md`
- [x] create new GitHub release
|
1.0
|
Release 2.1.0 - **Release notes**:
- bumped project dependencies - PR #292, Commit: https://github.com/pwittchen/ReactiveNetwork/commit/7e4cd4b7e39931d6cceeb0b674ccb506d38f91e4
- RxJava: 2.1.16 -> 2.2.0
- Mockito Core: 2.19.1 -> 2.21.0
- NullAway: 0.4.7 -> 0.5.1
- Robolectric: 3.1.2 -> 4.0-alpha-3
**Things To Do**:
- [x] update JavaDoc on `gh-pages` (not needed in this release)
- [x] update documentation on `gh-pages` (not needed in this release)
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] update docs on `gh-pages` after updating `README.md`
- [x] create new GitHub release
|
process
|
release release notes bumped project dependencies pr commit rxjava mockito core nullaway robolectric alpha things to do update javadoc on gh pages not needed in this release update documentation on gh pages not needed in this release bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md update docs on gh pages after updating readme md create new github release
| 1
|
41,423
| 10,711,665,475
|
IssuesEvent
|
2019-10-25 07:06:23
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Building AOT snapshot in release mode (ios-release)... Bad CPU type in executable Dart snapshot generator failed with exit code 1 . Building App.framework for arm64..
|
a: build severe: crash t: xcode tool waiting for customer response ⌘ platform-mac ⌺ platform-ios
|
<!--
The application works perfectly in debug mode, but I cannot build a release. `flutter build ios --release` throws an error: === BUILD TARGET Runner OF PROJECT Runner WITH CONFIGURATION Release ===
Building AOT snapshot in release mode (ios-release)...
arch: posix_spawnp:
/Users/xxx/Downloads/flutter/bin/cache/artifacts/engine/ios-releas
e/gen_snapshot: Bad CPU type in executable
Dart snapshot generator failed with exit code 1
Building App.framework for arm64...
Oops; flutter has exited unexpectedly.
-->
## Steps to Reproduce
1. I can build and run the app on my device.
2. `flutter build ios --release` throws exceptions, so I cannot build an archive for release.
## Logs
ProcessException: ProcessException: Process "xcrun" exited abnormally:
clang: error: no such file or directory: 'build/aot/arm64/snapshot_assembly.S'
clang: error: no input files
Command: xcrun cc -arch arm64 -miphoneos-version-min=8.0 -c build/aot/arm64/snapshot_assembly.S -o build/aot/arm64/snapshot_assembly.o
```
#0 runCheckedAsync (package:flutter_tools/src/base/process.dart:255:5)
<asynchronous suspension>
#1 Xcode.cc (package:flutter_tools/src/ios/mac.dart:231:12)
#2 AOTSnapshotter._buildIosFramework (package:flutter_tools/src/base/build.dart:253:49)
<asynchronous suspension>
#3 AOTSnapshotter.build (package:flutter_tools/src/base/build.dart:225:38)
<asynchronous suspension>
#4 BuildAotCommand.runCommand.<anonymous closure> (package:flutter_tools/src/commands/build_aot.dart:112:44)
#5 __InternalLinkedHashMap&_HashVMBase&MapMixin&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:367:8)
#6 BuildAotCommand.runCommand (package:flutter_tools/src/commands/build_aot.dart:111:19)
#7 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:77:64)
#8 _rootRunUnary (dart:async/zone.dart:1132:38)
#9 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#10 _FutureListener.handleValue (dart:async/future_impl.dart:126:18)
#11 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:639:45)
#12 Future._propagateToListeners (dart:async/future_impl.dart:668:32)
#13 Future._complete (dart:async/future_impl.dart:473:7)
#14 _SyncCompleter.complete (dart:async/future_impl.dart:51:12)
#15 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:28:18)
#16 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:294:13)
#17 AOTSnapshotter.compileKernel (package:flutter_tools/src/base/build.dart)
#18 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:77:64)
#19 _rootRunUnary (dart:async/zone.dart:1132:38)
#20 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#21 _FutureListener.handleValue (dart:async/future_impl.dart:126:18)
#22 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:639:45)
#23 Future._propagateToListeners (dart:async/future_impl.dart:668:32)
#24 Future._complete (dart:async/future_impl.dart:473:7)
#25 _SyncCompleter.complete (dart:async/future_impl.dart:51:12)
#26 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:28:18)
#27 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:294:13)
#28 _LocalFile&LocalFileSystemEntity&ForwardingFile.writeAsString (package:file/src/forwarding/forwarding_file.dart)
#29 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:77:64)
#30 _rootRunUnary (dart:async/zone.dart:1132:38)
#31 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#32 _FutureListener.handleValue (dart:async/future_impl.dart:126:18)
#33 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:639:45)
#34 Future._propagateToListeners (dart:async/future_impl.dart:668:32)
#35 Future._completeWithValue (dart:async/future_impl.dart:483:5)
#36 Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:513:7)
#37 _rootRun (dart:async/zone.dart:1124:13)
#38 _CustomZone.run (dart:async/zone.dart:1021:19)
#39 _CustomZone.bindCallback.<anonymous closure> (dart:async/zone.dart:947:23)
#40 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#41 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#42 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:115:13)
#43 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:5)
```
## flutter doctor -v
```
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Mac OS X 10.14.3 18D109, locale en-US)
• Flutter version 1.5.4-hotfix.2 at /Users/xxx/Downloads/flutter
• Framework revision 7a4c33425d (10 weeks ago), 2019-04-29 11:05:24 -0700
• Engine revision 52c7a1e849
• Dart version 2.3.0 (build 2.3.0-dev.0.5 a1668566e5)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/xxx/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/xxx/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1)
• Xcode at /Users/xxx/Downloads/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• ios-deploy 1.9.4
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 37.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] Connected device (1 available)
• edheba macbook’s iPhone • 00008020-00096DC1029A002E • ios • iOS 12.2
• No issues found!
```
```
|
1.0
|
Building AOT snapshot in release mode (ios-release)... Bad CPU type in executable Dart snapshot generator failed with exit code 1 . Building App.framework for arm64.. - <!--
The application works perfectly in debug mode, but I cannot build a release. `flutter build ios --release` throws an error: === BUILD TARGET Runner OF PROJECT Runner WITH CONFIGURATION Release ===
Building AOT snapshot in release mode (ios-release)...
arch: posix_spawnp:
/Users/xxx/Downloads/flutter/bin/cache/artifacts/engine/ios-releas
e/gen_snapshot: Bad CPU type in executable
Dart snapshot generator failed with exit code 1
Building App.framework for arm64...
Oops; flutter has exited unexpectedly.
-->
## Steps to Reproduce
1. I can build and run the app on my device.
2. `flutter build ios --release` throws exceptions, so I cannot build an archive for release.
## Logs
ProcessException: ProcessException: Process "xcrun" exited abnormally:
clang: error: no such file or directory: 'build/aot/arm64/snapshot_assembly.S'
clang: error: no input files
Command: xcrun cc -arch arm64 -miphoneos-version-min=8.0 -c build/aot/arm64/snapshot_assembly.S -o build/aot/arm64/snapshot_assembly.o
```
#0 runCheckedAsync (package:flutter_tools/src/base/process.dart:255:5)
<asynchronous suspension>
#1 Xcode.cc (package:flutter_tools/src/ios/mac.dart:231:12)
#2 AOTSnapshotter._buildIosFramework (package:flutter_tools/src/base/build.dart:253:49)
<asynchronous suspension>
#3 AOTSnapshotter.build (package:flutter_tools/src/base/build.dart:225:38)
<asynchronous suspension>
#4 BuildAotCommand.runCommand.<anonymous closure> (package:flutter_tools/src/commands/build_aot.dart:112:44)
#5 __InternalLinkedHashMap&_HashVMBase&MapMixin&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:367:8)
#6 BuildAotCommand.runCommand (package:flutter_tools/src/commands/build_aot.dart:111:19)
#7 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:77:64)
#8 _rootRunUnary (dart:async/zone.dart:1132:38)
#9 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#10 _FutureListener.handleValue (dart:async/future_impl.dart:126:18)
#11 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:639:45)
#12 Future._propagateToListeners (dart:async/future_impl.dart:668:32)
#13 Future._complete (dart:async/future_impl.dart:473:7)
#14 _SyncCompleter.complete (dart:async/future_impl.dart:51:12)
#15 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:28:18)
#16 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:294:13)
#17 AOTSnapshotter.compileKernel (package:flutter_tools/src/base/build.dart)
#18 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:77:64)
#19 _rootRunUnary (dart:async/zone.dart:1132:38)
#20 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#21 _FutureListener.handleValue (dart:async/future_impl.dart:126:18)
#22 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:639:45)
#23 Future._propagateToListeners (dart:async/future_impl.dart:668:32)
#24 Future._complete (dart:async/future_impl.dart:473:7)
#25 _SyncCompleter.complete (dart:async/future_impl.dart:51:12)
#26 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:28:18)
#27 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:294:13)
#28 _LocalFile&LocalFileSystemEntity&ForwardingFile.writeAsString (package:file/src/forwarding/forwarding_file.dart)
#29 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:77:64)
#30 _rootRunUnary (dart:async/zone.dart:1132:38)
#31 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#32 _FutureListener.handleValue (dart:async/future_impl.dart:126:18)
#33 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:639:45)
#34 Future._propagateToListeners (dart:async/future_impl.dart:668:32)
#35 Future._completeWithValue (dart:async/future_impl.dart:483:5)
#36 Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:513:7)
#37 _rootRun (dart:async/zone.dart:1124:13)
#38 _CustomZone.run (dart:async/zone.dart:1021:19)
#39 _CustomZone.bindCallback.<anonymous closure> (dart:async/zone.dart:947:23)
#40 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#41 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#42 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:115:13)
#43 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:5)
```
## flutter doctor -v
```
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Mac OS X 10.14.3 18D109, locale en-US)
• Flutter version 1.5.4-hotfix.2 at /Users/xxx/Downloads/flutter
• Framework revision 7a4c33425d (10 weeks ago), 2019-04-29 11:05:24 -0700
• Engine revision 52c7a1e849
• Dart version 2.3.0 (build 2.3.0-dev.0.5 a1668566e5)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/xxx/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/xxx/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1)
• Xcode at /Users/xxx/Downloads/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• ios-deploy 1.9.4
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 37.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] Connected device (1 available)
• edheba macbook’s iPhone • 00008020-00096DC1029A002E • ios • iOS 12.2
• No issues found!
```
```
|
non_process
|
building aot snapshot in release mode ios release bad cpu type in executable dart snapshot generator failed with exit code building app framework for the application works perfectly on debug mode but cannot build release flutter build ios release throws error build target runner of project runner with configuration release building aot snapshot in release mode ios release arch posix spawnp users xxx downloads flutter bin cache artifacts engine ios releas e gen snapshot bad cpu type in executable dart snapshot generator failed with exit code building app framework for oops flutter has exited unexpectedly steps to reproduce i can build and run the app in my device flutter build ios release throwing exceptions and hence i cannot build archive for release logs processexception processexception process xcrun exited abnormally clang error no such file or directory build aot snapshot assembly s clang error no input files command xcrun cc arch miphoneos version min c build aot snapshot assembly s o build aot snapshot assembly o runcheckedasync package flutter tools src base process dart xcode cc package flutter tools src ios mac dart aotsnapshotter buildiosframework package flutter tools src base build dart aotsnapshotter build package flutter tools src base build dart buildaotcommand runcommand package flutter tools src commands build aot dart internallinkedhashmap hashvmbase mapmixin linkedhashmapmixin foreach dart collection patch compact hash dart buildaotcommand runcommand package flutter tools src commands build aot dart asyncthenwrapperhelper dart async patch async patch dart rootrununary dart async zone dart customzone rununary dart async zone dart futurelistener handlevalue dart async future impl dart future propagatetolisteners handlevaluecallback dart async future impl dart future propagatetolisteners dart async future impl dart future complete dart async future impl dart synccompleter complete dart async future impl dart asyncawaitcompleter complete dart async 
patch async patch dart completeonasyncreturn dart async patch async patch dart aotsnapshotter compilekernel package flutter tools src base build dart asyncthenwrapperhelper dart async patch async patch dart rootrununary dart async zone dart customzone rununary dart async zone dart futurelistener handlevalue dart async future impl dart future propagatetolisteners handlevaluecallback dart async future impl dart future propagatetolisteners dart async future impl dart future complete dart async future impl dart synccompleter complete dart async future impl dart asyncawaitcompleter complete dart async patch async patch dart completeonasyncreturn dart async patch async patch dart localfile localfilesystementity forwardingfile writeasstring package file src forwarding forwarding file dart asyncthenwrapperhelper dart async patch async patch dart rootrununary dart async zone dart customzone rununary dart async zone dart futurelistener handlevalue dart async future impl dart future propagatetolisteners handlevaluecallback dart async future impl dart future propagatetolisteners dart async future impl dart future completewithvalue dart async future impl dart future asynccomplete dart async future impl dart rootrun dart async zone dart customzone run dart async zone dart customzone bindcallback dart async zone dart microtaskloop dart async schedule microtask dart startmicrotaskloop dart async schedule microtask dart runpendingimmediatecallback dart isolate patch isolate patch dart rawreceiveportimpl handlemessage dart isolate patch isolate patch dart flutter doctor v flutter channel stable hotfix on mac os x locale en us • flutter version hotfix at users xxx downloads flutter • framework revision weeks ago • engine revision • dart version build dev android toolchain develop for android devices android sdk version • android sdk at users xxx library android sdk • android ndk location not configured optional useful for native profiling support • platform android build tools • 
android home users xxx library android sdk • java binary at applications android studio app contents jre jdk contents home bin java • java version openjdk runtime environment build release • all android licenses accepted ios toolchain develop for ios devices xcode • xcode at users xxx downloads xcode app contents developer • xcode build version • ios deploy • cocoapods version android studio version • android studio at applications android studio app contents • flutter plugin version • dart plugin version • java version openjdk runtime environment build release connected device available • edheba macbook’s iphone • • ios • ios • no issues found
| 0
|
53,614
| 13,851,433,822
|
IssuesEvent
|
2020-10-15 03:59:01
|
MarenCarlo/Cursalia
|
https://api.github.com/repos/MarenCarlo/Cursalia
|
opened
|
CVE-2016-10735 (Medium) detected in bootstrap-3.3.7.min.js
|
security vulnerability
|
## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: Cursalia/js/Responsive-2.2.5/examples/display-types/bootstrap-modal.html</p>
<p>Path to vulnerable library: Cursalia/js/Responsive-2.2.5/examples/display-types/bootstrap-modal.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MarenCarlo/Cursalia/commit/c33515a5a4ce2d17379f0cba040bf039dbc10ab7">c33515a5a4ce2d17379f0cba040bf039dbc10ab7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/issues/20184">https://github.com/twbs/bootstrap/issues/20184</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-10735 (Medium) detected in bootstrap-3.3.7.min.js - ## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: Cursalia/js/Responsive-2.2.5/examples/display-types/bootstrap-modal.html</p>
<p>Path to vulnerable library: Cursalia/js/Responsive-2.2.5/examples/display-types/bootstrap-modal.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MarenCarlo/Cursalia/commit/c33515a5a4ce2d17379f0cba040bf039dbc10ab7">c33515a5a4ce2d17379f0cba040bf039dbc10ab7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/issues/20184">https://github.com/twbs/bootstrap/issues/20184</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file cursalia js responsive examples display types bootstrap modal html path to vulnerable library cursalia js responsive examples display types bootstrap modal html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
205,501
| 23,340,860,765
|
IssuesEvent
|
2022-08-09 13:56:03
|
ArchiMageAlex/singularity-core
|
https://api.github.com/repos/ArchiMageAlex/singularity-core
|
closed
|
CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: singularity-core/ui/package.json</p>
<p>Path to vulnerable library: singularity-core/ui/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- core-7.8.7.tgz (Root Library)
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ArchiMageAlex/singularity-core/commit/4a0156c2c500876acd173b3a5899360eb37ad598">4a0156c2c500876acd173b3a5899360eb37ad598</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - autoclosed - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: singularity-core/ui/package.json</p>
<p>Path to vulnerable library: singularity-core/ui/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- core-7.8.7.tgz (Root Library)
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ArchiMageAlex/singularity-core/commit/4a0156c2c500876acd173b3a5899360eb37ad598">4a0156c2c500876acd173b3a5899360eb37ad598</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash tgz autoclosed cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file singularity core ui package json path to vulnerable library singularity core ui node modules lodash package json dependency hierarchy core tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
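The lodash row above describes "Command Injection via the template function" (CVE-2021-23337): the template engine spliced caller-controlled strings into code that it then executed. A small illustrative Python engine shows the same class of flaw; this is a sketch of the vulnerability pattern, not lodash's actual implementation:

```python
import re

def render(template: str, context: dict) -> str:
    # naive template engine: every ${...} placeholder is handed to eval(),
    # so any expression smuggled into a placeholder runs with full
    # interpreter access -- the same injection class as the lodash
    # _.template issue (illustrative only)
    return re.sub(
        r"\$\{(.+?)\}",
        lambda m: str(eval(m.group(1), {}, dict(context))),
        template,
    )

render("Hello ${name}", {"name": "Ada"})    # intended use -> "Hello Ada"
render("${__import__('os').getpid()}", {})  # attacker payload: executes code
```

The fixed lodash (4.17.21) sanitizes what reaches the generated function source; the general lesson the CVE row encodes is that template input must be treated as code, not data, whenever the engine compiles it.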
|
6,981
| 10,131,463,546
|
IssuesEvent
|
2019-08-01 19:36:37
|
toggl/mobileapp
|
https://api.github.com/repos/toggl/mobileapp
|
closed
|
Create template to submit new full translation requests
|
process
|
This issue will be used to submit new full translation requests.
Our users will be able to know the progress of these translations, based on this issue, the corresponding PR and its position on the translation project (or wherever else we might decide to keep these issues and PRs)
|
1.0
|
Create template to submit new full translation requests - This issue will be used to submit new full translation requests.
Our users will be able to know the progress of these translations, based on this issue, the corresponding PR and its position on the translation project (or wherever else we might decide to keep these issues and PRs)
|
process
|
create template to submit new full translation requests this issue will be used to submit new full translation requests our users will be able to know the progress of these translations based on this issue the corresponding pr and its position on the translation project or wherever else we might decide to keep these issues and prs
| 1
|
29,475
| 2,716,132,321
|
IssuesEvent
|
2015-04-10 17:13:48
|
CruxFramework/crux
|
https://api.github.com/repos/CruxFramework/crux
|
closed
|
Unecessary variable innerView in DialogViewContainer
|
bug Component-UI imported Milestone-M14-C3 Module-CruxWidgets Priority-Medium TargetVersion-5.1.1
|
_From [samuel@cruxframework.org](https://code.google.com/u/samuel@cruxframework.org/) on June 12, 2014 14:02:42_
This variable is generating an error when we first try to load a view.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=402_
|
1.0
|
Unecessary variable innerView in DialogViewContainer - _From [samuel@cruxframework.org](https://code.google.com/u/samuel@cruxframework.org/) on June 12, 2014 14:02:42_
This variable is generating an error when we first try to load a view.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=402_
|
non_process
|
unecessary variable innerview in dialogviewcontainer from on june this variable is generating an error when we first try to load a view original issue
| 0
|
196,332
| 22,441,338,283
|
IssuesEvent
|
2022-06-21 01:40:21
|
nekottyo/tatsuya_fujiwara
|
https://api.github.com/repos/nekottyo/tatsuya_fujiwara
|
closed
|
CVE-2021-23337 (High) detected in lodash-4.17.15.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: tatsuya_fujiwara/package.json</p>
<p>Path to vulnerable library: tatsuya_fujiwara/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- clasp-2.3.0.tgz (Root Library)
- inquirer-7.1.0.tgz
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nekottyo/tatsuya_fujiwara/commit/6c599e5c9aded0d08548cf6cda8ad1fa1ef1103c">6c599e5c9aded0d08548cf6cda8ad1fa1ef1103c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23337 (High) detected in lodash-4.17.15.tgz - autoclosed - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: tatsuya_fujiwara/package.json</p>
<p>Path to vulnerable library: tatsuya_fujiwara/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- clasp-2.3.0.tgz (Root Library)
- inquirer-7.1.0.tgz
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nekottyo/tatsuya_fujiwara/commit/6c599e5c9aded0d08548cf6cda8ad1fa1ef1103c">6c599e5c9aded0d08548cf6cda8ad1fa1ef1103c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash tgz autoclosed cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tatsuya fujiwara package json path to vulnerable library tatsuya fujiwara node modules lodash dependency hierarchy clasp tgz root library inquirer tgz x lodash tgz vulnerable library found in head commit a href vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
4,942
| 3,898,482,029
|
IssuesEvent
|
2016-04-17 04:02:18
|
lionheart/openradar-mirror
|
https://api.github.com/repos/lionheart/openradar-mirror
|
opened
|
14594033: Springboard animation after unlock takes too long
|
classification:ui/usability reproducible:always status:open
|
#### Description
Summary:
After unlocking the phone the app icons fly in in a staggered fashion. As long as this animation has not
completed the user is blocked from swiping left/right or opening an app. In this situation I find myself
regularly tapping an app icon twice before it reacts, because the animation hasn't finished yet.
Users develop strong muscle memory for often used apps and therefore can tap on the right spot
quickly. The duration of the animation could get in their way when trying to quickly unlock the phone and
launch an app.
Steps to Reproduce:
1. Navigate to the home screen
2. Put the phone to sleep
3. Wake the phone
4. Slide to unlock
5. Tap an app icon right away before the animation has finished
Expected Results:
The app should launch on the first tap.
Actual Results:
The home screen is unresponsive until the animation is finished
Regression:
-
Notes:
-
-
Product Version: 7.0 (11A4435d)
Created: 2013-07-30T20:14:37.440242
Originated: 2013-07-30T00:00:00
Open Radar Link: http://www.openradar.me/14594033
|
True
|
14594033: Springboard animation after unlock takes too long - #### Description
Summary:
After unlocking the phone the app icons fly in in a staggered fashion. As long as this animation has not
completed the user is blocked from swiping left/right or opening an app. In this situation I find myself
regularly tapping an app icon twice before it reacts, because the animation hasn't finished yet.
Users develop strong muscle memory for often used apps and therefore can tap on the right spot
quickly. The duration of the animation could get in their way when trying to quickly unlock the phone and
launch an app.
Steps to Reproduce:
1. Navigate to the home screen
2. Put the phone to sleep
3. Wake the phone
4. Slide to unlock
5. Tap an app icon right away before the animation has finished
Expected Results:
The app should launch on the first tap.
Actual Results:
The home screen is unresponsive until the animation is finished
Regression:
-
Notes:
-
-
Product Version: 7.0 (11A4435d)
Created: 2013-07-30T20:14:37.440242
Originated: 2013-07-30T00:00:00
Open Radar Link: http://www.openradar.me/14594033
|
non_process
|
springboard animation after unlock takes too long description summary after unlocking the phone the app icons fly in in a staggered fashion as long as this animation has not completed the user is blocked from swiping left right or opening an app in this situation i find myself regularly tapping an app icon twice before it reacts because the animation hasn t finished yet users develop strong muscle memory for often used apps and therefore can tap on the right spot quickly the duration of the animation could get in their way when trying to quickly unlock the phone and launch an app steps to reproduce navigate to the home screen put the phone to sleep wake the phone slide to unlock tap an app icon right away before the animation has finished expected results the app should launch on the first tap actual results the home screen is unresponsive until the animation is finished regression notes product version created originated open radar link
| 0
|
6,405
| 2,842,673,992
|
IssuesEvent
|
2015-05-28 10:44:46
|
grails/grails-core
|
https://api.github.com/repos/grails/grails-core
|
closed
|
GRAILS-11365: inList broken for a list of domain objects in a unit test
|
Blocker Bug GORM In Progress Testing
|
Original Reporter: longwa
Environment: Not Specified
Version: 2.3.8
Migrated From: http://jira.grails.org/browse/GRAILS-11365
In a unit test, using inList to search for a list of domain objects is broken starting with 2.3.8.
{code}
class Author {
String name
static hasMany = [books: Book]
static constraints = {
}
}
{code}
{code}
class Book {
String name
static belongsTo = [author: Author]
static constraints = {
}
}
{code}
Given this setup:
{code}
def author = new Author(name: "Aaron")
author.addToBooks(name: "Twilight")
author.addToBooks(name: "Harry Potter")
author.save(flush: true, failOnError: true)
{code}
This query will fail to return any rows in 2.3.8. In 2.3.7, it works fine.
{code}
def books = Book.withCriteria {
inList 'author', Author.list()
}
{code}
Attached is a bug report with the failing scenario.
|
1.0
|
GRAILS-11365: inList broken for a list of domain objects in a unit test -
Original Reporter: longwa
Environment: Not Specified
Version: 2.3.8
Migrated From: http://jira.grails.org/browse/GRAILS-11365
In a unit test, using inList to search for a list of domain objects is broken starting with 2.3.8.
{code}
class Author {
String name
static hasMany = [books: Book]
static constraints = {
}
}
{code}
{code}
class Book {
String name
static belongsTo = [author: Author]
static constraints = {
}
}
{code}
Given this setup:
{code}
def author = new Author(name: "Aaron")
author.addToBooks(name: "Twilight")
author.addToBooks(name: "Harry Potter")
author.save(flush: true, failOnError: true)
{code}
This query will fail to return any rows in 2.3.8. In 2.3.7, it works fine.
{code}
def books = Book.withCriteria {
inList 'author', Author.list()
}
{code}
Attached is a bug report with the failing scenario.
|
non_process
|
grails inlist broken for a list of domain objects in a unit test original reporter longwa environment not specified version migrated from in a unit test using inlist to search for a list of domain objects is broken starting with code class author string name static hasmany static constraints code code class book string name static belongsto static constraints code given this setup code def author new author name aaron author addtobooks name twilight author addtobooks name harry potter author save flush true failonerror true code this query will fail to return any rows in in it works fine code def books book withcriteria inlist author author list code attached is a bug report with the failing scenario
| 0
|
201,813
| 15,814,086,953
|
IssuesEvent
|
2021-04-05 08:51:47
|
asiisii/Fitness-Tracker
|
https://api.github.com/repos/asiisii/Fitness-Tracker
|
closed
|
Charts are not updating
|
bug documentation
|
When the date or week or the user name is changed, charts need to remove the orignal data, and update it with new data.
|
1.0
|
Charts are not updating - When the date or week or the user name is changed, charts need to remove the orignal data, and update it with new data.
|
non_process
|
charts are not updating when the date or week or the user name is changed charts need to remove the orignal data and update it with new data
| 0
|
292,106
| 21,951,218,444
|
IssuesEvent
|
2022-05-24 08:06:34
|
strimzi/strimzi-kafka-operator
|
https://api.github.com/repos/strimzi/strimzi-kafka-operator
|
closed
|
CA Certificate renewal docs missing step to add old CA cert to secret
|
documentation question
|
**Suggestion / Problem**
I have been testing the CA Certificate renewal process using custom CA certificate, and found that the documentation appears to be missing a step. The Renewing your own CA certificates, https://strimzi.io/docs/operators/latest/configuring.html#renewing-your-own-ca-certificates-str, details the following actions:
- Copy the base64 encoded certificate over the top of the existing ca.crt value.
- Update the ca-cert-generation annotation.
When I did this I found that the zookeeper and kafka pods failed to roll, saying that they couldn't find a valid CA Certificate.
I found that I needed to have the old CA certificate defined in the CA cert secret as well as the new certificate. This is the documented in the Replacing Private Keys section of the documentation.
**Documentation Link**
I believe that this section of the documentation, https://strimzi.io/docs/operators/latest/configuring.html#renewing-your-own-ca-certificates-str, needs a step prior to the existing step 3, that says something similar to step 2b of https://strimzi.io/docs/operators/latest/configuring.html#proc-replacing-your-own-private-keys-str
|
1.0
|
CA Certificate renewal docs missing step to add old CA cert to secret - **Suggestion / Problem**
I have been testing the CA Certificate renewal process using custom CA certificate, and found that the documentation appears to be missing a step. The Renewing your own CA certificates, https://strimzi.io/docs/operators/latest/configuring.html#renewing-your-own-ca-certificates-str, details the following actions:
- Copy the base64 encoded certificate over the top of the existing ca.crt value.
- Update the ca-cert-generation annotation.
When I did this I found that the zookeeper and kafka pods failed to roll, saying that they couldn't find a valid CA Certificate.
I found that I needed to have the old CA certificate defined in the CA cert secret as well as the new certificate. This is the documented in the Replacing Private Keys section of the documentation.
**Documentation Link**
I believe that this section of the documentation, https://strimzi.io/docs/operators/latest/configuring.html#renewing-your-own-ca-certificates-str, needs a step prior to the existing step 3, that says something similar to step 2b of https://strimzi.io/docs/operators/latest/configuring.html#proc-replacing-your-own-private-keys-str
|
non_process
|
ca certificate renewal docs missing step to add old ca cert to secret suggestion problem i have been testing the ca certificate renewal process using custom ca certificate and found that the documentation appears to be missing a step the renewing your own ca certificates details the following actions copy the encoded certificate over the top of the existing ca crt value update the ca cert generation annotation when i did this i found that the zookeeper and kafka pods failed to roll saying that they couldn t find a valid ca certificate i found that i needed to have the old ca certificate defined in the ca cert secret as well as the new certificate this is the documented in the replacing private keys section of the documentation documentation link i believe that this section of the documentation needs a step prior to the existing step that says something similar to step of
| 0
|
160,391
| 25,156,878,249
|
IssuesEvent
|
2022-11-10 14:11:13
|
DXgovernance/DAVI
|
https://api.github.com/repos/DXgovernance/DAVI
|
opened
|
Holographic consensus interface design
|
Design
|
With DXdao contracts, nicknamed "Gov 1.5" due to it being only one large change away from "Gov 2.0" we have a more complex system.
As in DXdao currently, this architecture has holographic consensus. This is what allows proposals to boost and execute faster, without requiring 50% of votes.
The logic of this system is that it incentivises attention to proposals with a high monetary value assigned to them. Essentially people staking on a proposal are making a bet that the proposal will either pass or fail.
We have a UI for this already however it is quite lacking and not intuitive

We can always use an interface like this one but we have a good opportunity here to redesign and make more clear what is happening in proposals.
This task might require a lot of explaining the system and brainstorming ideas for the design.
|
1.0
|
Holographic consensus interface design - With DXdao contracts, nicknamed "Gov 1.5" due to it being only one large change away from "Gov 2.0" we have a more complex system.
As in DXdao currently, this architecture has holographic consensus. This is what allows proposals to boost and execute faster, without requiring 50% of votes.
The logic of this system is that it incentivises attention to proposals with a high monetary value assigned to them. Essentially people staking on a proposal are making a bet that the proposal will either pass or fail.
We have a UI for this already however it is quite lacking and not intuitive

We can always use an interface like this one but we have a good opportunity here to redesign and make more clear what is happening in proposals.
This task might require a lot of explaining the system and brainstorming ideas for the design.
|
non_process
|
holographic consensus interface design with dxdao contracts nicknamed gov due to it being only one large change away from gov we have a more complex system the same as in dxdao currently this architecture has holographic consensus this is what allows proposals to boost and execute faster and without votes the logic of this system is that it incentivises attention to proposals with a high monetary value assigned to them essentially people staking on a proposal are making a bet that the proposal will either pass or fail we have a ui for this already however it is quite lacking and not intuitive we can always use an interface like this one but we have a good opportunity here to redesign and make more clear what is happening in proposals this task might require a lot of explaining the system and brainstorming ideas for the design
| 0
|
8,785
| 11,903,630,334
|
IssuesEvent
|
2020-03-30 15:36:43
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
System.InvalidOperationException: StandardIn has not been redirected
|
area-System.Diagnostics.Process question
|
I got the following exception from time to time in my application whereas the standard input is redirected.
````
Exception has occurred: CLR/System.InvalidOperationException
An exception of type 'System.InvalidOperationException' occurred in System.Diagnostics.Process.dll but was not handled in user code: 'StandardIn has not been redirected.'
at System.Diagnostics.Process.get_StandardInput()
at repro_process.Program.ExecuteProcessAsync(ProcessStartInfo psi) in /home/meziantou/repro-process/Program.cs:line 72
at repro_process.Program.<>c.<Main>b__0_0(Int32 i) in /home/meziantou/repro-process/Program.cs:line 27
at System.Threading.Tasks.Parallel.<>c__DisplayClass19_0`1.<ForWorker>b__1(RangeWorker& currentWorker, Int32 timeout, Boolean& replicationDelegateYieldedBeforeCompletion)
````
I tried to make a small repro of the code I have in production. Note that it doesn't always throw the exception. You may need to run the code 10 times to get the exception.
A similar exception sometimes occurs on `process.BeginErrorReadLine()` or `process.BeginOutputReadLine()` with a similar message indicating that the standard output/error is not redirected.
````c#
using System;
using System.Diagnostics;
using System.Threading.Tasks;
namespace repro_process
{
class Program
{
static void Main()
{
// In the actual code, multiple unit tests (xUnit) run in parallel so I tried to reproduce this behavior by using Parallel.For
Parallel.For(0, 10000, i =>
{
var psi = new ProcessStartInfo
{
FileName = "git",
ArgumentList =
{
"config",
"--global",
"test.a",
"abc" + i,
},
RedirectStandardError = true,
RedirectStandardInput = true,
RedirectStandardOutput = true,
};
ExecuteProcessAsync(psi).Wait(); // In the actual code, there is no wait/Result, only await
Console.WriteLine(i);
});
}
private static Task<bool> ExecuteProcessAsync(ProcessStartInfo psi)
{
var process = new System.Diagnostics.Process
{
StartInfo = psi,
EnableRaisingEvents = true,
};
var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
process.Exited += (sender, e) =>
{
try
{
process.WaitForExit();
process.Dispose();
tcs.TrySetResult(true);
}
catch (Exception ex)
{
tcs.SetException(ex);
}
};
process.Start();
if (psi.RedirectStandardOutput)
{
process.OutputDataReceived += (s, e) => { Console.WriteLine(e.Data); };
process.BeginOutputReadLine();
}
if (psi.RedirectStandardError)
{
process.ErrorDataReceived += (s, e) => { Console.WriteLine(e.Data); };
process.BeginErrorReadLine();
}
if (psi.RedirectStandardInput)
{
process.StandardInput.Close();
}
return tcs.Task;
}
}
}
````
**Environment:**
- .NET Core 3.1.3 but I also get the exception on 3.1.0 and 3.1.2
````
$> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
$> uname -a
Linux DESKTOP-TV4IPEK 4.4.0-19041-Microsoft #1-Microsoft Fri Dec 06 14:06:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
$> lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 94
Model name: Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
Stepping: 3
CPU MHz: 3301.000
CPU max MHz: 3301.0000
BogoMIPS: 6602.00
Hypervisor vendor: Windows Subsystem for Linux
Virtualization type: container
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm pni pclmulqdq dtes64 est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave osxsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt ibrs ibpb stibp ssbd
````
Edit: I've just got the issue on Windows too (.NET Core 3.1.2).
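For comparison, here is a minimal Python sketch (not the .NET API) of the same redirect-then-close pattern the repro uses; all names are illustrative. The key point is that the stream objects are only touched after process startup has fully returned, so there is no window in which they could appear "not redirected":

```python
import subprocess
import sys

def run_with_redirects(code: str) -> str:
    # Start the child with stdin/stdout/stderr all redirected, as in the
    # C# report above.
    proc = subprocess.Popen(
        [sys.executable, "-c", code],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    proc.stdin.close()        # analogue of process.StandardInput.Close()
    out = proc.stdout.read()  # fine for small outputs; large stderr output
    proc.stderr.read()        # read sequentially like this could deadlock
    proc.wait()
    return out

print(run_with_redirects("print('hello')"), end="")
```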
|
1.0
|
System.InvalidOperationException: StandardIn has not been redirected - I got the following exception from time to time in my application whereas the standard input is redirected.
````
Exception has occurred: CLR/System.InvalidOperationException
An exception of type 'System.InvalidOperationException' occurred in System.Diagnostics.Process.dll but was not handled in user code: 'StandardIn has not been redirected.'
at System.Diagnostics.Process.get_StandardInput()
at repro_process.Program.ExecuteProcessAsync(ProcessStartInfo psi) in /home/meziantou/repro-process/Program.cs:line 72
at repro_process.Program.<>c.<Main>b__0_0(Int32 i) in /home/meziantou/repro-process/Program.cs:line 27
at System.Threading.Tasks.Parallel.<>c__DisplayClass19_0`1.<ForWorker>b__1(RangeWorker& currentWorker, Int32 timeout, Boolean& replicationDelegateYieldedBeforeCompletion)
````
I tried to make a small repro of the code I have in production. Note that it doesn't always throw the exception. You may need to run the code 10 times to get the exception.
A similar exception sometimes occurs on `process.BeginErrorReadLine()` or `process.BeginOutputReadLine()` with a similar message indicating that the standard output/error is not redirected.
````c#
using System;
using System.Diagnostics;
using System.Threading.Tasks;
namespace repro_process
{
class Program
{
static void Main()
{
// In the actual code, multiple unit tests (xUnit) run in parallel so I tried to reproduce this behavior by using Parallel.For
Parallel.For(0, 10000, i =>
{
var psi = new ProcessStartInfo
{
FileName = "git",
ArgumentList =
{
"config",
"--global",
"test.a",
"abc" + i,
},
RedirectStandardError = true,
RedirectStandardInput = true,
RedirectStandardOutput = true,
};
ExecuteProcessAsync(psi).Wait(); // In the actual code, there is no wait/Result, only await
Console.WriteLine(i);
});
}
private static Task<bool> ExecuteProcessAsync(ProcessStartInfo psi)
{
var process = new System.Diagnostics.Process
{
StartInfo = psi,
EnableRaisingEvents = true,
};
var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
process.Exited += (sender, e) =>
{
try
{
process.WaitForExit();
process.Dispose();
tcs.TrySetResult(true);
}
catch (Exception ex)
{
tcs.SetException(ex);
}
};
process.Start();
if (psi.RedirectStandardOutput)
{
process.OutputDataReceived += (s, e) => { Console.WriteLine(e.Data); };
process.BeginOutputReadLine();
}
if (psi.RedirectStandardError)
{
process.ErrorDataReceived += (s, e) => { Console.WriteLine(e.Data); };
process.BeginErrorReadLine();
}
if (psi.RedirectStandardInput)
{
process.StandardInput.Close();
}
return tcs.Task;
}
}
}
````
**Environment:**
- .NET Core 3.1.3 but I also get the exception on 3.1.0 and 3.1.2
````
$> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
$> uname -a
Linux DESKTOP-TV4IPEK 4.4.0-19041-Microsoft #1-Microsoft Fri Dec 06 14:06:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
$> lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 94
Model name: Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
Stepping: 3
CPU MHz: 3301.000
CPU max MHz: 3301.0000
BogoMIPS: 6602.00
Hypervisor vendor: Windows Subsystem for Linux
Virtualization type: container
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm pni pclmulqdq dtes64 est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave osxsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt ibrs ibpb stibp ssbd
````
Edit: I've just got the issue on Windows too (.NET Core 3.1.2).
|
process
|
system invalidoperationexception standardin has not been redirected i got the following exception from time to time in my application whereas the standard input is redirected exception has occurred clr system invalidoperationexception an exception of type system invalidoperationexception occurred in system diagnostics process dll but was not handled in user code standardin has not been redirected at system diagnostics process get standardinput at repro process program executeprocessasync processstartinfo psi in home meziantou repro process program cs line at repro process program c b i in home meziantou repro process program cs line at system threading tasks parallel c b rangeworker currentworker timeout boolean replicationdelegateyieldedbeforecompletion i tried to make a small repro of the code i have in production note that it doesn t always throw the exception you may need to run the code times to get the exception a similar exception sometimes occurs on process beginerrorreadline or process beginoutputreadline with a similar message indicating the the standard output error is not redirected c using system using system diagnostics using system threading tasks namespace repro process class program static void main in the actual code multiple unit tests xunit run in parallel so i tried to reproduce this behavior by using parallel for parallel for i var psi new processstartinfo filename git argumentlist config global test a abc i redirectstandarderror true redirectstandardinput true redirectstandardoutput true executeprocessasync psi wait in the actual code there is no wait result only await console writeline i private static task executeprocessasync processstartinfo psi var process new system diagnostics process startinfo psi enableraisingevents true var tcs new taskcompletionsource taskcreationoptions runcontinuationsasynchronously process exited sender e try process waitforexit process dispose tcs trysetresult true catch exception ex tcs setexception ex process 
start if psi redirectstandardoutput process outputdatareceived s e console writeline e data process beginoutputreadline if psi redirectstandarderror process errordatareceived s e console writeline e data process beginerrorreadline if psi redirectstandardinput process standardinput close return tcs task environment net core but i also get the exception on and lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename bionic uname a linux desktop microsoft microsoft fri dec pst gnu linux lscpu architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s vendor id genuineintel cpu family model model name intel r core tm cpu stepping cpu mhz cpu max mhz bogomips hypervisor vendor windows subsystem for linux virtualization type container flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse ss ht tm pbe syscall nx rdtscp lm pni pclmulqdq est fma xtpr pdcm pcid movbe popcnt aes xsave osxsave avx rdrand hypervisor lahf lm abm fsgsbase tsc adjust hle smep erms invpcid rtm rdseed adx smap clflushopt ibrs ibpb stibp ssbd edit i ve just got the issue on windows too net core
| 1
|
20,588
| 11,473,550,845
|
IssuesEvent
|
2020-02-09 23:53:31
|
Azure/azure-powershell
|
https://api.github.com/repos/Azure/azure-powershell
|
closed
|
Cmdlet to quickly resize Azure Virtual Machine
|
Compute - VM Feature Request Service Attention customer-reported
|
## Description of the new feature
Implement a dedicated cmdlet to resize an Azure Virtual Machine without needing to update all of its model data. In Azure CLI, this is already implemented in the command [**az vm resize** ](https://docs.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-resize)
## Proposed implementation details (optional)
Example of the cmdlet:
`Set-AzVMSize -VMSize "Standard_B2ms" -VMName "VirtualMachine" -ResourceGroupName "ResourceGroup"`
|
1.0
|
Cmdlet to quickly resize Azure Virtual Machine - ## Description of the new feature
Implement a dedicated cmdlet to resize an Azure Virtual Machine without needing to update all of its model data. In Azure CLI, this is already implemented in the command [**az vm resize** ](https://docs.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-resize)
## Proposed implementation details (optional)
Example of the cmdlet:
`Set-AzVMSize -VMSize "Standard_B2ms" -VMName "VirtualMachine" -ResourceGroupName "ResourceGroup"`
|
non_process
|
cmdlet to quickly resize azure virtual machine description of the new feature implement a dedicated cmdlet to resize an azure virtual machine without needing to update all of its model data in azure cli this is already implemented in the command proposed implementation details optional example of the cmdlet set azvmsize vmsize standard vmname virtualmachine resourcegroupname resourcegroup
| 0
|
15,235
| 19,141,668,357
|
IssuesEvent
|
2021-12-02 00:02:05
|
eberharf/cfl
|
https://api.github.com/repos/eberharf/cfl
|
closed
|
Meaningless Y labels for visual bars data
|
data or preprocessing needs a flag
|
When I create a CFL object and train it on the visual bars data, CFL returns labels for the y data, even though y labels are meaningless in this case (the Y in visual bars is a single binary variable). In the following code example, no error or warning is raised:
```python
# partial code to reproduce issue:
# (CDE and clusterer objects already created)
# cfl core object
cfl_object = tscfl.Two_Step_CFL_Core(condExp_object, cluster_object)
x_lbls, y_lbls = cfl_object.train(x, y)
```
This could be confusing or misleading for a user.
|
1.0
|
Meaningless Y labels for visual bars data - When I create a CFL object and train it on the visual bars data, CFL returns labels for the y data, even though y labels are meaningless in this case (the Y in visual bars is a single binary variable). In the following code example, no error or warning is raised:
```python
# partial code to reproduce issue:
# (CDE and clusterer objects already created)
# cfl core object
cfl_object = tscfl.Two_Step_CFL_Core(condExp_object, cluster_object)
x_lbls, y_lbls = cfl_object.train(x, y)
```
This could be confusing or misleading for a user.
|
process
|
meaningless y labels for visual bars data when i run create a cfl object and train it on the visual bars data cfl returns labels for the y data even though y labels are meaningless in this case the y in visual bars is a single binary variable in the following code example no error or warning is raised partial code to reproduce issue cde and clusterer objects already created cfl core object cfl object tscfl two step cfl core condexp object cluster object x lbls y lbls cfl object train x y this could be confusing or misleading for a user
| 1
|
3,236
| 3,092,364,196
|
IssuesEvent
|
2015-08-26 17:26:53
|
numenta/nupic
|
https://api.github.com/repos/numenta/nupic
|
opened
|
Remove build dependency on darwin64, linux64 repos
|
type:build type:cleanup
|
Completely remove all dependencies within our build systems for https://github.com/numenta/nupic-linux64 and https://github.com/numenta/nupic-darwin64
|
1.0
|
Remove build dependency on darwin64, linux64 repos - Completely remove all dependencies within our build systems for https://github.com/numenta/nupic-linux64 and https://github.com/numenta/nupic-darwin64
|
non_process
|
remove build dependency on repos completely remove all dependencies within our build systems for and
| 0
|
488,450
| 14,077,641,873
|
IssuesEvent
|
2020-11-04 12:21:37
|
Scholar-6/brillder
|
https://api.github.com/repos/Scholar-6/brillder
|
closed
|
Mobile: If click play a brick, make full screen; add fullscreen options in dropdown
|
High Level Priority Onboarding | UX
|
- [x] play to fullscreeen
- [x] Dropdown: Enter Full Screen, Feather maximize
- [x] Dropdown when in Full Screen: Exit Full Screen, Feather maximize
|
1.0
|
Mobile: If click play a brick, make full screen; add fullscreen options in dropdown - - [x] play to fullscreeen
- [x] Dropdown: Enter Full Screen, Feather maximize
- [x] Dropdown when in Full Screen: Exit Full Screen, Feather maximize
|
non_process
|
mobile if click play a brick make full screen add fullscreen options in dropdown play to fullscreeen dropdown enter full screen feather maximize dropdown when in full screen exit full screen feather maximize
| 0
|
16,524
| 11,027,105,930
|
IssuesEvent
|
2019-12-06 08:41:14
|
virtualsatellite/VirtualSatellite4-Core
|
https://api.github.com/repos/virtualsatellite/VirtualSatellite4-Core
|
opened
|
If EquationBuilder encounters an error in the ResourceSet, don't silently ignore it
|
comfort/usability
|
If a model is in the broken state (example: a discipline is removed, but some elements still reference it), the calculations don't work. We need to inform the user that there is an error to avoid this error being committed.
|
True
|
If EquationBuilder encounters an error in the ResourceSet, don't silently ignore it - If a model is in the broken state (example: a discipline is removed, but some elements still reference it), the calculations don't work. We need to inform the user that there is an error to avoid this error being committed.
|
non_process
|
if equationbuilder encounters an error in the resourceset don t silently ignore it if a model is in the broken state example a discipline is removed but some elements still reference it the calculations don t work we need to inform the user that there is an error to avoid this error being committed
| 0
|
16,288
| 20,914,331,668
|
IssuesEvent
|
2022-03-24 12:07:35
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
Using custom PostCSS lost the auto CSS Modules
|
CSS Preprocessing Stale
|
<!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# 🐛 bug report
Having a custom PostCSS in Parcel v2 (e.g. to have autoprefixer) lost the benefit of auto having CSS Modules based on file names (.module.css)
## 🎛 Configuration (.babelrc, package.json, cli command)
Parcel v2 has built-in support for CSS Modules if the file ends with ".module.css". However, if I have a custom PostCSS config (e.g. to have autoprefixer) like this:
```js
{
"plugins": {
"autoprefixer": true
}
}
```
## 🤔 Expected Behavior
Then I expect the CSS Modules should still work like before, namely it applies for file ends with ".module.css".
## 😯 Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
<!--- If you are seeing an error, please include the full error message and stack trace -->
However, the current behaviour is that CSS Modules is no longer applied. If I set "modules: true", then CSS Modules is always applied (globally, for all files).
## 💁 Possible Solution
I don't know
## 🔦 Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I'm trying to have:
- CSS Modules for files ending with ".module.css"
- Autoprefixer
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2
| Node | Latest
| npm/Yarn | Yarn 1
| Operating System | Mac
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
1.0
|
Using custom PostCSS lost the auto CSS Modules - <!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# 🐛 bug report
Having a custom PostCSS in Parcel v2 (e.g. to have autoprefixer) lost the benefit of auto having CSS Modules based on file names (.module.css)
## 🎛 Configuration (.babelrc, package.json, cli command)
Parcel v2 has built-in support for CSS Modules if the file ends with ".module.css". However, if I have a custom PostCSS config (e.g. to have autoprefixer) like this:
```js
{
"plugins": {
"autoprefixer": true
}
}
```
## 🤔 Expected Behavior
Then I expect the CSS Modules should still work like before, namely it applies for file ends with ".module.css".
## 😯 Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
<!--- If you are seeing an error, please include the full error message and stack trace -->
However, the current behaviour is that CSS Modules is no longer applied. If I set "modules: true", then CSS Modules is always applied (globally, for all files).
## 💁 Possible Solution
I don't know
## 🔦 Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I'm trying to have:
- CSS Modules for files ending with ".module.css"
- Autoprefixer
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2
| Node | Latest
| npm/Yarn | Yarn 1
| Operating System | Mac
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
process
|
using custom postcss lost the auto css modules thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🐛 bug report having a custom postcss in parcel e g to have autoprefixer lost the benefit of auto having css modules based on file names module css 🎛 configuration babelrc package json cli command parcel has built in support for css modules if the file ends with module css however if i have a custom postcss config e g to have autoprefixer like this js plugins autoprefixer true 🤔 expected behavior then i expect the css modules should still work like before namely it applies for file ends with module css 😯 current behavior however the current behaviour is that css modules is no longer applied if i set modules true then css modules is always applied globally for all files 💁 possible solution i don t know 🔦 context i m trying to have css modules for files ending with module css autoprefixer 🌍 your environment software version s parcel node latest npm yarn yarn operating system mac love parcel please consider supporting our collective 👉
| 1
|
9,951
| 12,977,806,572
|
IssuesEvent
|
2020-07-21 21:23:09
|
medic/cht-core
|
https://api.github.com/repos/medic/cht-core
|
opened
|
Release 3.11.0
|
Type: Internal process
|
# Planning
- [ ] Create an [organisation wide project](https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc) and add this issue to it. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining.
- [ ] Add all the issues to be worked on to the project. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bugs.
# Development
When development is ready to begin one of the engineers should be nominated as a Release Manager. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [ ] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [ ] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://github.com/medic/medic-docs/blob/master/development/update-dependencies.md). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [ ] Go through all features and improvements scheduled for this release and raise cht-docs issues for product education to be written where appropriate. If in doubt, check with Max.
- [ ] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release manager is to update this every week until the version is released.
# Releasing
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [ ] Create a new release branch from `master` named `<major>.<minor>.x` in medic. Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [ ] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [ ] [Import translations keys](https://github.com/medic/medic-docs/blob/master/development/translations.md#adding-new-keys) into POE and notify the #translations Slack channel translate new and updated values, for example:
```
@channel I've just updated the translations in POE. These keys have been added: "<added-list>", and these keys have been updated: "<updated-list>"
```
- [ ] Create a new document in the [release-notes folder](https://github.com/medic/medic/tree/master/release-notes) in `master`. Ensure all issues are in the GH Project, that they're correct labelled, and have human readable descriptions. Use [this script](https://github.com/medic/medic/blob/master/scripts/changelog-generator) to export the issues into our changelog format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient.
- [ ] Create a Google Doc in the [blog posts folder](https://drive.google.com/drive/u/0/folders/0B2PTUNZFwxEvMHRWNTBjY2ZHNHc) with the draft of a blog post promoting the release based on the release notes above. Once it's ready ask Max and Kelly to review it.
- [ ] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [ ] [Export the translations](https://github.com/medic/medic-docs/blob/master/development/translations.md#exporting-changes-from-poeditor-to-github), delete empty translation files and commit to `master`. Cherry-pick the commit into the release branch.
- [ ] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/medic/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [ ] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [ ] Upgrade the `demo-cht.dev` instance to this version.
- [ ] Follow the instructions for [releasing other products](https://github.com/medic/medic-docs/blob/master/development/releasing.md) that have been updated in this project (eg: medic-conf, medic-gateway, medic-android).
- [ ] Add the release to the [Supported versions](https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions) and update the EOL date and status of previous releases.
- [ ] Announce the release in #products and #cht-contributors using this template:
```
@channel *We're excited to announce the release of {{version}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the release notes for full details: {{url}}
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions
To see what's scheduled for the next releases have a read of the product roadmap: https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc
```
- [ ] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/), under the "Product - Releases" category. You can use the previous message and omit `@channel`.
- [ ] Mark this issue "done" and close the project.
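The semver bump rule described under Planning can be sketched as follows (illustrative only, not part of the release tooling; in practice `npm --no-git-tag-version version <major|minor>` performs the bump):

```python
# Breaking changes bump the major, new features bump the minor,
# otherwise bump the patch/service pack.
def next_version(version: str, breaking: bool = False, feature: bool = False) -> str:
    major, minor, patch = (int(p) for p in version.split("."))
    if breaking:
        return f"{major + 1}.0.0"
    if feature:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("3.10.2", feature=True))  # 3.11.0
```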
|
1.0
|
Release 3.11.0 - # Planning
- [ ] Create an [organisation wide project](https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc) and add this issue to it. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining.
- [ ] Add all the issues to be worked on to the project. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bugs.
# Development
When development is ready to begin one of the engineers should be nominated as a Release Manager. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [ ] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [ ] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://github.com/medic/medic-docs/blob/master/development/update-dependencies.md). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [ ] Go through all features and improvements scheduled for this release and raise cht-docs issues for product education to be written where appropriate. If in doubt, check with Max.
- [ ] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release manager is to update this every week until the version is released.
# Releasing
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [ ] Create a new release branch from `master` named `<major>.<minor>.x` in medic. Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [ ] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [ ] [Import translations keys](https://github.com/medic/medic-docs/blob/master/development/translations.md#adding-new-keys) into POE and notify the #translations Slack channel translate new and updated values, for example:
```
@channel I've just updated the translations in POE. These keys have been added: "<added-list>", and these keys have been updated: "<updated-list>"
```
- [ ] Create a new document in the [release-notes folder](https://github.com/medic/medic/tree/master/release-notes) in `master`. Ensure all issues are in the GH Project, that they're correct labelled, and have human readable descriptions. Use [this script](https://github.com/medic/medic/blob/master/scripts/changelog-generator) to export the issues into our changelog format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient.
- [ ] Create a Google Doc in the [blog posts folder](https://drive.google.com/drive/u/0/folders/0B2PTUNZFwxEvMHRWNTBjY2ZHNHc) with the draft of a blog post promoting the release based on the release notes above. Once it's ready ask Max and Kelly to review it.
- [ ] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [ ] [Export the translations](https://github.com/medic/medic-docs/blob/master/development/translations.md#exporting-changes-from-poeditor-to-github), delete empty translation files and commit to `master`. Cherry-pick the commit into the release branch.
- [ ] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/medic/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [ ] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [ ] Upgrade the `demo-cht.dev` instance to this version.
- [ ] Follow the instructions for [releasing other products](https://github.com/medic/medic-docs/blob/master/development/releasing.md) that have been updated in this project (eg: medic-conf, medic-gateway, medic-android).
- [ ] Add the release to the [Supported versions](https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions) and update the EOL date and status of previous releases.
- [ ] Announce the release in #products and #cht-contributors using this template:
```
@channel *We're excited to announce the release of {{version}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the release notes for full details: {{url}}
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions
To see what's scheduled for the next releases have a read of the product roadmap: https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc
```
- [ ] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/), under the "Product - Releases" category. You can use the previous message and omit `@channel`.
- [ ] Mark this issue "done" and close the project.
|
process
|
release planning create an and add this issue to it we use so if there are breaking changes increment the major otherwise if there are new features increment the minor otherwise increment the service pack breaking changes in our case relate to updated software requirements egs couchdb node minimum browser versions broken backwards compatibility in an api or a major visual update that requires user retraining add all the issues to be worked on to the project ideally each minor release will have one or two features a handful of improvements and plenty of bugs development when development is ready to begin one of the engineers should be nominated as a release manager they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version raise a new issue called update dependencies for with a description that links to this should be done early in the release cycle so find a volunteer to take this on and assign it to them go through all features and improvements scheduled for this release and raise cht docs issues for product education to be written where appropriate if in doubt check with max write an update in the weekly product team call agenda summarising development and acceptance testing progress and identifying any blockers the release manager is to update this every week until the version is released releasing once all issues have passed acceptance testing and have been merged into master release testing can begin create a new release branch from master named x in medic post a message to development using this template core devs i ve just created the x release branch please be aware that any further changes intended for this release will have to be merged to master then backported thanks build a beta named beta by pushing a git tag and when ci completes successfully notify the 
qa team that it s ready for release testing into poe and notify the translations slack channel translate new and updated values for example channel i ve just updated the translations in poe these keys have been added and these keys have been updated create a new document in the in master ensure all issues are in the gh project that they re correct labelled and have human readable descriptions use to export the issues into our changelog format manually document any known migration steps and known issues provide description screenshots videos and anything else to help communicate particularly important changes document any required or recommended upgrades to our other products eg medic conf medic gateway medic android assign the pr to a the director of technology and b an sre to review and confirm the documentation on upgrade instructions and breaking changes is sufficient create a google doc in the with the draft of a blog post promoting the release based on the release notes above once it s ready ask max and kelly to review it until release testing passes make sure regressions are fixed in master cherry pick them into the release branch and release another beta delete empty translation files and commit to master cherry pick the commit into the release branch create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic upgrade the demo cht dev instance to this version follow the instructions for that have been updated in this project eg medic conf medic gateway medic android add the release to the and update the eol date and status of previous releases announce the release in products and cht contributors using this template channel we re excited to announce the release of 
version new features include key features we ve also implemented loads of other improvements and fixed a heap of bugs read the release notes for full details url following our support policy versions versions are no longer supported projects running these versions should start planning to upgrade in the near future for more details read our software support documentation to see what s scheduled for the next releases have a read of the product roadmap announce the release on the under the product releases category you can use the previous message and omit channel mark this issue done and close the project
| 1
|
73,558
| 19,710,474,749
|
IssuesEvent
|
2022-01-13 04:22:11
|
NatronGitHub/Natron
|
https://api.github.com/repos/NatronGitHub/Natron
|
opened
|
(Release): Natron 2.4.3 - any blockers?
|
func:buildsystem
|
Does anyone have blockers or things they want to merge before 2.4.3?
@rodlie color palette?
For the next 2.5.0 alpha, we are waiting for the Windows build (I can hear it coming).
|
1.0
|
(Release): Natron 2.4.3 - any blockers? - Does anyone have blockers or things they want to merge before 2.4.3?
@rodlie color palette?
For the next 2.5.0 alpha, we are waiting for the Windows build (I can hear it coming).
|
non_process
|
release natron any blockers does anyone have blockers or things they want to merge before rodlie color palette for the next alpha we are waiting for the windows build i can hear it coming
| 0
|
14,343
| 3,392,056,591
|
IssuesEvent
|
2015-11-30 17:56:07
|
MajkiIT/polish-ads-filter
|
https://api.github.com/repos/MajkiIT/polish-ads-filter
|
closed
|
epuap.gov.pl
|
cookies reguły gotowe/testowanie
|
cookies
UWAGA! W trakcie korzystania z ePUAP na komputerze użytkownika przechowywane są informacje (tzw. „ciasteczka”, ang. „cookies”), które pozwalają na dostosowanie świadczonych usług elektronicznych do indywidualnych potrzeb użytkowników. Warunki przechowywania lub dostępu do plików cookie mogą być określone przez użytkownika w ustawieniach przeglądarki internetowej. Kontynuacja korzystania z ePUAP bez dokonania wyżej wspomnianych zmian w ustawieniach przeglądarki internetowej uznana zostaje za wyrażenie zgody przez użytkownika na wykorzystywanie plików cookie. Więcej znajdziesz w Polityce Cookies
|
1.0
|
epuap.gov.pl - cookies
UWAGA! W trakcie korzystania z ePUAP na komputerze użytkownika przechowywane są informacje (tzw. „ciasteczka”, ang. „cookies”), które pozwalają na dostosowanie świadczonych usług elektronicznych do indywidualnych potrzeb użytkowników. Warunki przechowywania lub dostępu do plików cookie mogą być określone przez użytkownika w ustawieniach przeglądarki internetowej. Kontynuacja korzystania z ePUAP bez dokonania wyżej wspomnianych zmian w ustawieniach przeglądarki internetowej uznana zostaje za wyrażenie zgody przez użytkownika na wykorzystywanie plików cookie. Więcej znajdziesz w Polityce Cookies
|
non_process
|
epuap gov pl cookies uwaga w trakcie korzystania z epuap na komputerze użytkownika przechowywane są informacje tzw „ciasteczka” ang „cookies” które pozwalają na dostosowanie świadczonych usług elektronicznych do indywidualnych potrzeb użytkowników warunki przechowywania lub dostępu do plików cookie mogą być określone przez użytkownika w ustawieniach przeglądarki internetowej kontynuacja korzystania z epuap bez dokonania wyżej wspomnianych zmian w ustawieniach przeglądarki internetowej uznana zostaje za wyrażenie zgody przez użytkownika na wykorzystywanie plików cookie więcej znajdziesz w polityce cookies
| 0
|
166,654
| 12,965,729,242
|
IssuesEvent
|
2020-07-20 23:02:08
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Disabled test_udf_remote_message_delay_timeout_to_self (__main__.FaultyAgentRpcTestWithSpawn)
|
high priority module: rpc topic: flaky-tests triage review triaged
|
https://app.circleci.com/pipelines/github/pytorch/pytorch/187910/workflows/48b2adf2-4780-498c-97e7-b81aeffab49f/jobs/6117749/steps
```
Jul 05 00:36:45 ======================================================================
Jul 05 00:36:45 ERROR [23.394s]: test_udf_remote_message_delay_timeout_to_self (__main__.FaultyAgentRpcTestWithSpawn)
Jul 05 00:36:45 ----------------------------------------------------------------------
Jul 05 00:36:45 Traceback (most recent call last):
Jul 05 00:36:45 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
Jul 05 00:36:45 self._join_processes(fn)
Jul 05 00:36:45 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 311, in _join_processes
Jul 05 00:36:45 self._check_return_codes(elapsed_time)
Jul 05 00:36:45 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 344, in _check_return_codes
Jul 05 00:36:45 raise RuntimeError(error)
Jul 05 00:36:45 RuntimeError: Processes 0 exited with error code 10
```
```
Jul 05 00:36:22 test_udf_remote_message_delay_timeout_to_self (__main__.FaultyAgentRpcTestWithSpawn) ... ERROR:root:Caught exception:
Jul 05 00:36:22 Traceback (most recent call last):
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 207, in wrapper
Jul 05 00:36:22 fn()
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 93, in new_test_method
Jul 05 00:36:22 return_value = old_test_method(self, *arg, **kwargs)
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 3575, in test_udf_remote_message_delay_timeout_to_self
Jul 05 00:36:22 self._test_remote_message_delay_timeout(func, args, dst=0)
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 3563, in _test_remote_message_delay_timeout
Jul 05 00:36:22 wait_until_owners_and_forks_on_rank(1, 0, rank=dst_rank)
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 262, in wait_until_owners_and_forks_on_rank
Jul 05 00:36:22 num_owners, num_forks, num_owners_on_rank, num_forks_on_rank
Jul 05 00:36:22 ValueError: Timed out waiting for 1 owners and 0 forks on rank, had 0 owners and 0 forks
Jul 05 00:36:22 exiting process with exit code: 10
Jul 05 00:36:23 ERROR (23.394s)
```
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse
|
1.0
|
Disabled test_udf_remote_message_delay_timeout_to_self (__main__.FaultyAgentRpcTestWithSpawn) - https://app.circleci.com/pipelines/github/pytorch/pytorch/187910/workflows/48b2adf2-4780-498c-97e7-b81aeffab49f/jobs/6117749/steps
```
Jul 05 00:36:45 ======================================================================
Jul 05 00:36:45 ERROR [23.394s]: test_udf_remote_message_delay_timeout_to_self (__main__.FaultyAgentRpcTestWithSpawn)
Jul 05 00:36:45 ----------------------------------------------------------------------
Jul 05 00:36:45 Traceback (most recent call last):
Jul 05 00:36:45 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 204, in wrapper
Jul 05 00:36:45 self._join_processes(fn)
Jul 05 00:36:45 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 311, in _join_processes
Jul 05 00:36:45 self._check_return_codes(elapsed_time)
Jul 05 00:36:45 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 344, in _check_return_codes
Jul 05 00:36:45 raise RuntimeError(error)
Jul 05 00:36:45 RuntimeError: Processes 0 exited with error code 10
```
```
Jul 05 00:36:22 test_udf_remote_message_delay_timeout_to_self (__main__.FaultyAgentRpcTestWithSpawn) ... ERROR:root:Caught exception:
Jul 05 00:36:22 Traceback (most recent call last):
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 207, in wrapper
Jul 05 00:36:22 fn()
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 93, in new_test_method
Jul 05 00:36:22 return_value = old_test_method(self, *arg, **kwargs)
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 3575, in test_udf_remote_message_delay_timeout_to_self
Jul 05 00:36:22 self._test_remote_message_delay_timeout(func, args, dst=0)
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 3563, in _test_remote_message_delay_timeout
Jul 05 00:36:22 wait_until_owners_and_forks_on_rank(1, 0, rank=dst_rank)
Jul 05 00:36:22 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 262, in wait_until_owners_and_forks_on_rank
Jul 05 00:36:22 num_owners, num_forks, num_owners_on_rank, num_forks_on_rank
Jul 05 00:36:22 ValueError: Timed out waiting for 1 owners and 0 forks on rank, had 0 owners and 0 forks
Jul 05 00:36:22 exiting process with exit code: 10
Jul 05 00:36:23 ERROR (23.394s)
```
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse
|
non_process
|
disabled test udf remote message delay timeout to self main faultyagentrpctestwithspawn jul jul error test udf remote message delay timeout to self main faultyagentrpctestwithspawn jul jul traceback most recent call last jul file opt conda lib site packages torch testing internal common distributed py line in wrapper jul self join processes fn jul file opt conda lib site packages torch testing internal common distributed py line in join processes jul self check return codes elapsed time jul file opt conda lib site packages torch testing internal common distributed py line in check return codes jul raise runtimeerror error jul runtimeerror processes exited with error code jul test udf remote message delay timeout to self main faultyagentrpctestwithspawn error root caught exception jul traceback most recent call last jul file opt conda lib site packages torch testing internal common distributed py line in wrapper jul fn jul file opt conda lib site packages torch testing internal dist utils py line in new test method jul return value old test method self arg kwargs jul file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test udf remote message delay timeout to self jul self test remote message delay timeout func args dst jul file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test remote message delay timeout jul wait until owners and forks on rank rank dst rank jul file opt conda lib site packages torch testing internal dist utils py line in wait until owners and forks on rank jul num owners num forks num owners on rank num forks on rank jul valueerror timed out waiting for owners and forks on rank had owners and forks jul exiting process with exit code jul error cc ezyang gchanan pietern mrshenli zhaojuanmao satgera gqchen aazzolini rohan varma jjlilley osalpekar jiayisuse
| 0
|
10,286
| 13,134,567,690
|
IssuesEvent
|
2020-08-06 23:53:43
|
GoogleCloudPlatform/stackdriver-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/stackdriver-sandbox
|
closed
|
logging: install script logs for istio installation are very verbose
|
priority: p2 type: process
|
My log upon running `./install` (as per user instructions) devotes many redundant lines + newlines in between to the istio installation. This is unnecessary (the time elapsed log every 10 seconds is pretty standard and enough to convince the user that things are happening).
```
null_resource.install_istio: Creating...
... (downloading logs)
null_resource.install_istio (local-exec): Istio has been successfully downloaded into the istio-1.6.2 folder on your system.
null_resource.install_istio (local-exec): Next Steps:
null_resource.install_istio (local-exec): See https://istio.io/docs/setup/kubernetes/install/ to add Istio to your Kubernetes cluster.
null_resource.install_istio (local-exec): To configure the istioctl client tool for your workstation,
null_resource.install_istio (local-exec): add the /home/gloriazhaogoog/cloudshell_open/stackdriver-sandbox/terraform/istio/istio-1.6.2/b
in directory to your environment path variable with:
null_resource.install_istio (local-exec): export PATH="$PATH:/home/gloriazhaogoog/cloudshell_open/stackdriver-sandbox/terraform/i
stio/istio-1.6.2/bin"
null_resource.install_istio (local-exec): Begin the Istio pre-installation verification check by running:
null_resource.install_istio (local-exec): istioctl verify-install
null_resource.install_istio (local-exec): Need more information? Visit https://istio.io/docs/setup/kubernetes/install/
null_resource.install_istio (local-exec): Moving istioctl into WORKDIR...
null_resource.install_istio (local-exec): namespace/istio-system created
null_resource.install_istio (local-exec): secret/cacerts created
null_resource.install_istio (local-exec): namespace/default labeled
null_resource.install_istio (local-exec): Your active configuration is: [cloudshell-27169]
null_resource.install_istio (local-exec): clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
null_resource.install_istio: Still creating... [10s elapsed]
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
...(repeated many more times with newlines in between each)
null_resource.install_istio: Still creating... [20s elapsed]
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
...(repeated many more times again with newlines in between each)
null_resource.install_istio (local-exec): ✔ Istiod installed
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
... (repeated many times)
```
|
1.0
|
logging: install script logs for istio installation are very verbose - My log upon running `./install` (as per user instructions) devotes many redundant lines + newlines in between to the istio installation. This is unnecessary (the time elapsed log every 10 seconds is pretty standard and enough to convince the user that things are happening).
```
null_resource.install_istio: Creating...
... (downloading logs)
null_resource.install_istio (local-exec): Istio has been successfully downloaded into the istio-1.6.2 folder on your system.
null_resource.install_istio (local-exec): Next Steps:
null_resource.install_istio (local-exec): See https://istio.io/docs/setup/kubernetes/install/ to add Istio to your Kubernetes cluster.
null_resource.install_istio (local-exec): To configure the istioctl client tool for your workstation,
null_resource.install_istio (local-exec): add the /home/gloriazhaogoog/cloudshell_open/stackdriver-sandbox/terraform/istio/istio-1.6.2/b
in directory to your environment path variable with:
null_resource.install_istio (local-exec): export PATH="$PATH:/home/gloriazhaogoog/cloudshell_open/stackdriver-sandbox/terraform/i
stio/istio-1.6.2/bin"
null_resource.install_istio (local-exec): Begin the Istio pre-installation verification check by running:
null_resource.install_istio (local-exec): istioctl verify-install
null_resource.install_istio (local-exec): Need more information? Visit https://istio.io/docs/setup/kubernetes/install/
null_resource.install_istio (local-exec): Moving istioctl into WORKDIR...
null_resource.install_istio (local-exec): namespace/istio-system created
null_resource.install_istio (local-exec): secret/cacerts created
null_resource.install_istio (local-exec): namespace/default labeled
null_resource.install_istio (local-exec): Your active configuration is: [cloudshell-27169]
null_resource.install_istio (local-exec): clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
null_resource.install_istio: Still creating... [10s elapsed]
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
...(repeated many more times with newlines in between each)
null_resource.install_istio: Still creating... [20s elapsed]
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
null_resource.install_istio (local-exec): - Processing resources for Istio core.
...(repeated many more times again with newlines in between each)
null_resource.install_istio (local-exec): ✔ Istiod installed
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
null_resource.install_istio (local-exec): - Processing resources for Addons, Ingress gateways.
... (repeated many times)
```
|
process
|
logging install script logs for istio installation are very verbose my log upon running install as per user instructions devotes many redundant lines newlines in between to the istio installation this is unnecessary the time elapsed log every seconds is pretty standard and enough to convince the user that things are happening null resource install istio creating downloading logs null resource install istio local exec istio has been successfully downloaded into the istio folder on your system null resource install istio local exec next steps null resource install istio local exec see to add istio to your kubernetes cluster null resource install istio local exec to configure the istioctl client tool for your workstation null resource install istio local exec add the home gloriazhaogoog cloudshell open stackdriver sandbox terraform istio istio b in directory to your environment path variable with null resource install istio local exec export path path home gloriazhaogoog cloudshell open stackdriver sandbox terraform i stio istio bin null resource install istio local exec begin the istio pre installation verification check by running null resource install istio local exec istioctl verify install null resource install istio local exec need more information visit null resource install istio local exec moving istioctl into workdir null resource install istio local exec namespace istio system created null resource install istio local exec secret cacerts created null resource install istio local exec namespace default labeled null resource install istio local exec your active configuration is null resource install istio local exec clusterrolebinding rbac authorization io cluster admin binding created null resource install istio still creating null resource install istio local exec processing resources for istio core null resource install istio local exec processing resources for istio core null resource install istio local exec processing resources for istio core null 
resource install istio local exec processing resources for istio core null resource install istio local exec processing resources for istio core null resource install istio local exec processing resources for istio core repeated many more times with newlines in between each null resource install istio still creating null resource install istio local exec processing resources for istio core null resource install istio local exec processing resources for istio core null resource install istio local exec processing resources for istio core repeated many more times again with newlines in between each null resource install istio local exec ✔ istiod installed null resource install istio local exec processing resources for addons ingress gateways null resource install istio local exec processing resources for addons ingress gateways null resource install istio local exec processing resources for addons ingress gateways null resource install istio local exec processing resources for addons ingress gateways null resource install istio local exec processing resources for addons ingress gateways repeated many times
| 1
|
12,464
| 14,937,390,011
|
IssuesEvent
|
2021-01-25 14:35:00
|
department-of-veterans-affairs/notification-api
|
https://api.github.com/repos/department-of-veterans-affairs/notification-api
|
closed
|
Initiate 508 audit for 526EZ confirmation email - Accessibility
|
Process Task
|
This is a process card to initiate the 508 audit review within the VA for the 526EZ confirmation email
Resources:
- [508 Checklist](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/508-checklist.md)
- [VA 508 Site](https://www.section508.va.gov/)
|
1.0
|
Initiate 508 audit for 526EZ confirmation email - Accessibility - This is a process card to initiate the 508 audit review within the VA for the 526EZ confirmation email
Resources:
- [508 Checklist](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/508-checklist.md)
- [VA 508 Site](https://www.section508.va.gov/)
|
process
|
initiate audit for confirmation email accessibility this is a process card to initiate the audit review within the va for the confirmation email resources
| 1
|
12,956
| 15,213,591,693
|
IssuesEvent
|
2021-02-17 12:01:34
|
Phoenix616/Snap
|
https://api.github.com/repos/Phoenix616/Snap
|
closed
|
FastLoginBungee incompatible
|
plugin compatibility wontfix
|
#### Used Version
<!-- The full program version like it is printed in the log. Please check if there are any newer development builds! Can usually be found at https://ci.minebench.de -->
Latest build (#9)
#### Config
<!-- The full config file -->
```yaml
[Put the config here]
```
I changed nothing about the config file; it's default.
#### Environment description
<!-- Information like the operating system and language as well as full server version if you are using a plugin-->
OS: Ubuntu 20.04.2 LTS (Kernel 5.4 LTS)
Java: 11 LTS
Proxy: Velocity
#### Full Log
<!-- The full log file, especially important if you have a stack trace -->
```
[Your log here]
```
There are errors:
[22:55:15 WARN] [Snap]: Exception encountered when loading plugin: FastLogin
java.lang.NoClassDefFoundError: net/md_5/bungee/connection/InitialHandler
at com.github.games647.fastlogin.bungee.listener.ConnectListener.<clinit>(ConnectListener.java:56) ~[?:?]
at com.github.games647.fastlogin.bungee.FastLoginBungee.onEnable(FastLoginBungee.java:60) ~[?:?]
at net.md_5.bungee.api.plugin.PluginManager.enablePlugins(PluginManager.java:300) ~[?:?]
at de.themoep.snap.SnapBungeeAdapter.loadPlugins(SnapBungeeAdapter.java:121) ~[?:?]
at de.themoep.snap.Snap.onProxyInitialization(Snap.java:59) ~[?:?]
at net.kyori.event.asm.generated.9fa8fed8c8.Snap-onProxyInitialization-ProxyInitializeEvent-11.invoke(Unknown Source) ~[?:?]
at net.kyori.event.method.SimpleMethodSubscriptionAdapter$MethodEventSubscriber.invoke(SimpleMethodSubscriptionAdapter.java:148) ~[velocity.jar:1.1.4]
at net.kyori.event.SimpleEventBus.post(SimpleEventBus.java:107) ~[velocity.jar:1.1.4]
at com.velocitypowered.proxy.plugin.VelocityEventManager.fireEvent(VelocityEventManager.java:137) ~[velocity.jar:1.1.4]
at com.velocitypowered.proxy.plugin.VelocityEventManager.lambda$fire$1(VelocityEventManager.java:119) ~[velocity.jar:1.1.4]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.lang.ClassNotFoundException: net.md_5.bungee.connection.InitialHandler
at net.md_5.bungee.api.plugin.PluginClassloader.loadClass0(PluginClassloader.java:85) ~[?:?]
at net.md_5.bungee.api.plugin.PluginClassloader.loadClass(PluginClassloader.java:67) ~[?:?]
at java.lang.ClassLoa
|
True
|
FastLoginBungee incompatible - #### Used Version
<!-- The full program version like it is printed in the log. Please check if there are any newer development builds! Can usually be found at https://ci.minebench.de -->
Latest build (#9)
#### Config
<!-- The full config file -->
```yaml
[Put the config here]
```
I changed nothing about the config file; it's default.
#### Environment description
<!-- Information like the operating system and language as well as full server version if you are using a plugin-->
OS: Ubuntu 20.04.2 LTS (Kernel 5.4 LTS)
Java: 11 LTS
Proxy: Velocity
#### Full Log
<!-- The full log file, especially important if you have a stack trace -->
```
[Your log here]
```
There are errors:
[22:55:15 WARN] [Snap]: Exception encountered when loading plugin: FastLogin
java.lang.NoClassDefFoundError: net/md_5/bungee/connection/InitialHandler
at com.github.games647.fastlogin.bungee.listener.ConnectListener.<clinit>(ConnectListener.java:56) ~[?:?]
at com.github.games647.fastlogin.bungee.FastLoginBungee.onEnable(FastLoginBungee.java:60) ~[?:?]
at net.md_5.bungee.api.plugin.PluginManager.enablePlugins(PluginManager.java:300) ~[?:?]
at de.themoep.snap.SnapBungeeAdapter.loadPlugins(SnapBungeeAdapter.java:121) ~[?:?]
at de.themoep.snap.Snap.onProxyInitialization(Snap.java:59) ~[?:?]
at net.kyori.event.asm.generated.9fa8fed8c8.Snap-onProxyInitialization-ProxyInitializeEvent-11.invoke(Unknown Source) ~[?:?]
at net.kyori.event.method.SimpleMethodSubscriptionAdapter$MethodEventSubscriber.invoke(SimpleMethodSubscriptionAdapter.java:148) ~[velocity.jar:1.1.4]
at net.kyori.event.SimpleEventBus.post(SimpleEventBus.java:107) ~[velocity.jar:1.1.4]
at com.velocitypowered.proxy.plugin.VelocityEventManager.fireEvent(VelocityEventManager.java:137) ~[velocity.jar:1.1.4]
at com.velocitypowered.proxy.plugin.VelocityEventManager.lambda$fire$1(VelocityEventManager.java:119) ~[velocity.jar:1.1.4]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.lang.ClassNotFoundException: net.md_5.bungee.connection.InitialHandler
at net.md_5.bungee.api.plugin.PluginClassloader.loadClass0(PluginClassloader.java:85) ~[?:?]
at net.md_5.bungee.api.plugin.PluginClassloader.loadClass(PluginClassloader.java:67) ~[?:?]
at java.lang.ClassLoa
|
non_process
|
fastloginbungee incompatible used version latest build config yaml i changed nothing about config file it s default environment description os ubuntu lts kernel lts java lts proxy velocity full log there is errors exception encountered when loading plugin fastlogin java lang noclassdeffounderror net md bungee connection initialhandler at com github fastlogin bungee listener connectlistener connectlistener java at com github fastlogin bungee fastloginbungee onenable fastloginbungee java at net md bungee api plugin pluginmanager enableplugins pluginmanager java at de themoep snap snapbungeeadapter loadplugins snapbungeeadapter java at de themoep snap snap onproxyinitialization snap java at net kyori event asm generated snap onproxyinitialization proxyinitializeevent invoke unknown source at net kyori event method simplemethodsubscriptionadapter methodeventsubscriber invoke simplemethodsubscriptionadapter java at net kyori event simpleeventbus post simpleeventbus java at com velocitypowered proxy plugin velocityeventmanager fireevent velocityeventmanager java at com velocitypowered proxy plugin velocityeventmanager lambda fire velocityeventmanager java at java util concurrent completablefuture asyncsupply run completablefuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang classnotfoundexception net md bungee connection initialhandler at net md bungee api plugin pluginclassloader pluginclassloader java at net md bungee api plugin pluginclassloader loadclass pluginclassloader java at java lang classloa
| 0
|
5,111
| 7,886,308,786
|
IssuesEvent
|
2018-06-27 14:54:42
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
opened
|
Group scheduler update TransportTasks into a single task
|
topic/Daemon topic/JobCalculationAndProcess topic/Schedulers type/feature request
|
Currently, each `JobProcess` will schedule a `TransportTask` when it needs to update its scheduler state. These tasks will already be grouped and processed in batch when the requests need to be made over a non-local transport, to avoid spamming transport connections. However, the scheduler calls are still made individually and some schedulers may also limit the amount of requests within a certain time period. Would there be a way to batch process these `UpdateSchedulerTasks` into a single scheduler request just like the transport tasks in general?
|
1.0
|
Group scheduler update TransportTasks into a single task - Currently, each `JobProcess` will schedule a `TransportTask` when it needs to update its scheduler state. These tasks will already be grouped and processed in batch when the requests need to be made over a non-local transport, to avoid spamming transport connections. However, the scheduler calls are still made individually and some schedulers may also limit the amount of requests within a certain time period. Would there be a way to batch process these `UpdateSchedulerTasks` into a single scheduler request just like the transport tasks in general?
|
process
|
group scheduler update transporttasks into a single task currently each jobprocess will schedule a transporttask when it needs to update its scheduler state these tasks will already be grouped and processed in batch when the requests need to be made over a non local transport to avoid spamming transport connections however the scheduler calls are still made individually and some schedulers may also limit the amount of requests within a certain time period would there be a way to batch process these updateschedulertasks into a single scheduler request just like the transport tasks in general
| 1
|
5,785
| 8,632,875,678
|
IssuesEvent
|
2018-11-22 12:09:08
|
kiwicom/orbit-components
|
https://api.github.com/repos/kiwicom/orbit-components
|
closed
|
<Breadcrumbs /> component
|
Accessibility Enhancement Processing
|
## Description
The Breadcrumbs component is a navigational component placed at the top of the page to show the hierarchy of the current page.
## Visual style

Zeplin: https://zpl.io/anYQLWO
### Interactions
- Link will underline itself when hovered
### Additional information
- Last item is also URL
## Accessibility
- https://www.w3.org/TR/wai-aria-practices/examples/breadcrumb/index.html
- https://gist.github.com/jonathantneal/4037764
## Functional specs
- Not sure how it works with React, but it would be great to have correct semantics for this component, so it shows nicely in Google. More info can be found or [here](https://coderwall.com/p/p0nvjw/creating-a-semantic-breadcrumb-using-html5-microdata) or [here](https://www.npmjs.com/package/react-semantic-breadcrumbs).
**Google preview**
<img width="633" alt="breadcrumbs-semantic" src="https://user-images.githubusercontent.com/6851247/47953082-8a0d5800-df78-11e8-8c7f-bd1534984ec8.png">
|
1.0
|
<Breadcrumbs /> component - ## Description
The Breadcrumbs component is a navigational component placed at the top of the page to show the hierarchy of the current page.
## Visual style

Zeplin: https://zpl.io/anYQLWO
### Interactions
- Link will underline itself when hovered
### Additional information
- Last item is also URL
## Accessibility
- https://www.w3.org/TR/wai-aria-practices/examples/breadcrumb/index.html
- https://gist.github.com/jonathantneal/4037764
## Functional specs
- Not sure how it works with React, but it would be great to have correct semantics for this component, so it shows nicely in Google. More info can be found or [here](https://coderwall.com/p/p0nvjw/creating-a-semantic-breadcrumb-using-html5-microdata) or [here](https://www.npmjs.com/package/react-semantic-breadcrumbs).
**Google preview**
<img width="633" alt="breadcrumbs-semantic" src="https://user-images.githubusercontent.com/6851247/47953082-8a0d5800-df78-11e8-8c7f-bd1534984ec8.png">
|
process
|
component description breadcrumbs component is navigational component placed on top of the page for showing hierarchy of current page visual style zeplin interactions link will underline itself when hovered additional information last item is also url accessibility functional specs not sure how it works with react but it would be great to have correct semantics for this component so it shows nicely in google more info can be found or or google preview img width alt breadcrumbs semantic src
| 1
|
2,993
| 5,969,943,221
|
IssuesEvent
|
2017-05-30 21:24:27
|
IIIF/iiif.io
|
https://api.github.com/repos/IIIF/iiif.io
|
opened
|
How do we close stories issues?
|
process
|
Related to #843, not only how do we track stories movement into specs ... but also how we decide that a story is not in scope, or otherwise not going to be solved, or when it is solved, who can/should close the story issue?
We have people not understanding the relationship and commenting on stories that have been already long since solved: https://github.com/IIIF/iiif-stories/issues/9#issuecomment-305011591
This is a disservice to both our community and to ourselves.
|
1.0
|
How do we close stories issues? - Related to #843, not only how do we track stories movement into specs ... but also how we decide that a story is not in scope, or otherwise not going to be solved, or when it is solved, who can/should close the story issue?
We have people not understanding the relationship and commenting on stories that have been already long since solved: https://github.com/IIIF/iiif-stories/issues/9#issuecomment-305011591
This is a disservice to both our community and to ourselves.
|
process
|
how do we close stories issues related to not only how do we track stories movement into specs but also how we decide that a story is not in scope or otherwise not going to be solved or when it is solved who can should close the story issue we have people not understanding the relationship and commenting on stories that have been already long since solved this is a disservice to both our community and to ourselves
| 1
|
15,840
| 20,028,185,949
|
IssuesEvent
|
2022-02-02 00:26:38
|
googleapis/java-translate
|
https://api.github.com/repos/googleapis/java-translate
|
closed
|
com.example.translate.BatchTranslateTextWithModelTests: testBatchTranslateTextWithModel failed
|
priority: p2 type: process api: translate flakybot: issue flakybot: flaky
|
Note: #565 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 8b482c5500ff7b3d27005e66c04088c6f71493dc
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/a35aad5b-d869-4d93-a7e8-5cdca865ddc3), [Sponge](http://sponge2/a35aad5b-d869-4d93-a7e8-5cdca865ddc3)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Output dir is in use by another batch translation job. output_uri_prefix: gs://java-docs-samples-testing/BATCH_TRANSLATION_WITH_MODEL_OUTPUT_19e59d0e-6292-4c15-a797-ea00600ef210/
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:566)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:445)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:95)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:68)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:133)
at com.example.translate.BatchTranslateTextWithModel.batchTranslateTextWithModel(BatchTranslateTextWithModel.java:107)
at com.example.translate.BatchTranslateTextWithModelTests.testBatchTranslateTextWithModel(BatchTranslateTextWithModelTests.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at com.google.cloud.testing.junit4.MultipleAttemptsRule$1.evaluate(MultipleAttemptsRule.java:94)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Output dir is in use by another batch translation job. output_uri_prefix: gs://java-docs-samples-testing/BATCH_TRANSLATION_WITH_MODEL_OUTPUT_19e59d0e-6292-4c15-a797-ea00600ef210/
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:49)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Output dir is in use by another batch translation job. output_uri_prefix: gs://java-docs-samples-testing/BATCH_TRANSLATION_WITH_MODEL_OUTPUT_19e59d0e-6292-4c15-a797-ea00600ef210/
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details>
|
1.0
|
com.example.translate.BatchTranslateTextWithModelTests: testBatchTranslateTextWithModel failed - Note: #565 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 8b482c5500ff7b3d27005e66c04088c6f71493dc
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/a35aad5b-d869-4d93-a7e8-5cdca865ddc3), [Sponge](http://sponge2/a35aad5b-d869-4d93-a7e8-5cdca865ddc3)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Output dir is in use by another batch translation job. output_uri_prefix: gs://java-docs-samples-testing/BATCH_TRANSLATION_WITH_MODEL_OUTPUT_19e59d0e-6292-4c15-a797-ea00600ef210/
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:566)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:445)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:95)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:68)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:133)
at com.example.translate.BatchTranslateTextWithModel.batchTranslateTextWithModel(BatchTranslateTextWithModel.java:107)
at com.example.translate.BatchTranslateTextWithModelTests.testBatchTranslateTextWithModel(BatchTranslateTextWithModelTests.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at com.google.cloud.testing.junit4.MultipleAttemptsRule$1.evaluate(MultipleAttemptsRule.java:94)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Output dir is in use by another batch translation job. output_uri_prefix: gs://java-docs-samples-testing/BATCH_TRANSLATION_WITH_MODEL_OUTPUT_19e59d0e-6292-4c15-a797-ea00600ef210/
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:49)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Output dir is in use by another batch translation job. output_uri_prefix: gs://java-docs-samples-testing/BATCH_TRANSLATION_WITH_MODEL_OUTPUT_19e59d0e-6292-4c15-a797-ea00600ef210/
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details>
|
process
|
com example translate batchtranslatetextwithmodeltests testbatchtranslatetextwithmodel failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output java util concurrent executionexception com google api gax rpc invalidargumentexception io grpc statusruntimeexception invalid argument output dir is in use by another batch translation job output uri prefix gs java docs samples testing batch translation with model output at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com example translate batchtranslatetextwithmodel batchtranslatetextwithmodel batchtranslatetextwithmodel java at com example translate batchtranslatetextwithmodeltests testbatchtranslatetextwithmodel batchtranslatetextwithmodeltests java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at com google cloud testing multipleattemptsrule evaluate multipleattemptsrule java at org junit runners 
parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api gax rpc invalidargumentexception io grpc statusruntimeexception invalid argument output dir is in use by another batch translation job output uri prefix gs java docs samples testing batch translation with model output at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor 
execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception invalid argument output dir is in use by another batch translation job output uri prefix gs java docs samples testing batch translation with model output at io grpc status asruntimeexception status java more
binary_label: 1

Unnamed: 0: 12,781
id: 15,164,687,899
type: IssuesEvent
created_at: 2021-02-12 14:05:16
repo: panther-labs/panther
repo_url: https://api.github.com/repos/panther-labs/panther
action: closed
title: Backend: Allow users to specify a role name for the log processing role in s3 sources
labels: story team:data processing
body: See the epic ticket #2355 for more info.
index: 1.0
text_combine: Backend: Allow users to specify a role name for the log processing role in s3 sources - See the epic ticket #2355 for more info.
label: process
text: backend allow users to specify a role name for the log processing role in sources see the epic ticket for more info
binary_label: 1

Unnamed: 0: 12,493
id: 14,960,378,003
type: IssuesEvent
created_at: 2021-01-27 05:38:31
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [iOS] Enrollment and Onboarding status is not updated in PM post deleting account in mobile
labels: Blocker Bug P1 Process: Fixed Process: Tested dev iOS
body:
Steps:
1. Signup/login into iOS mobile app
2. Join any open/closed study successfully
3. Delete the accounnt
4. Observe the Enrollment and Onboarding statuses
Actual: Enrollment and Onboarding status is not updated in PM post deleting account in mobile
Expected: Enrollment and Onboarding status should be updated in PM post deleting account in mobile
index: 2.0
text_combine:
[iOS] Enrollment and Onboarding status is not updated in PM post deleting account in mobile - Steps:
1. Signup/login into iOS mobile app
2. Join any open/closed study successfully
3. Delete the accounnt
4. Observe the Enrollment and Onboarding statuses
Actual: Enrollment and Onboarding status is not updated in PM post deleting account in mobile
Expected: Enrollment and Onboarding status should be updated in PM post deleting account in mobile
label: process
text: enrollment and onboarding status is not updated in pm post deleting account in mobile steps signup login into ios mobile app join any open closed study successfully delete the accounnt observe the enrollment and onboarding statuses actual enrollment and onboarding status is not updated in pm post deleting account in mobile expected enrollment and onboarding status should be updated in pm post deleting account in mobile
binary_label: 1

Unnamed: 0: 22,330
id: 30,913,867,192
type: IssuesEvent
created_at: 2023-08-05 03:11:31
repo: h4sh5/pypi-auto-scanner
repo_url: https://api.github.com/repos/h4sh5/pypi-auto-scanner
action: opened
title: pih 1.48036 has 2 GuardDog issues
labels: guarddog typosquatting silent-process-execution
body:
https://pypi.org/project/pih
https://inspector.pypi.io/project/pih
```{
"dependency": "pih",
"version": "1.48036",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pip, pid",
"silent-process-execution": [
{
"location": "pih-1.48036/pih/tools.py:781",
"code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpf3doesm5/pih"
}
}```
index: 1.0
text_combine:
pih 1.48036 has 2 GuardDog issues - https://pypi.org/project/pih
https://inspector.pypi.io/project/pih
```{
"dependency": "pih",
"version": "1.48036",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pip, pid",
"silent-process-execution": [
{
"location": "pih-1.48036/pih/tools.py:781",
"code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpf3doesm5/pih"
}
}```
label: process
text: pih has guarddog issues dependency pih version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pip pid silent process execution location pih pih tools py code result subprocess run command stdin subprocess devnull stdout subprocess devnull stderr subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp pih
binary_label: 1

Unnamed: 0: 9,383
id: 12,391,387,878
type: IssuesEvent
created_at: 2020-05-20 12:24:47
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `GreatestTime` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description
Port the scalar function `GreatestTime` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `GreatestTime` from TiDB -
## Description
Port the scalar function `GreatestTime` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text: ucp migrate scalar function greatesttime from tidb description port the scalar function greatesttime from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1
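
Each record above flattens the same fifteen columns listed in the dataset header (repo, action, title, labels, body, label, binary_label, and so on). As a minimal, hypothetical sketch of working with rows of this shape — the two sample dictionaries below copy values from records shown above, while the helper name and the in-memory list are illustrative assumptions, not part of the dataset or its tooling:

```python
# Sketch: filter dataset-style issue records by their binary label.
# The sample rows copy fields from records shown above; loading the
# real dataset (CSV/Parquet export, `datasets` library, etc.) is
# deliberately left out, since the storage format is not shown here.

records = [
    {
        "repo": "panther-labs/panther",
        "action": "closed",
        "title": "Backend: Allow users to specify a role name for the "
                 "log processing role in s3 sources",
        "label": "process",
        "binary_label": 1,
    },
    {
        "repo": "tikv/tikv",
        "action": "closed",
        "title": "UCP: Migrate scalar function `GreatestTime` from TiDB",
        "label": "process",
        "binary_label": 1,
    },
]


def filter_by_binary_label(rows, value):
    """Return only the rows whose binary_label equals `value`."""
    return [r for r in rows if r["binary_label"] == value]


process_rows = filter_by_binary_label(records, 1)
print(len(process_rows))  # prints 2: both sample rows carry binary_label 1
```

In the rows shown on this page, `binary_label` is 1 exactly when `label` is `process`, so a filter like this splits the dataset along its annotation.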