Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
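The header above is the dataset's schema: per-row metadata (`id`, `repo`, `action`, `labels`), the raw issue `title` and `body`, a `text_combine` field concatenating title and body, a lowercased processed `text` field, and the `label`/`binary_label` targets. As a quick, hedged illustration of how such a dump might be loaded and filtered, here is a minimal pandas sketch; the file name `issues.csv` is an assumption, but the column names are taken from the header row.

```python
import pandas as pd

# "issues.csv" is a hypothetical dump of the rows below;
# the column names come from the schema header above.
df = pd.read_csv("issues.csv")

# Keep only the rows tagged as process-related (binary_label == 1).
process_rows = df[df["binary_label"] == 1]

# Inspect which repos and label strings the positive class covers.
print(process_rows[["repo", "labels", "title"]].head())
```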
164,904 | 12,821,027,453 | IssuesEvent | 2020-07-06 07:15:35 | ripe-tech/ripe-components-vue | https://api.github.com/repos/ripe-tech/ripe-components-vue | closed | Add label on empty dropdown | enhancement p-low unit-testing | ## Description
When a `dropdown` is empty, show a `messageEmpty` message to the user if it's defined (this new prop is optional).
This should be optional, and if it does not exist and the dropdown is empty, the dropdown contents should not be visible.
| 1.0 | Add label on empty dropdown - ## Description
When a `dropdown` is empty, show a `messageEmpty` message to the user if it's defined (this new prop is optional).
This should be optional, and if it does not exist and the dropdown is empty, the dropdown contents should not be visible.
| non_process | add label on empty dropdown description when a dropdown is empty show a messageempty message to the user if it s defined this new prop is optional this should be optional and if it does not exist and the dropdown is empty the dropdown contents should not be visible | 0 |
21,891 | 30,341,426,901 | IssuesEvent | 2023-07-11 12:59:30 | kitspace/kitspace-v2 | https://api.github.com/repos/kitspace/kitspace-v2 | closed | Storage leak in the processor | bug processor | If a repo is already processed, no project gets added to the queue, so no clean-up job gets created for this repo.
So if the processor gets restarted (or tries processing a repo that has been processed for any other reason), the repo doesn't get cleaned up properly. | 1.0 | Storage leak in the processor - If a repo is already processed, no project gets added to the queue, so no clean-up job gets created for this repo.
So if the processor gets restarted (or tries processing a repo that has been processed for any other reason), the repo doesn't get cleaned up properly. | process | storage leak in the processor if a repo is already processed no project gets added to the queue so no clean up job gets created for this repo so if the processor gets restarted or tries processing a repo that has been processed for any other reason the repo doesn t get cleaned up properly | 1 |
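The fix implied by this report can be sketched abstractly: schedule the clean-up job unconditionally, even when processing is skipped because the repo was already handled. The sketch below is illustrative only; `JobQueue`, the job names, and `is_processed` are hypothetical stand-ins, not kitspace-v2's actual API.

```python
from collections import deque

class JobQueue:
    """Tiny in-memory stand-in for a real job queue."""
    def __init__(self):
        self.jobs = deque()

    def enqueue(self, kind, repo):
        self.jobs.append((kind, repo))

def handle_repo(repo, queue, is_processed):
    # Only schedule processing for repos that have not been processed yet...
    if not is_processed(repo):
        queue.enqueue("process", repo)
    # ...but schedule clean-up unconditionally, so a processor restart
    # (which sees the repo as already processed) still gets a clean-up job.
    queue.enqueue("cleanup", repo)

queue = JobQueue()
handle_repo("example/repo", queue, is_processed=lambda repo: True)
print(queue.jobs)  # deque([('cleanup', 'example/repo')])
```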
21,414 | 29,359,589,742 | IssuesEvent | 2023-05-28 00:36:29 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remote] SAP Architect at Coodesh | SALVADOR REDES FULL-STACK REQUISITOS SAP REMOTO PROCESSOS GITHUB UMA APIs Stale | ## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will get access to the complete information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/sap-architech-124813716?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>The company <strong>Tech Social </strong>is looking for a <strong>SAP Architect </strong>to join the team! <br></p>
<p>We are a Technology Solutions company that seeks to turn our clients' data and information into results. We evolved from Business Management consulting, combining the multiple skills and experience of our professionals with technological innovations.</p>
<p>Tech is an innovative company! We develop and embed intelligence in software, apps, RPAs, APIs, and other digital solutions. Our mission is to simplify our clients' processes through technology and to structure large databases so that we can mine and refine the best information for companies.</p>
## Techsocial:
<p>We are a Technology Solutions company that seeks to turn our clients' data and information into results. We evolved from Business Management consulting, combining the multiple skills and experience of our professionals with technological innovations.</p>
<p>Tech is an innovative company! We develop and embed intelligence in software, apps, RPAs, APIs, and other digital solutions. Our mission is to simplify our clients' processes through technology and to structure large databases so that we can mine and refine the best information for companies.</p><a href='https://coodesh.com/empresas/techsocial-tecnologia-e-consultoria-ltda'>See more on the site</a>
## Skills:
- API
- Infrastructure/Network architecture
- SAP
## Location:
100% Remote
## Requirements:
- Knowledge of SAP integration using the CPI middleware;
- Experience with ABAP development;
## Benefits:
- SAP ECC integration in CPI
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [SAP Architect at Techsocial](https://coodesh.com/vagas/sap-architech-124813716?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Category
Full-Stack | 1.0 | [Remote] SAP Architect at Coodesh - ## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will get access to the complete information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/sap-architech-124813716?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>The company <strong>Tech Social </strong>is looking for a <strong>SAP Architect </strong>to join the team! <br></p>
<p>We are a Technology Solutions company that seeks to turn our clients' data and information into results. We evolved from Business Management consulting, combining the multiple skills and experience of our professionals with technological innovations.</p>
<p>Tech is an innovative company! We develop and embed intelligence in software, apps, RPAs, APIs, and other digital solutions. Our mission is to simplify our clients' processes through technology and to structure large databases so that we can mine and refine the best information for companies.</p>
## Techsocial:
<p>We are a Technology Solutions company that seeks to turn our clients' data and information into results. We evolved from Business Management consulting, combining the multiple skills and experience of our professionals with technological innovations.</p>
<p>Tech is an innovative company! We develop and embed intelligence in software, apps, RPAs, APIs, and other digital solutions. Our mission is to simplify our clients' processes through technology and to structure large databases so that we can mine and refine the best information for companies.</p><a href='https://coodesh.com/empresas/techsocial-tecnologia-e-consultoria-ltda'>See more on the site</a>
## Skills:
- API
- Infrastructure/Network architecture
- SAP
## Location:
100% Remote
## Requirements:
- Knowledge of SAP integration using the CPI middleware;
- Experience with ABAP development;
## Benefits:
- SAP ECC integration in CPI
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [SAP Architect at Techsocial](https://coodesh.com/vagas/sap-architech-124813716?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Category
Full-Stack | process | sap architect at coodesh job description this is a job opening from a partner of the coodesh platform by applying you will get access to the complete information about the company and its benefits watch for the redirect that will take you to a url with the personalized application pop up 👋 the company tech social is looking for a sap architect to join the team we are a technology solutions company that seeks to turn our clients data and information into results we evolved from business management consulting combining the multiple skills and experience of our professionals with technological innovations tech is an innovative company we develop and embed intelligence in software apps rpas apis and other digital solutions our mission is to simplify our clients processes through technology and to structure large databases so that we can mine and refine the best information for companies techsocial we are a technology solutions company that seeks to turn our clients data and information into results we evolved from business management consulting combining the multiple skills and experience of our professionals with technological innovations tech is an innovative company we develop and embed intelligence in software apps rpas apis and other digital solutions our mission is to simplify our clients processes through technology and to structure large databases so that we can mine and refine the best information for companies skills api infrastructure network architecture sap location remote requirements knowledge of sap integration using the cpi middleware experience with abap development benefits sap ecc integration in cpi how to apply apply exclusively through the coodesh platform at the following link after applying via the coodesh platform and validating your login you will be able to follow and receive all interactions of the process there use the pedir feedback request feedback option between one stage and the next of the job you applied to this will notify the recruiter responsible for the process at the company labels allocation remote category full stack | 1 |
8,109 | 11,300,957,290 | IssuesEvent | 2020-01-17 14:40:55 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | too many appressorium terms | multi-species process | GO:0075039 establishment of turgor in appressorium 1 annotations
GO:0075021 cAMP-mediated activation of appressorium formation
GO:0075016 appressorium formation on or near host 1 annotations
GO:0075003 adhesion of symbiont appressorium to host
GO:0075022 ethylene-mediated activation of appressorium formation
GO:0075023 MAPK-mediated regulation of appressorium formation
GO:0075040 regulation of establishment of turgor in appressorium
GO:0075043 maintenance of turgor in appressorium by melanization
GO:0075035 maturation of appressorium on or near host 1 annotations
GO:0075024 phospholipase C-mediated activation of appressorium formation
GO:0075025 initiation of appressorium on or near host
GO:0075020 calcium or calmodulin-mediated activation of appressorium formation
GO:0075017 regulation of appressorium formation on or near host
GO:0075041 positive regulation of establishment of turgor in appressorium
GO:0075042 negative regulation of establishment of turgor in appressorium
I am annotating a MAP kinase which regulates
appressorium formation
however I don't want to use
GO:0075023 MAPK-mediated regulation of appressorium formation
I want to do just
regulation of appressorium formation
I think the other terms
GO:0075024 phospholipase C-mediated activation of appressorium formation
GO:0075020 calcium or calmodulin-mediated activation of appressorium formation
should also go because they are just other MFs in the signalling pathway.
also merge
GO:0075025 initiation of appressorium on or near host
into positive regulation
| 1.0 | too many appressorium terms - GO:0075039 establishment of turgor in appressorium 1 annotations
GO:0075021 cAMP-mediated activation of appressorium formation
GO:0075016 appressorium formation on or near host 1 annotations
GO:0075003 adhesion of symbiont appressorium to host
GO:0075022 ethylene-mediated activation of appressorium formation
GO:0075023 MAPK-mediated regulation of appressorium formation
GO:0075040 regulation of establishment of turgor in appressorium
GO:0075043 maintenance of turgor in appressorium by melanization
GO:0075035 maturation of appressorium on or near host 1 annotations
GO:0075024 phospholipase C-mediated activation of appressorium formation
GO:0075025 initiation of appressorium on or near host
GO:0075020 calcium or calmodulin-mediated activation of appressorium formation
GO:0075017 regulation of appressorium formation on or near host
GO:0075041 positive regulation of establishment of turgor in appressorium
GO:0075042 negative regulation of establishment of turgor in appressorium
I am annotating a MAP kinase which regulates
appressorium formation
however I don't want to use
GO:0075023 MAPK-mediated regulation of appressorium formation
I want to do just
regulation of appressorium formation
I think the other terms
GO:0075024 phospholipase C-mediated activation of appressorium formation
GO:0075020 calcium or calmodulin-mediated activation of appressorium formation
should also go because they are just other MFs in the signalling pathway.
also merge
GO:0075025 initiation of appressorium on or near host
into positive regulation
| process | too many appressorium terms go establishment of turgor in appressorium annotations go camp mediated activation of appressorium formation go appressorium formation on or near host annotations go adhesion of symbiont appressorium to host go ethylene mediated activation of appressorium formation go mapk mediated regulation of appressorium formation go regulation of establishment of turgor in appressorium go maintenance of turgor in appressorium by melanization go maturation of appressorium on or near host annotations go phospholipase c mediated activation of appressorium formation go initiation of appressorium on or near host go calcium or calmodulin mediated activation of appressorium formation go regulation of appressorium formation on or near host go positive regulation of establishment of turgor in appressorium go negative regulation of establishment of turgor in appressorium i am annotating a map kinase which regulates appressorium formation however i don t want to use go mapk mediated regulation of appressorium formation i want to do just regulation of appressorium formation i think the other terms go phospholipase c mediated activation of appressorium formation go calcium or calmodulin mediated activation of appressorium formation should also go because they are just other mfs in the signalling pathway also merge go initiation of appressorium on or near host into positive regulation | 1 |
2,889 | 5,870,406,222 | IssuesEvent | 2017-05-15 04:28:34 | inasafe/inasafe-realtime | https://api.github.com/repos/inasafe/inasafe-realtime | closed | Fix inotify running out of resource | earthquake realtime processor | After BMKG changed their settings to follow our ipaddress, shakemaps are pushed but it didn't trigger the new shakemap monitoring service (so they get processed automatically).
I already checked using an SFTP connection and tried to put new shakemaps, and it does work.
So, I'm still figuring out why it didn't get processed before this. | 1.0 | Fix inotify running out of resource - After BMKG changed their settings to follow our ipaddress, shakemaps are pushed but it didn't trigger the new shakemap monitoring service (so they get processed automatically).
I already checked using an SFTP connection and tried to put new shakemaps, and it does work.
So, I'm still figuring out why it didn't get processed before this. | process | fix inotify running out of resource after bmkg changed their settings to follow our ipaddress shakemaps are pushed but it didn t trigger the new shakemap monitoring service so they get processed automatically i already checked using an sftp connection and tried to put new shakemaps and it does work so i m still figuring out why it didn t get processed before this | 1 |
15,310 | 19,403,977,289 | IssuesEvent | 2021-12-19 17:26:57 | MasterPlayer/adxl345-sv | https://api.github.com/repos/MasterPlayer/adxl345-sv | closed | add support for single requests from ps to adxl devices for reading | enhancement hardware process software process | it allows interrupt processing to be performed | 2.0 | add support for single requests from ps to adxl devices for reading - it allows interrupt processing to be performed | process | add support for single requests from ps to adxl devices for reading it allows interrupt processing to be performed | 1 |
147,675 | 23,250,946,455 | IssuesEvent | 2022-08-04 03:37:09 | MozillaFoundation/Design | https://api.github.com/repos/MozillaFoundation/Design | closed | [RegretsReporter] Design Discovery Phase | design YouTube Regrets | **Preparation**
- [x] Watch video recording
- [x] Review last years work
- [x] Review roadmap
- [x] Start setting up docs for moodboard and concepts | 1.0 | [RegretsReporter] Design Discovery Phase - **Preparation**
- [x] Watch video recording
- [x] Review last years work
- [x] Review roadmap
- [x] Start setting up docs for moodboard and concepts | non_process | design discovery phase preparation watch video recording review last years work review roadmap start setting up docs for moodboard and concepts | 0 |
371,538 | 25,955,350,381 | IssuesEvent | 2022-12-18 06:05:21 | featbit/featbit | https://api.github.com/repos/featbit/featbit | opened | Performance reporting | documentation | ### Discussed in https://github.com/orgs/featbit/discussions/140
<div type='discussions-op-text'>
<sup>Originally posted by **cosmic-flood** December 5, 2022</sup>

</div> | 1.0 | Performance reporting - ### Discussed in https://github.com/orgs/featbit/discussions/140
<div type='discussions-op-text'>
<sup>Originally posted by **cosmic-flood** December 5, 2022</sup>

</div> | non_process | performance reporting discussed in originally posted by cosmic flood december | 0 |
3,107 | 6,123,277,799 | IssuesEvent | 2017-06-23 03:52:05 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Tests under: System.ServiceProcess.Tests.SafeServiceControllerTests failed with "System.InvalidOperationException" | area-System.ServiceProcess os-windows-uwp test-run-uwp-coreclr | Opened on behalf of @Jiayili1
The test `System.ServiceProcess.Tests.SafeServiceControllerTests/EnumerateDeviceService` has failed.
System.InvalidOperationException : Cannot open Service Control Manager on computer '.'. This operation might require other privileges.\r
---- System.ComponentModel.Win32Exception : Access is denied
Stack Trace:
at System.ServiceProcess.ServiceController.GetDataBaseHandleWithAccess(String machineName, Int32 serviceControlManagerAccess)
at System.ServiceProcess.ServiceController.GetDataBaseHandleWithEnumerateAccess(String machineName)
at System.ServiceProcess.ServiceController.GetServices[T](String machineName, Int32 serviceType, String group, Func`2 selector)
at System.ServiceProcess.ServiceController.GetServicesOfType(String machineName, Int32 serviceType)
at System.ServiceProcess.ServiceController.GetDevices(String machineName)
at System.ServiceProcess.ServiceController.GetDevices()
at System.ServiceProcess.Tests.SafeServiceControllerTests.EnumerateDeviceService()
----- Inner Stack Trace -----
Build : Master - 20170407.01 (UWP F5 Tests)
Failing configurations:
- Windows.10.Amd64
- x64-Debug
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20170407.01/workItem/System.ServiceProcess.ServiceController.Tests | 1.0 | Tests under: System.ServiceProcess.Tests.SafeServiceControllerTests failed with "System.InvalidOperationException" - Opened on behalf of @Jiayili1
The test `System.ServiceProcess.Tests.SafeServiceControllerTests/EnumerateDeviceService` has failed.
System.InvalidOperationException : Cannot open Service Control Manager on computer '.'. This operation might require other privileges.\r
---- System.ComponentModel.Win32Exception : Access is denied
Stack Trace:
at System.ServiceProcess.ServiceController.GetDataBaseHandleWithAccess(String machineName, Int32 serviceControlManagerAccess)
at System.ServiceProcess.ServiceController.GetDataBaseHandleWithEnumerateAccess(String machineName)
at System.ServiceProcess.ServiceController.GetServices[T](String machineName, Int32 serviceType, String group, Func`2 selector)
at System.ServiceProcess.ServiceController.GetServicesOfType(String machineName, Int32 serviceType)
at System.ServiceProcess.ServiceController.GetDevices(String machineName)
at System.ServiceProcess.ServiceController.GetDevices()
at System.ServiceProcess.Tests.SafeServiceControllerTests.EnumerateDeviceService()
----- Inner Stack Trace -----
Build : Master - 20170407.01 (UWP F5 Tests)
Failing configurations:
- Windows.10.Amd64
- x64-Debug
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20170407.01/workItem/System.ServiceProcess.ServiceController.Tests | process | tests under system serviceprocess tests safeservicecontrollertests failed with system invalidoperationexception opened on behalf of the test system serviceprocess tests safeservicecontrollertests enumeratedeviceservice has failed system invalidoperationexception cannot open service control manager on computer this operation might require other privileges r system componentmodel access is denied stack trace at system serviceprocess servicecontroller getdatabasehandlewithaccess string machinename servicecontrolmanageraccess at system serviceprocess servicecontroller getdatabasehandlewithenumerateaccess string machinename at system serviceprocess servicecontroller getservices string machinename servicetype string group func selector at system serviceprocess servicecontroller getservicesoftype string machinename servicetype at system serviceprocess servicecontroller getdevices string machinename at system serviceprocess servicecontroller getdevices at system serviceprocess tests safeservicecontrollertests enumeratedeviceservice inner stack trace build master uwp tests failing configurations windows debug detail | 1 |
282,315 | 30,889,271,288 | IssuesEvent | 2023-08-04 02:29:09 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2019-20810 (Medium) detected in linux-stable-rtv4.1.33 | Mend: dependency security vulnerability | ## CVE-2019-20810 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/go7007/snd-go7007.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/go7007/snd-go7007.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
go7007_snd_init in drivers/media/usb/go7007/snd-go7007.c in the Linux kernel before 5.6 does not call snd_card_free for a failure path, which causes a memory leak, aka CID-9453264ef586.
<p>Publish Date: 2020-06-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-20810>CVE-2019-20810</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20810">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20810</a></p>
<p>Release Date: 2020-06-03</p>
<p>Fix Resolution: v5.6-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-20810 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2019-20810 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/go7007/snd-go7007.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/go7007/snd-go7007.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
go7007_snd_init in drivers/media/usb/go7007/snd-go7007.c in the Linux kernel before 5.6 does not call snd_card_free for a failure path, which causes a memory leak, aka CID-9453264ef586.
<p>Publish Date: 2020-06-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-20810>CVE-2019-20810</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20810">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20810</a></p>
<p>Release Date: 2020-06-03</p>
<p>Fix Resolution: v5.6-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers media usb snd c drivers media usb snd c vulnerability details snd init in drivers media usb snd c in the linux kernel before does not call snd card free for a failure path which causes a memory leak aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
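The pattern behind this CVE (an allocated sound card is not freed on an error path in `go7007_snd_init`) generalizes to any resource acquisition. The sketch below is a generic Python illustration of freeing the resource on the failure path; it is not the kernel's C code, and `Card`/`snd_init` are invented names for the illustration.

```python
class Card:
    """Stand-in resource, illustrating the clean-up-on-failure pattern."""
    def __init__(self):
        self.freed = False

    def free(self):
        self.freed = True

def snd_init(fail):
    card = Card()                 # resource acquired
    try:
        if fail:
            raise RuntimeError("device setup failed")
        return card               # success: the caller now owns the card
    except RuntimeError:
        card.free()               # failure path: free the card, no leak
        return None

assert snd_init(fail=False) is not None
assert snd_init(fail=True) is None
```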
136,928 | 20,019,934,718 | IssuesEvent | 2022-02-01 15:31:45 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Ban a user from a space who currently isn't in that space | A-Spaces X-Needs-Design T-Enhancement A-Moderation O-Occasional | ### Your use case
#### What would you like to do?
Ban a user from a space, even though they're not currently in that space.
#### Why would you like to do it?
Perhaps a misbehaving user leaves of their own accord, but a moderator would like to prevent them from returning, or a moderator would like to prevent a user in one community from coming to other communities.
#### How would you like to achieve it?
Unsure. Apparently for rooms this is typically done using the /ban command, which isn't accessible in spaces.
### Have you considered any alternatives?
No
### Additional context
This conversation in the spaces feedback room brought this up as a needed feature: https://matrix.to/#/!LPqOcoJQuGkweiTedq:riot.ovh/$g0jMZQia4AbcrJ8uFPPCxYY4C9wjVlMRvccVPp0EPRA?via=townsendandsmith.ml&via=matrix.org&via=envs.net | 1.0 | Ban a user from a space who currently isn't in that space - ### Your use case
#### What would you like to do?
Ban a user from a space, even though they're not currently in that space.
#### Why would you like to do it?
Perhaps a misbehaving user leaves of their own accord, but a moderator would like to prevent them from returning, or a moderator would like to prevent a user in one community from coming to other communities.
#### How would you like to achieve it?
Unsure. Apparently for rooms this is typically done using the /ban command, which isn't accessible in spaces.
### Have you considered any alternatives?
No
### Additional context
This conversation in the spaces feedback room brought this up as a needed feature: https://matrix.to/#/!LPqOcoJQuGkweiTedq:riot.ovh/$g0jMZQia4AbcrJ8uFPPCxYY4C9wjVlMRvccVPp0EPRA?via=townsendandsmith.ml&via=matrix.org&via=envs.net | non_process | ban a user from a space who currently isn t in that space your use case what would you like to do ban a user from a space even though they re not currently in that space why would you like to do it perhaps a misbehaving user leaves of their own accord but a moderator would like to prevent them from returning or a moderator would like to prevent a user in one community from coming to other communities how would you like to achieve it unsure apparently for rooms this is typically done using the ban command which isn t accessible in spaces have you considered any alternatives no additional context this conversation in the spaces feedback room brought this up as a needed feature | 0 |
7,627 | 10,729,815,926 | IssuesEvent | 2019-10-28 16:14:35 | googleapis/google-resumable-media-python | https://api.github.com/repos/googleapis/google-resumable-media-python | closed | Reimplement changes undone by #103 (always use raw response data) | type: process | Changes originally merged in #87 were undone in #103 in a way that can be opted in to, to avoid breaking usage by users on older client library versions. Particularly GCS | 1.0 | Reimplement changes undone by #103 (always use raw response data) - Changes originally merged in #87 were undone in #103 in a way that can be opted in to, to avoid breaking usage by users on older client library versions. Particularly GCS | process | reimplement changes undone by always use raw response data changes originally merged in were undone in in a way that can be opted in to to avoid breaking usage by users on older client library versions particularly gcs | 0 |
53,647 | 13,191,213,093 | IssuesEvent | 2020-08-13 11:41:47 | tarantool/tarantool | https://api.github.com/repos/tarantool/tarantool | closed | undefined symbol: curl_version | build feature | **Tarantool version**: 2.6.0-32-g6e11674dd
**Long story short**: tarantool bundles libcurl, but not entirely, and it's no good.
In the recent update of libcurl (807c7fa584f21ee955b2a14623d70f7510a3650d) its layout changed: the private function `Curl_version_init()`, which used to fill in the info structure, was eliminated. As a result, no symbols from `libcurl_la-version.o` remained in use, so it wasn't included in the tarantool binary, and the `curl_version` and `curl_version_info` symbols went missing.
According to [libcurl naming conventions](https://github.com/curl/curl/blob/34e5ad21d2cb98475acdbf7a3a6ea973d8c12249/docs/INTERNALS.md#library-symbols) all exported symbols are named as `curl_*`. I think tarantool should respect it and preserve those symbols.
**Steps to reproduce**:
```lua
ffi = require('ffi')
ffi.cdef('char *curl_version();')
print(ffi.string(ffi.C.curl_version())) -- ffi.string converts the returned char* into a Lua string
```
**Also useful**:
We've noticed the problem while working on static build. Here is a [test](https://github.com/tarantool/tarantool/commit/d7a94e4587159802cc295dd0a4a1e21c6c72b355#diff-250fb463d17f22a6c5abd5a4d757255b) which may help.
Related to #2971 | 1.0 | undefined symbol: curl_version - **Tarantool version**: 2.6.0-32-g6e11674dd
**Long story short**: tarantool bundles libcurl, but not entirely, and it's no good.
In the recent update of libcurl (807c7fa584f21ee955b2a14623d70f7510a3650d) its layout changed: the private function `Curl_version_init()`, which used to fill in the info structure, was eliminated. As a result, no symbols from `libcurl_la-version.o` remained in use, so it wasn't included in the tarantool binary, and the `curl_version` and `curl_version_info` symbols went missing.
According to [libcurl naming conventions](https://github.com/curl/curl/blob/34e5ad21d2cb98475acdbf7a3a6ea973d8c12249/docs/INTERNALS.md#library-symbols) all exported symbols are named as `curl_*`. I think tarantool should respect it and preserve those symbols.
**Steps to reproduce**:
```lua
ffi = require('ffi')
ffi.cdef('char *curl_version();')
print(ffi.string(ffi.C.curl_version())) -- ffi.string converts the returned char* into a Lua string
```
**Also useful**:
We've noticed the problem while working on static build. Here is a [test](https://github.com/tarantool/tarantool/commit/d7a94e4587159802cc295dd0a4a1e21c6c72b355#diff-250fb463d17f22a6c5abd5a4d757255b) which may help.
Related to #2971 | non_process | undefined symbol curl version tarantool version long story short tarantool bundles libcurl but not entirely and it s no good in the recent update of libcurl its layout has changed private function curl version init which used to fill in info structure was eliminated as a result no symbols for libcurl la version o remained used so it wasn t included in tarantool binary and curl version and curl version info symbols went missing according to all exported symbols are named as curl i think tarantool should respect it and preserve those symbols steps to reproduce lua ffi require ffi ffi cdef char curl version print ffi c curl version also useful we ve noticed the problem while working on static build here is a which may help related to | 0 |
7,451 | 10,559,123,893 | IssuesEvent | 2019-10-04 10:44:56 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | Add SUM operator for daily_statistics preprocessor function | eWaterCycle enhancement preprocessor | This is useful to compute e.g. daily precipitation flux from hourly data. | 1.0 | Add SUM operator for daily_statistics preprocessor function - This is useful to compute e.g. daily precipitation flux from hourly data. | process | add sum operator for daily statistics preprocessor function this is useful to compute e g daily precipitation flux from hourly data | 1 |
269,110 | 8,425,850,988 | IssuesEvent | 2018-10-16 04:58:24 | CS2113-AY1819S1-W12-4/main | https://api.github.com/repos/CS2113-AY1819S1-W12-4/main | opened | View quantities of drinks sold over a certain period | priority.high type.story | **User Story 11:**
As an accountant, I want to keep track of the quantities of each drink sold over specified periods so that I can inform the manager which items are selling well. | 1.0 | View quantities of drinks sold over a certain period - **User Story 11:**
As an accountant, I want to keep track of the quantities of each drink sold over specified periods so that I can inform the manager which items are selling well. | non_process | view quantities of drinks sold over a certain period user story as an accountant i want to keep track of the quantities of each drink sold over specified periods so that i can inform the manager which items are selling well | 0 |
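The aggregation this user story asks for is a simple group-by over sales records. Below is a hedged pandas sketch; the data and column names are invented for illustration and are not from any actual sales system.

```python
import pandas as pd

# Invented sample data, purely for illustration.
sales = pd.DataFrame({
    "drink": ["latte", "latte", "mocha", "mocha", "mocha"],
    "period": ["2018-10", "2018-11", "2018-10", "2018-10", "2018-11"],
    "quantity": [12, 9, 4, 6, 7],
})

# Quantity of each drink sold per period, so best sellers are easy to spot.
report = sales.groupby(["drink", "period"])["quantity"].sum().unstack(fill_value=0)
print(report)
```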
381,521 | 26,457,469,070 | IssuesEvent | 2023-01-16 15:14:43 | onflow/flow-emulator | https://api.github.com/repos/onflow/flow-emulator | closed | ReadMe: Deprecated emulator start command | Feedback Documentation | The ReadMe contains this command: `flow emulator --init`
But the [tools doc](https://developers.flow.com/tools/flow-cli/start-emulator#initialize) says that it is deprecated. | 1.0 | ReadMe: Deprecated emulator start command - The ReadMe contains this command: `flow emulator --init`
But the [tools doc](https://developers.flow.com/tools/flow-cli/start-emulator#initialize) says that it is deprecated. | non_process | readme deprecated emulator start command the readme contains this command flow emulator init but the says that it is deprecated | 0 |
27,727 | 30,279,156,328 | IssuesEvent | 2023-07-07 23:37:06 | tailscale/tailscale | https://api.github.com/repos/tailscale/tailscale | reopened | Bug: Android 12 exit node receiving self disco out packets | OS-android L1 Very few P2 Aggravating T5 Usability exit-node bug | ### What is the issue?
HS # 3891
When Pixel 6 Pro connects to an exit node, Tailscale shows a successful connection but network traffic fails. Occasionally network traffic will start flowing after a couple of minutes. If it doesn't connect after a few minutes, the user has to disconnect and reconnect Tailscale to establish a successful connection.
Exit node is macOS Monterey.
```
2022-07-26 13:08:12.763359402 +0000 UTC: 19.1M/198.3M tstun: [RATELIMIT] format("[unexpected] received self disco out packet over tstun; dropping") (118 dropped)
2022-07-26 13:08:12.763921414 +0000 UTC: 19.1M/198.3M tstun: [unexpected] received self disco out packet over tstun; dropping
2022-07-26 13:08:12.76452615 +0000 UTC: 19.1M/198.3M tstun: [unexpected] received self disco out packet over tstun; dropping
2022-07-26 13:08:12.765575426 +0000 UTC: 19.1M/198.3M tstun: [RATELIMIT] format("[unexpected] received self disco out packet over tstun; dropping")
```
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Android
### OS version
Android 12
### Tailscale version
1.28.0-taabca3a4c-gd900a87f4b4
### Bug report
BUG-9337943c466d7f2c92733f9a4804e1e459f4bceb2545e07dbd244cb652bca0bf-20220726130643Z-a50d6a9c089946fe | True | Bug: Android 12 exit node receiving self disco out packets - ### What is the issue?
HS # 3891
When Pixel 6 Pro connects to an exit node, Tailscale shows a successful connection but network traffic fails. Occasionally network traffic will start flowing after a couple of minutes. If it doesn't connect after a few minutes, the user has to disconnect and reconnect Tailscale to establish a successful connection.
Exit node is macOS Monterey.
```
2022-07-26 13:08:12.763359402 +0000 UTC: 19.1M/198.3M tstun: [RATELIMIT] format("[unexpected] received self disco out packet over tstun; dropping") (118 dropped)
2022-07-26 13:08:12.763921414 +0000 UTC: 19.1M/198.3M tstun: [unexpected] received self disco out packet over tstun; dropping
2022-07-26 13:08:12.76452615 +0000 UTC: 19.1M/198.3M tstun: [unexpected] received self disco out packet over tstun; dropping
2022-07-26 13:08:12.765575426 +0000 UTC: 19.1M/198.3M tstun: [RATELIMIT] format("[unexpected] received self disco out packet over tstun; dropping")
```
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Android
### OS version
Android 12
### Tailscale version
1.28.0-taabca3a4c-gd900a87f4b4
### Bug report
BUG-9337943c466d7f2c92733f9a4804e1e459f4bceb2545e07dbd244cb652bca0bf-20220726130643Z-a50d6a9c089946fe | non_process | bug android exit node receiving self disco out packets what is the issue hs when pixel pro connects to an exit node tailscale shows a successful connection but network traffic fails occasionally network traffic will start flowing after a couple of minutes if it doesn t connect after a few minutes the user has to disconnect and reconnect tailscale to establish a successful connection exit node is macos monterey utc tstun format received self disco out packet over tstun dropping dropped utc tstun received self disco out packet over tstun dropping utc tstun received self disco out packet over tstun dropping utc tstun format received self disco out packet over tstun dropping steps to reproduce no response are there any recent changes that introduced the issue no response os android os version android tailscale version bug report bug | 0 |
228,978 | 17,496,268,853 | IssuesEvent | 2021-08-10 00:58:50 | Yuri-sc/Salaodeestetica-TCC | https://api.github.com/repos/Yuri-sc/Salaodeestetica-TCC | closed | Criar um domínio/hospedagem para o projeto | documentation | Gerar uma hospedagem para o projeto, lembrando que o mesmo deverá respeitar minimamente o nome oficial do sistema.
| 1.0 | Criar um domínio/hospedagem para o projeto - Gerar uma hospedagem para o projeto, lembrando que o mesmo deverá respeitar minimamente o nome oficial do sistema.
| non_process | criar um domínio hospedagem para o projeto gerar uma hospedagem para o projeto lembrando que o mesmo deverá respeitar minimamente o nome oficial do sistema | 0 |
19,929 | 26,397,459,301 | IssuesEvent | 2023-01-12 20:52:10 | berkeley-dsep-infra/datahub | https://api.github.com/repos/berkeley-dsep-infra/datahub | opened | Semester End Clean Up Tasks! | process | # Summary
At the end of every semester, we need to perform the following housekeeping tasks. Collating them so that we can prioritize them after the end of every semester.
- [ ] Remove packages that did not get used (For Python packages - the Python popularity dashboard would serve as a valuable data point)
- [ ] Remove auto-scaler calendar events that were added during the semester
- [ ] Remove all the compute increase requests that got requested during the semester
- [ ] [Optional] Remove course admins for that specific semester
- [ ] Run the archival process for all hub home directories
- [ ] Reduce the number of nodes allocated for each node pool
# Important information
Spring 23 semester ends May 12th!
Any other activity I am missing?
| 1.0 | Semester End Clean Up Tasks! - # Summary
At the end of every semester, we need to perform the following housekeeping tasks. Collating them so that we can prioritize them after the end of every semester.
- [ ] Remove packages that did not get used (For Python packages - the Python popularity dashboard would serve as a valuable data point)
- [ ] Remove auto-scaler calendar events that were added during the semester
- [ ] Remove all the compute increase requests that got requested during the semester
- [ ] [Optional] Remove course admins for that specific semester
- [ ] Run the archival process for all hub home directories
- [ ] Reduce the number of nodes allocated for each node pool
# Important information
Spring 23 semester ends May 12th!
Any other activity I am missing?
| process | semester end clean up tasks summary at the end of every semester we need to perform the following housekeeping tasks collating them so that we can prioritize them after the end of every semester remove packages that did not get used for python packages the python popularity dashboard would serve as a valuable data point remove auto scaler calendar events that were added during the semester remove all the compute increase requests that got requested during the semester remove course admins for that specific semester run the archival process for all hub home directories reduce the number of nodes allocated for each node pool important information spring semester ends may any other activity i am missing | 1 |
1,787 | 2,571,642,136 | IssuesEvent | 2015-02-10 17:39:17 | mozilla/webmaker-app | https://api.github.com/repos/mozilla/webmaker-app | closed | Redesign Profile view | design in progress | Let's redesign this.
- Remove the links at the bottom, they are not needed. Also, they don't look like buttons.
- We can be smarter in showing the Developer or Admin link
- Sign Out should have a lot more presence

| 1.0 | Redesign Profile view - Let's redesign this.
- Remove the links at the bottom, they are not needed. Also, they don't look like buttons.
- We can be smarter in showing the Developer or Admin link
- Sign Out should have a lot more presence

| non_process | redesign profile view let s redesign this remove the links at the bottom they are not needed also they don t look like buttons we can be smarter in showing the developer or admin link sign out should have a lot more presence | 0 |
13,220 | 15,688,879,012 | IssuesEvent | 2021-03-25 15:07:04 | GoogleCloudPlatform/dotnet-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples | closed | Video: AnalyzeLabels test is failing. | api: videointelligence priority: p1 samples type: process | [This test](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/ff662994ce2224ce4838454fe5f5994f22e58a5d/video/api/Test/Tests.cs#L100) is failing. I've skipped it in #1001 but it should be fixed.
@SurferJeffAtGoogle Assigning to you since you wrote this sample and test. | 1.0 | Video: AnalyzeLabels test is failing. - [This test](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/ff662994ce2224ce4838454fe5f5994f22e58a5d/video/api/Test/Tests.cs#L100) is failing. I've skipped it in #1001 but it should be fixed.
@SurferJeffAtGoogle Assigning to you since you wrote this sample and test. | process | video analyzelabels test is failing is failing i ve skipped it in but it should be fixed surferjeffatgoogle assigning to you since you wrote this sample and test | 1 |
9,788 | 12,804,906,383 | IssuesEvent | 2020-07-03 06:11:10 | threefoldfoundation/tft-stellar | https://api.github.com/repos/threefoldfoundation/tft-stellar | closed | tfta to tft gradual conversion | priority_major process_wontfix type_story | specs: https://github.com/threefoldfoundation/tft-stellar/blob/master/specs/tfta_to_tft/tfta_to_tft.md
TODO:
- [ ] Further define specs
- [x] Check qr code compatibility: #173
- [ ] Implementation
- [ ] Test on testnet
- [ ] Production
- [ ] Wiki/marketing docs
| 1.0 | tfta to tft gradual conversion - specs: https://github.com/threefoldfoundation/tft-stellar/blob/master/specs/tfta_to_tft/tfta_to_tft.md
TODO:
- [ ] Further define specs
- [x] Check qr code compatibility: #173
- [ ] Implementation
- [ ] Test on testnet
- [ ] Production
- [ ] Wiki/marketing docs
| process | tfta to tft gradual conversion specs todo further define specs check qr code compatibility implementation test on testnet production wiki marketing docs | 1 |
1,757 | 4,462,161,227 | IssuesEvent | 2016-08-24 08:55:36 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | closed | We're not displaying a trial's secondary IDs | 4. Ready for Review bug Processors | For example, http://explorer.opentrials.net/trials/ef2accfa-f0b8-4c5a-a81f-7fd4f3b19f52 has multiple secondary ids (check the sources), but we're just displaying the primary ID. The UI for this is already done (https://github.com/opentrials/opentrials/issues/153), we just need to add the identifiers to the data (http://api.opentrials.net/v1/trials/ef2accfa-f0b8-4c5a-a81f-7fd4f3b19f52). | 1.0 | We're not displaying a trial's secondary IDs - For example, http://explorer.opentrials.net/trials/ef2accfa-f0b8-4c5a-a81f-7fd4f3b19f52 has multiple secondary ids (check the sources), but we're just displaying the primary ID. The UI for this is already done (https://github.com/opentrials/opentrials/issues/153), we just need to add the identifiers to the data (http://api.opentrials.net/v1/trials/ef2accfa-f0b8-4c5a-a81f-7fd4f3b19f52). | process | we re not displaying a trial s secondary ids for example has multiple secondary ids check the sources but we re just displaying the primary id the ui for this is already done we just need to add the identifiers to the data | 1 |
14,570 | 17,692,831,073 | IssuesEvent | 2021-08-24 12:13:17 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Models won't start when opened via browser or drag&drop | Feedback Processing Bug | ### What is the bug or the crash?
Models can't be run by opening via the browser or by drag&drop. The GUI of the model appears, but hitting the "run" button doesn't result in action. There's also no error message.
Opening a model in the graphic modeller and starting it from there on the other hand works fine.
See https://gis.stackexchange.com/questions/404482/qgis-3-16-4-graphical-modeler-not-running
### Steps to reproduce the issue
1. Drag & drop any model into the QGIS main window
2. Enter necessary input
3. Run the model --> it won't start.
4. Error log shows nothing.
### Versions
3.16.4
3.16.6
### Additional context
_No response_ | 1.0 | Models won't start when opened via browser or drag&drop - ### What is the bug or the crash?
Models can't be run by opening via the browser or by drag&drop. The GUI of the model appears, but hitting the "run" button doesn't result in action. There's also no error message.
Opening a model in the graphic modeller and starting it from there on the other hand works fine.
See https://gis.stackexchange.com/questions/404482/qgis-3-16-4-graphical-modeler-not-running
### Steps to reproduce the issue
1. Drag & drop any model into the QGIS main window
2. Enter necessary input
3. Run the model --> it won't start.
4. Error log shows nothing.
### Versions
3.16.4
3.16.6
### Additional context
_No response_ | process | models won t start when opened via browser or drag drop what is the bug or the crash models can t be run by opening via the browser or by drag drop the gui of the model appears but hitting the run button doesn t result in action there s also no error message opening a model in the graphic modeller and starting it from there on the other hand works fine see steps to reproduce the issue drag drop any model into the qgis main window enter necessary input run the model it won t start error log shows nothing versions additional context no response | 1 |
8,443 | 11,613,819,462 | IssuesEvent | 2020-02-26 11:26:31 | prisma/prisma-client-js | https://api.github.com/repos/prisma/prisma-client-js | opened | Some schemas return `PANIC: Unable to resolve a primary identifier for model MODELNAME` after introspection | bug/2-confirmed process/candidate | Schemas
- mariadb/nextcloud
- mysql_public/allsquare
- mysql_public/dotclear
- mysql_public/nextcloud
- mysql_public/piwigo
- postgresql_public/discourse
- postgresql_public/opendota
- postgresql_public/sonarqube
Reproduction
- `prisma2 introspect`
- `prisma2 generate`
- query models with `prisma.model.findMany({})` | 1.0 | Some schemas return `PANIC: Unable to resolve a primary identifier for model MODELNAME` after introspection - Schemas
- mariadb/nextcloud
- mysql_public/allsquare
- mysql_public/dotclear
- mysql_public/nextcloud
- mysql_public/piwigo
- postgresql_public/discourse
- postgresql_public/opendota
- postgresql_public/sonarqube
Reproduction
- `prisma2 introspect`
- `prisma2 generate`
- query models with `prisma.model.findMany({})` | process | some schemas return panic unable to resolve a primary identifier for model modelname after introspection schemas mariadb nextcloud mysql public allsquare mysql public dotclear mysql public nextcloud mysql public piwigo postgresql public discourse postgresql public opendota postgresql public sonarqube reproduction introspect generate query models with prisma model findmany | 1 |
676,869 | 23,140,812,814 | IssuesEvent | 2022-07-28 18:18:09 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | reopened | Route syntax highlighting and autocompletion | enhancement Priority:2 feature-minimal-actions area-web-frameworks Cost:XL | ASP.NET Core has a stable, well-defined syntax for routing. It supports a variety of features:
- Parameters - /product/{id}
- Parameter constraints - /product/{id:int}
- Parameter defaults - /{page=Home}
- Optional parameters - /files/{filename}.{ext?}
- Catch-all parameters - /blog/{*slug}
- Complex segments - /cube/{width:int}-{height:int}
- Token replacement - /api/[controller]/[action]
Understanding routing is essential to successfully using ASP.NET Core MVC, Web APIs, and Minimal APIs. While a basic route with a parameter is simple enough to use, few developers will have committed the complete range of routing features and syntax to memory. Additionally, there are rules and gotchas to using some of the more advanced features. E.g., a catch-all parameter must be the last parameter, or an optional parameter can’t have a default value, etc.
The big problem here is that the only way to figure out whether a route is correct or not is to run a web app, try to use the route, and see what happens. It’s not a good experience.
We can improve this with tooling. Route syntax highlighting, auto-complete, and code analysis can make route templates easier to understand and use.
## Motivation and goals
Routing is a highly used feature. We have the tools to make it better: Roslyn embedded language and Roslyn code analyzers. We’ve done this before for regexes and date format strings.
Regular expressions:

Date format strings:

The same improvements can be applied to routes: syntax highlighting, code analysis and autocomplete. These features would be available in APIs that define routes.
Controller route attributes (Route, HttpGet, HttpPost, etc):

Action route attributes (Route, HttpGet, HttpPost, etc):

Minimal API map methods (MapGet, MapPost, etc):

MVC conventional routing methods (MapControllerRoute, MapAreaRoute):

Razor @page directive:

## Potential improvements
- Route syntax highlighting
- Code analysis that validates the overall route syntax is valid (e.g. catch-all parameter isn't in the wrong location)
- Code analysis of route parameters against minimal API or MVC parameters (e.g. a minimal API expects a parameter that the route doesn't capture, or there is a mismatch between a parameter's route constraint and the .NET type)
- Autocomplete dropdown of built-in constraints
- Autocomplete dropdown of route parameter names (inspect the minimal API delegate or MVC action arguments to get a list of candidates) | 1.0 | Route syntax highlighting and autocompletion - ASP.NET Core has a stable, well-defined syntax for routing. It supports a variety of features:
- Parameters - /product/{id}
- Parameter constraints - /product/{id:int}
- Parameter defaults - /{page=Home}
- Optional parameters - /files/{filename}.{ext?}
- Catch-all parameters - /blog/{*slug}
- Complex segments - /cube/{width:int}-{height:int}
- Token replacement - /api/[controller]/[action]
Understanding routing is essential to successfully using ASP.NET Core MVC, Web APIs, and Minimal APIs. While a basic route with a parameter is simple enough to use, few developers will have committed the complete range of routing features and syntax to memory. Additionally, there are rules and gotchas to using some of the more advanced features. E.g., a catch-all parameter must be the last parameter, or an optional parameter can’t have a default value, etc.
The big problem here is that the only way to figure out whether a route is correct or not is to run a web app, try to use the route, and see what happens. It’s not a good experience.
We can improve this with tooling. Route syntax highlighting, auto-complete, and code analysis can make route templates easier to understand and use.
## Motivation and goals
Routing is a highly used feature. We have the tools to make it better: Roslyn embedded language and Roslyn code analyzers. We’ve done this before for regexes and date format strings.
Regular expressions:

Date format strings:

The same improvements can be applied to routes: syntax highlighting, code analysis and autocomplete. These features would be available in APIs that define routes.
Controller route attributes (Route, HttpGet, HttpPost, etc):

Action route attributes (Route, HttpGet, HttpPost, etc):

Minimal API map methods (MapGet, MapPost, etc):

MVC conventional routing methods (MapControllerRoute, MapAreaRoute):

Razor @page directive:

## Potential improvements
- Route syntax highlighting
- Code analysis that checks that the overall route syntax is valid (e.g. a catch-all parameter isn't in the wrong location)
- Code analysis of route parameters against minimal API or MVC parameters (e.g. a minimal API expects a parameter that the route doesn't capture, or there is a mismatch between a parameter's route constraint and the .NET type)
- Autocomplete dropdown of built-in constraints
- Autocomplete dropdown of route parameter names (inspect the minimal API delegate or MVC action arguments to get a list of candidates) | non_process | route syntax highlighting and autocompletion asp net core has a stable well defined syntax for routing it supports a variety of features parameters product id parameter constraints product id int parameter defaults page home optional parameters files filename ext catch all parameters blog slug complex segments cube width int height int token replacement api understanding routing is essential to successfully using asp net core mvc web apis and minimal apis while a basic route with a parameter is simple enough to use few developers will have dedicated the complete range of routing features and syntax to memory additionally there are rules and gotchas to using some of the more advanced features e g a catch all parameter must be the last parameter or an optional parameter can’t have a default value etc the big problem here is the only way to figure out whether a route is correct or not is to run a web app try to use the route and see what happens it’s not a good experience we can improve this with tooling route syntax highlighting auto complete and code analysis can make route templates easier to understand and use motivation and goals routing is a highly used feature we have the tools to make it better roslyn embedded language and roslyn code analyzers we’ve done this before for regexes and date format strings regular expressions date format strings the same improvements can be applied to routes syntax highlighting code analysis and autocomplete these features would be available in apis that define routes controller route attributes route httpget httppost etc action route attributes route httpget httppost etc minimal api map methods mapget mappost etc mvc conventional routing methods mapcontrollerroute maparearoute razor page directive potential improvements route syntax highlighting code analysis that validates the overall route syntax is valid e g catch all parameter isn t in the wrong location code analysis of route parameters against minimal api or mvc parameters e g a minimal api expects a parameter that the route doesn t capture or there is a mismatch between a parameter s route constraint and the net type autocomplete dropdown of built in constraints autocomplete dropdown of route parameter names inspect the minimal api delegate or mvc action arguments to get a list of candidates | 0 |
4,326 | 7,237,923,757 | IssuesEvent | 2018-02-13 12:54:22 | dzhw/zofar | https://api.github.com/repos/dzhw/zofar | opened | datatransfer calendar interface | category: technical.processes prio: 1 status: development type: backlog.task | transfer of data from/to calendar to/from other questions/sources. | 1.0 | datatransfer calendar interface - transfer of data from/to calendar to/from other questions/sources. | process | datatransfer calendar interface transfer of data from to calendar to from other questions sources | 1 |
180,548 | 14,786,455,458 | IssuesEvent | 2021-01-12 05:33:23 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | closed | Document process to retrieve brew registry access | documentation team/ecosystem | In order to test LSO on non-GA OCP versions users will need to acquire credentials to brew.registry.redhat.io and update their pull secret. We need to document this process. | 1.0 | Document process to retrieve brew registry access - In order to test LSO on non-GA OCP versions users will need to acquire credentials to brew.registry.redhat.io and update their pull secret. We need to document this process. | non_process | document process to retrieve brew registry access in order to test lso on non ga ocp versions users will need to acquire credentials to brew registry redhat io and update their pull secret we need to document this process | 0 |
14,837 | 9,540,955,714 | IssuesEvent | 2019-04-30 20:55:43 | mitchellm7/Stopifu | https://api.github.com/repos/mitchellm7/Stopifu | closed | Update display of stoplist/metric to show if stopword is disabled | usability | And/or add a disabled stopword section in output | True | Update display of stoplist/metric to show if stopword is disabled - And/or add a disabled stopword section in output | non_process | update display of stoplist metric to show if stopword is disabled and or add a disabled stopword section in output | 0 |
19,250 | 25,444,353,922 | IssuesEvent | 2022-11-24 03:40:30 | python/cpython | https://api.github.com/repos/python/cpython | closed | asyncio signal handler receives signals from child processes | type-bug expert-asyncio expert-multiprocessing | **Bug report**
Today I ran into a very weird behaviour in one of my projects that uses asyncio and multiprocessing.
When I interrupted the program with a Ctrl-C on the terminal, the signal handler set in asyncio with `loop.add_signal_handler` was triggered multiple times! Further inspection showed that if I manually send a `SIGINT` to one of the worker child processes, it triggered the main process signal handler.
I could trace the issue to the socket used by asyncio to receive the signals via `signal.set_wakeup_fd`, which seems to receive the signal bytes from the child processes as well.
Running `signal.set_wakeup_fd(-1)` at least once in the worker processes worked as a workaround for me but cannot be the final solution.
Also, it seems that other signal operations can clear the issue (for example, some automatically managed child processes did not trigger the bug, as they seem to ignore `SIGINT`).
In my project I used `concurrent.futures.ProcessPoolExecutor`, but I don't think that matters, as long as the signal handler (and wakeup fd) is set before the child is started.
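A minimal sketch of the workaround described above, assuming Python 3.7+ on Unix (the initializer name is illustrative, not from the original script):
```python
import asyncio
import signal
from concurrent.futures import ProcessPoolExecutor

def _clear_inherited_wakeup_fd():
    # Workers inherit the parent's wakeup fd (set up by add_signal_handler),
    # so signals delivered to a worker also get written into the parent's
    # self-pipe/socket. Resetting the fd in each worker detaches it.
    signal.set_wakeup_fd(-1)

async def main():
    loop = asyncio.get_running_loop()
    loop.add_signal_handler(signal.SIGINT, lambda: print("parent got SIGINT"))
    with ProcessPoolExecutor(initializer=_clear_inherited_wakeup_fd) as pool:
        await loop.run_in_executor(pool, sum, range(10))

if __name__ == "__main__":
    asyncio.run(main())
```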
**Your environment**
- CPython versions tested on:
Python 3.10.4
- Operating system and architecture:
Kali Linux rolling with kernel: Debian 5.16.18-1kali1 (2022-04-01) x86_64
**Bug output**
[asyncio_signal_bug.py.txt](https://github.com/python/cpython/files/9021468/asyncio_signal_bug.py.txt)
```
# python3 asyncio_signal_bug.py
394236 Signal Handler set
394237 Worker 1 started
394240 Worker 2 started
^C394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
^C394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
# python3 asyncio_signal_bug.py --set-in-children
394241 Signal Handler set
394242 Worker 1 started
394245 Worker 2 started
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
```
Notice that in the first block, for every ^C, there are three signals received, whereas in the second block (if `signal.set_wakeup_fd(-1)` was called in the workers) there is only one. | 1.0 | asyncio signal handler receives signals from child processes - **Bug report**
Today I ran into a very weird behaviour in one of my projects that uses asyncio and multiprocessing.
When I interrupted the program with a Ctrl-C on the terminal, the signal handler set in asyncio with `loop.add_signal_handler` was triggered multiple times! Further inspection showed that if I manually send a `SIGINT` to one of the worker child processes, it triggered the main process signal handler.
I could trace the issue to the socket used by asyncio to receive the signals via `signal.set_wakeup_fd`, which seems to receive the signal bytes from the child processes as well.
Running `signal.set_wakeup_fd(-1)` at least once in the worker processes worked as a workaround for me but cannot be the final solution.
Also, it seems that other signal operations can clear the issue (for example, some automatically managed child processes did not trigger the bug, as they seem to ignore `SIGINT`).
In my project I used `concurrent.futures.ProcessPoolExecutor`, but I don't think that matters, as long as the signal handler (and wakeup fd) is set before the child is started.
**Your environment**
- CPython versions tested on:
Python 3.10.4
- Operating system and architecture:
Kali Linux rolling with kernel: Debian 5.16.18-1kali1 (2022-04-01) x86_64
**Bug output**
[asyncio_signal_bug.py.txt](https://github.com/python/cpython/files/9021468/asyncio_signal_bug.py.txt)
```
# python3 asyncio_signal_bug.py
394236 Signal Handler set
394237 Worker 1 started
394240 Worker 2 started
^C394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
^C394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
394236 received signal: Signals.SIGINT
# python3 asyncio_signal_bug.py --set-in-children
394241 Signal Handler set
394242 Worker 1 started
394245 Worker 2 started
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
^C394241 received signal: Signals.SIGINT
```
Notice, that in the first block, for every ^C, there are three signals received, whereas in the second block (if `signal.set_wakeup_fd(-1)` was called in the workers) there is only one. | process | asyncio signal handler receives signals from child processes bug report today i ran into a very weird behaviour in one of my projects that uses asyncio and multiprocessing when i interrupted the program with a ctrl c on the terminal the signal handler set in asyncio with loop add signal handler was triggered multiple times further inspection showed that if i manually send a sigint to one of the worker child processes it triggered the main process signal handler i could trace the issue to the socket used by asyncio to receive the signals via signal set wakeup fd which seems to receive the signal bytes also from the child processes running signal set wakeup fd at least once in the worker processes worked as a workaround for me but cannot be the final solution also it seems that other signal operations also can clear the issue for example some automatically manager child processes did not trigger the bug as they seem to ignore sigint in my project i used concurrent futures processpoolexecutor but i don t think that matters as long as the signal handler and wakeup fd is set before the child is started your environment cpython versions tested on python operating system and architecture kali linux rolling with kernel debian bug output asyncio signal bug py signal handler set worker started worker started received signal signals sigint received signal signals sigint received signal signals sigint received signal signals sigint received signal signals sigint received signal signals sigint asyncio signal bug py set in children signal handler set worker started worker started received signal signals sigint received signal signals sigint received signal signals sigint received signal signals sigint received signal signals sigint received signal signals sigint notice that in the first block for every c there are three signals received whereas in the second block if signal set wakeup fd was called in the workers there is only one | 1 |
10,374 | 13,191,143,349 | IssuesEvent | 2020-08-13 11:33:21 | zammad/zammad | https://api.github.com/repos/zammad/zammad | closed | S/MIME signing fails because of message encoding | bug mail processing prioritised by payment | ### Infos:
* Used Zammad version: 3.4
* Installation method (source, package, ..): all
* Operating system: all
* Database + version: all
* Elasticsearch version: all
* Browser + version: all
* Ticket-ID: 1078053, 1078208
### Expected behavior:
S/MIME should be able to sign and verify messages like this:
```
<div>\n <div>\n <div>\n \n <div>\n <div><div>\n <div>\n <div>\n \n <div>\n <div>\n<div>xxx</div>\n<div><br></div>\n<div>\n<div><div><ul><li><p>im \nFeld „Von“ überein. <br></p></li></ul></div></div>\n\n<span></span><div><div>\n</div></div>\n\n</div>\n<div>xx</div>\n</div>\n</div>\n</div>\n</div>\n</div></div>\n</div>\n</div>\n</div>\n</div>\n\n
```
### Actual behavior:
Because of problems with the encoding of new lines, the signed email cannot be verified.
GitLab also has trouble with this problem. Here are some interesting links:
https://github.com/mikel/mail/issues/1190#issuecomment-578824531
https://gitlab.com/gitlab-org/gitlab/-/merge_requests/24153
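As the links above discuss, the failure reduces to a byte-level mismatch: S/MIME signs a canonical form of the MIME part (CRLF line endings per RFC 5751), so any layer that rewrites `\r\n` to `\n` after signing invalidates the signature. A small Python illustration of that failure mode (not Zammad code):
```python
def canonicalize_crlf(body: str) -> bytes:
    # Normalize any mix of "\r\n"/"\n" to bare "\n", then emit CRLF, so the
    # signer and the verifier hash exactly the same bytes.
    return body.replace("\r\n", "\n").replace("\n", "\r\n").encode()

signed = canonicalize_crlf("<div>xxx</div>\n<div><br></div>\n")
received = "<div>xxx</div>\n<div><br></div>\n".encode()  # endings rewritten in transit
assert signed != received  # different bytes -> signature verification fails
```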
### Steps to reproduce the behavior:
* use the message
* send signed email
* verify email content
| 1.0 | S/MIME signing fails because of message encoding - ### Infos:
* Used Zammad version: 3.4
* Installation method (source, package, ..): all
* Operating system: all
* Database + version: all
* Elasticsearch version: all
* Browser + version: all
* Ticket-ID: 1078053, 1078208
### Expected behavior:
S/MIME should be able to sign and verify messages like this:
```
<div>\n <div>\n <div>\n \n <div>\n <div><div>\n <div>\n <div>\n \n <div>\n <div>\n<div>xxx</div>\n<div><br></div>\n<div>\n<div><div><ul><li><p>im \nFeld „Von“ überein. <br></p></li></ul></div></div>\n\n<span></span><div><div>\n</div></div>\n\n</div>\n<div>xx</div>\n</div>\n</div>\n</div>\n</div>\n</div></div>\n</div>\n</div>\n</div>\n</div>\n\n
```
### Actual behavior:
Because of problems with the encoding of new lines, the signed email cannot be verified.
GitLab also has trouble with this problem. Here are some interesting links:
https://github.com/mikel/mail/issues/1190#issuecomment-578824531
https://gitlab.com/gitlab-org/gitlab/-/merge_requests/24153
### Steps to reproduce the behavior:
* use the message
* send signed email
* verify email content
| process | s mime signing fails because of message encoding infos used zammad version installation method source package all operating system all database version all elasticsearch version all browser version all ticket id expected behavior s mime should be able to sign and verify messages like this n n n n n n n n n n n xxx n n n im nfeld „von“ überein n n n n n n xx n n n n n n n n n n n actual behavior because of problems with the encoding of new lines it will not be able to verify the signed email gitlab also has trouble with this problem here some interesting links steps to reproduce the behavior use the message send signed email verify email content | 1 |
10,141 | 13,044,162,486 | IssuesEvent | 2020-07-29 03:47:32 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `JsonValidJsonSig` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `JsonValidJsonSig` from TiDB to coprocessor.
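For reference, a rough Python sketch of the semantics the port must preserve, inferred from MySQL's `JSON_VALID` (the linked TiDB implementation is authoritative; for the JSON-typed signature, any non-NULL argument is already a parsed JSON value):
```python
def json_valid_json_sig(arg):
    # NULL propagates; any other JSON-typed value is valid by construction.
    return None if arg is None else 1

assert json_valid_json_sig(None) is None
assert json_valid_json_sig({"a": 1}) == 1
```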
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| 2.0 | UCP: Migrate scalar function `JsonValidJsonSig` from TiDB -
## Description
Port the scalar function `JsonValidJsonSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| process | ucp migrate scalar function jsonvalidjsonsig from tidb description port the scalar function jsonvalidjsonsig from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
9,530 | 7,742,415,216 | IssuesEvent | 2018-05-29 09:27:26 | unitystation/unitystation | https://api.github.com/repos/unitystation/unitystation | opened | Easy to use Network Tab Element interactions | Security feature | # Preliminary information
## Category
Feature Request
## Current situation
It's hard to write a secure networked tab. It involves writing a lot of code in many different places, and is difficult to navigate and understand, especially for new devs.
## Criteria:
Develop a layer that would take care of security and remove the burden of writing lower-level network code from devs that want networked Tabs/Windows just to work like they do in single player games:
- [ ] Window elements should be easily bindable (single- or bidirectional) to server methods.
Server-to-client updates should be implicit.
This should also have dynamic element support (like Vend buttons in item list for vendors)
- [ ] Built-in implicit serverside security checks
- [ ] Multiple instances of the same tab type for one player should be possible
- [ ] Server must know if tab is opened at the moment and automatically close it for player if he isn't in range
| True | Easy to use Network Tab Element interactions - # Preliminary information
## Category
Feature Request
## Current situation
It's hard to write a secure networked tab. It involves writing a lot of code in many different places, and is difficult to navigate and understand, especially for new devs.
## Criteria:
Develop a layer that would take care of security and remove the burden of writing lower-level network code from devs that want networked Tabs/Windows just to work like they do in single player games:
- [ ] Window elements should be easily bindable (single- or bidirectional) to server methods.
Server-to-client updates should be implicit.
This should also have dynamic element support (like Vend buttons in item list for vendors)
- [ ] Built-in implicit serverside security checks
- [ ] Multiple instances of the same tab type for one player should be possible
- [ ] Server must know if tab is opened at the moment and automatically close it for player if he isn't in range
| non_process | easy to use network tab element interactions preliminary information category feature request current situation it s hard to write a secure networked tab it involves writing a lot of code in many different places and is difficult to navigate and understand especially for new devs criteria develop a layer that would take care of security and remove the burden of writing lower level network code from devs that want networked tabs windows just to work like they do in single player games window elements should be easily bindable single or bidirectional to server methods server to client updates should be implicit this should also have dynamic element support like vend buttons in item list for vendors built in implicit serverside security checks multiple instances of the same tab type for one player should be possible server must know if tab is opened at the moment and automatically close it for player if he isn t in range | 0 |
18,347 | 24,468,326,986 | IssuesEvent | 2022-10-07 17:05:49 | googleapis/release-please | https://api.github.com/repos/googleapis/release-please | closed | Manually tag Firestore Go v1.7.0 | type: process | The Firestore Go client library v1.7.0 is stuck and cannot be released. It seems like this release will need to be manually tagged.
PRs:
+ https://github.com/googleapis/google-cloud-go/pull/5339
+ https://github.com/googleapis/google-cloud-go/pull/5493
| 1.0 | Manually tag Firestore Go v1.7.0 - The Firestore Go client library v1.7.0 is stuck and cannot be released. It seems like this release will need to be manually tagged.
PRs:
+ https://github.com/googleapis/google-cloud-go/pull/5339
+ https://github.com/googleapis/google-cloud-go/pull/5493
| process | manually tag firestore go the firestore go client library is stuck and cannot be released it seems like this release will need to be manually tagged prs | 1 |
68,020 | 9,114,962,259 | IssuesEvent | 2019-02-22 02:35:37 | mostjs/core | https://api.github.com/repos/mostjs/core | closed | Help finishing an example/POC of most + redis + docker? | :book: Type: Documentation | One of my customers wanted to process pub-sub events from redis. I suggested using most.js and started creating a POC for them, but stopped early on because they decided to go a different direction. I decided to try to finish the POC on my own, anyways.
The code is [in this gist](https://gist.github.com/unscriptable/5fc29408bedfdceb545c69e22f1f0bc0). It's a very rough example, atm, but I could make it more interesting. The POC includes a Dockerfile and a docker-compose.yml file, but can also be run without docker.
Is this of interest? If so, what would you want to see in such an example and where should I put it? | 1.0 | Help finishing an example/POC of most + redis + docker? - One of my customers wanted to process pub-sub events from redis. I suggested using most.js and started creating a POC for them, but stopped early on because they decided to go a different direction. I decided to try to finish the POC on my own, anyways.
The code is [in this gist](https://gist.github.com/unscriptable/5fc29408bedfdceb545c69e22f1f0bc0). It's a very rough example, atm, but I could make it more interesting. The POC includes a Dockerfile and a docker-compose.yml file, but can also be run without docker.
Is this of interest? If so, what would you want to see in such an example and where should I put it? | non_process | help finishing an example poc of most redis docker one of my customers wanted to process pub sub events from redis i suggested using most js and started creating a poc for them but stopped early on because they decided to go a different direction i decided to try to finish the poc on my own anyways the code is it s a very rough example atm but i could make it more interesting the poc includes a dockerfile and a docker compose yml file but can also be run without docker is this of interest if so what would you want to see in such an example and where should i put it | 0 |
4,808 | 7,699,331,286 | IssuesEvent | 2018-05-19 11:11:14 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | reopened | amazon-import doesn't recognize s3_bucket_name when using env variables | need-more-info post-processor/amazon-import waiting-reply | Hi,
I'm trying to use the post-processor amazon-import post-processor, but for some reason packer is not recognizing the s3_bucket_name if I try to use a environment variable instead a hardcoded string.
- Packer version from 1.2.1 but tested with 1.2.3 too
- Darwin and Linux
- Debug log output from `PACKER_LOG=1 packer build template.json`.
This is happening during the build, but is not passing neither by validation
- The _simplest example template and scripts_ needed to reproduce the bug.
All artefacts to reproduce can be found here https://gist.github.com/flamarion/58ca7a453d3a4190eceec68af355beec
| 1.0 | amazon-import doesn't recognize s3_bucket_name when using env variables - Hi,
I'm trying to use the post-processor amazon-import post-processor, but for some reason packer is not recognizing the s3_bucket_name if I try to use a environment variable instead a hardcoded string.
- Packer version from 1.2.1 but tested with 1.2.3 too
- Darwin and Linux
- Debug log output from `PACKER_LOG=1 packer build template.json`.
This is happening during the build, but is not passing neither by validation
- The _simplest example template and scripts_ needed to reproduce the bug.
All artefacts to reproduce can be found here https://gist.github.com/flamarion/58ca7a453d3a4190eceec68af355beec
| process | amazon import doesn t recognize bucket name when using env variables hi i m trying to use the post processor amazon import post processor but for some reason packer is not recognizing the bucket name if i try to use a environment variable instead a hardcoded string packer version from but tested with too darwin and linux debug log output from packer log packer build template json this is happening during the build but is not passing neither by validation the simplest example template and scripts needed to reproduce the bug all artefacts to reproduce can be found here | 1 |
10,112 | 3,086,844,129 | IssuesEvent | 2015-08-25 07:38:40 | sass/libsass | https://api.github.com/repos/sass/libsass | opened | Should throw an error when passing a block into a mixin without @content | Bug - Confirmed Dev - Needs Test | ```scss
@mixin foo() {
foo: &;
}
foo {
@include foo { bar: baz }
}
```
Ruby Sass
```
Error: Mixin "foo" does not accept a content block.
on line 34 of test.scss, in `foo'
from line 34 of test.scss
```
LibSass
```
foo {
foo: foo; }
``` | 1.0 | Should throw an error when passing a block into a mixin without @content - ```scss
@mixin foo() {
foo: &;
}
foo {
@include foo { bar: baz }
}
```
Ruby Sass
```
Error: Mixin "foo" does not accept a content block.
on line 34 of test.scss, in `foo'
from line 34 of test.scss
```
LibSass
```
foo {
foo: foo; }
``` | non_process | should throw an error when passing a block into a mixin without content scss mixin foo foo foo include foo bar baz ruby sass error mixin foo does not accept a content block on line of test scss in foo from line of test scss libsass foo foo foo | 0 |
428,625 | 30,003,127,306 | IssuesEvent | 2023-06-26 10:37:10 | keptn/keptn.github.io | https://api.github.com/repos/keptn/keptn.github.io | closed | Provide search for Keptn docs | idea documentation | Feedback from the community: there is a search box needed for the docs.
Right now there is a lot of "domain knowledge" needed to find the things you are looking for.
A search box would help a lot. | 1.0 | Provide search for Keptn docs - Feedback from the community: there is a search box needed for the docs.
Right now there is a lot of "domain knowledge" needed to find the things you are looking for.
A search box would help a lot. | non_process | provide search for keptn docs feedback from the community there is a search box needed for the docs right now there is a lot of domain knowledge needed to find the things you are looking for a search box would help a lot | 0 |
388,325 | 11,486,502,592 | IssuesEvent | 2020-02-11 10:04:48 | jdi-testing/jdi-light | https://api.github.com/repos/jdi-testing/jdi-light | closed | Integrate with Applitools | complexity:high language:java priority:middle+ | https://applitools.com/tutorials/selenium-java.html#quick-start-%F0%9F%9A%80
Add JDI Light package with Applitools functionality on each action
Add a property to switch Applitools Eyes on and off | 1.0 | Integrate with Applitools - https://applitools.com/tutorials/selenium-java.html#quick-start-%F0%9F%9A%80
Add JDI Light package with Applitools functionality on each action
Add a property to switch Applitools Eyes on and off | non_process | integrate with applitools add jdi light package with aplitools functionality on each action add property to switch on and off applitools eyes | 0
14,260 | 17,194,026,274 | IssuesEvent | 2021-07-16 14:49:44 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [FEATURE][processing] New standalone console tool for running processing algorithms | 3.14 Automatic new feature Processing | Original commit: https://github.com/qgis/QGIS/commit/019035b1c18985bd16e24770b30309795739086d by nyalldawson
This new qgis_transform tool allows users to run processing algorithms
(both built-in, and those provided by plugins) directly from the console.
Running:
- "qgis_transform list" will output a complete list of all available
algorithms, grouped by provider.
- "qgis_transform plugins" lists available and activated plugins which
advertise the hasProcessingProvider metadata option (only these plugins
are loaded by the tool)
- "qgis_transform help algid" outputs the help and input descriptions
for the specified algorithm, e.g. "qgis_transform help native:centroids"
"qgis_transform run": runs an algorithm. Parameters are specified by a
"--param=value" syntax. E.g.
qgis_transform run native:centroids --INPUT="my_shapefile.shp" --OUTPUT="centroids.kml"
or
qgis_transform run native:buffer --INPUT=/home/me/my.shp --DISTANCE=20 --OUTPUT=/home/me/buffered.shp
While running an algorithm a text-based feedback bar is shown, and the
operation can be cancelled via CTRL+C
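For scripting, the documented CLI composes naturally with a process runner; a small illustrative Python wrapper (not part of QGIS) over the examples above:
```python
import subprocess

def run_alg(alg_id, **params):
    # Build "--PARAM=value" arguments and fail loudly on a non-zero exit.
    args = ["qgis_transform", "run", alg_id]
    args += [f"--{name}={value}" for name, value in params.items()]
    subprocess.run(args, check=True)

# Mirrors the buffer example above.
run_alg("native:buffer", INPUT="/home/me/my.shp", DISTANCE=20,
        OUTPUT="/home/me/buffered.shp")
```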
Sponsored by the Swedish User Group | 1.0 | [FEATURE][processing] New standalone console tool for running processing algorithms - Original commit: https://github.com/qgis/QGIS/commit/019035b1c18985bd16e24770b30309795739086d by nyalldawson
This new qgis_transform tool allows users to run processing algorithms
(both built-in, and those provided by plugins) directly from the console.
Running:
- "qgis_transform list" will output a complete list of all available
algorithms, grouped by provider.
- "qgis_transform plugins" lists available and activated plugins which
advertise the hasProcessingProvider metadata option (only these plugins
are loaded by the tool)
- "qgis_transform help algid" outputs the help and input descriptions
for the specified algorithm, e.g. "qgis_transform help native:centroids"
"qgis_transform run": runs an algorithm. Parameters are specified by a
"--param=value" syntax. E.g.
qgis_transform run native:centroids --INPUT="my_shapefile.shp" --OUTPUT="centroids.kml"
or
qgis_transform run native:buffer --INPUT=/home/me/my.shp --DISTANCE=20 --OUTPUT=/home/me/buffered.shp
While running an algorithm a text-based feedback bar is shown, and the
operation can be cancelled via CTRL+C
Sponsored by the Swedish User Group | process | new standalone console tool for running processing algorithms original commit by nyalldawson this new qgis transform tool allows users to run processing algorithms both built in and those provided by plugins directly from the console running qgis transform list will output a complete list of all available algorithms grouped by provider qgis transform plugins lists available and activated plugins which advertise the hasprocessingprovider metadata option only these plugins are loaded by the tool qgis transform help algid outputs the help and input descriptions for the specified algorithm e g qgis transform help native centroids qgis transform run runs an algorithm parameters are specified by a param value syntax e g qgis transform run native centroids input my shapefile shp output centroids kml or qgis transform run native buffer input home me my shp distance output home me buffered shp while running an algorithm a text based feedback bar is shown and the operation can be cancelled via ctrl c sponsored by the swedish user group | 1 |
12,214 | 14,742,953,915 | IssuesEvent | 2021-01-07 13:10:24 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Winnipeg - Unable to upload | anc-process anp-1 ant-bug has attachment | In GitLab by @kdjstudios on Jun 27, 2019, 09:04
**Submitted by:** "Elizabeth Fed" <elizabeth.fed@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-27-39518
**Server:** Internal
**Client/Site:** Winnipeg
**Account:** NA
**Issue:**
I am not able to upload the billing export report into SAB Billing as receiving the message as shown below.
 | 1.0 | Winnipeg - Unable to upload - In GitLab by @kdjstudios on Jun 27, 2019, 09:04
**Submitted by:** "Elizabeth Fed" <elizabeth.fed@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-27-39518
**Server:** Internal
**Client/Site:** Winnipeg
**Account:** NA
**Issue:**
I am not able to upload the billing export report into SAB Billing as receiving the message as shown below.
 | process | winnipeg unable to upload in gitlab by kdjstudios on jun submitted by elizabeth fed helpdesk server internal client site winnipeg account na issue i am not able to upload the billing export report into sab billing as receiving the message as shown below uploads image png | 1 |
826,700 | 31,709,873,789 | IssuesEvent | 2023-09-09 06:00:47 | SparkDevNetwork/Rock | https://api.github.com/repos/SparkDevNetwork/Rock | closed | Workflows: "Add Action" expands all lists of action attributes | Type: Bug Status: Confirmed Topic: UI Priority: Low Topic: Workflows x-bugapalooza: Low | v5.1 (unknown if new bug or not)
I have a workflow with a number of actions. Each action has at least one action attribute. When I click the "Add Action" button, once the orange "working" bubble goes away, all of the lists of action attributes are expanded. This is not visible if the action is collapsed, but is visible for any expanded activities (for instance, if I'm trying to match actions from one activity to another) and the attribute lists do show as expanded if I later expand one of the collapsed activities.
It's not a big deal but is a bit of a pain if I'm trying to keep things compact while working on the workflow. If I collapse something it should stay collapsed :wink:
**[Update: Steps to reproduce:]**
1. Start a new workflow
1. Add a single Activity Attribute to the default `Start` activity.
1. Collapse the `Attributes` pane on the `Start` activity.
1. Click `Add Action` in the `Start` activity
1. Observe that above the new first action, the Attributes pane has expanded again
| 1.0 | Workflows: "Add Action" expands all lists of action attributes - v5.1 (unknown if new bug or not)
I have a workflow with a number of actions. Each action has at least one action attribute. When I click the "Add Action" button, once the orange "working" bubble goes away, all of the lists of action attributes are expanded. This is not visible if the action is collapsed, but is visible for any expanded activities (for instance, if I'm trying to match actions from one activity to another) and the attribute lists do show as expanded if I later expand one of the collapsed activities.
It's not a big deal but is a bit of a pain if I'm trying to keep things compact while working on the workflow. If I collapse something it should stay collapsed :wink:
**[Update: Steps to reproduce:]**
1. Start a new workflow
1. Add a single Activity Attribute to the default `Start` activity.
1. Collapse the `Attributes` pane on the `Start` activity.
1. Click `Add Action` in the `Start` activity
1. Observe that above the new first action, the Attributes pane has expanded again
| non_process | workflows add action expands all lists of action attributes unknown if new bug or not i have a workflow with a number of actions each action has at least one action attribute when i click the add action button once the orange working bubble goes away all of the lists of action attributes are expanded this is not visible if the action is collapsed but is visible for any expanded activities for instance if i m trying to match actions from one activity to another and the attribute lists do show as expanded if i later expand one of the collapsed activities it s not a big deal but is a bit of a pain if i m trying to keep things compact while working on the workflow if i collapse something it should stay collapsed wink start a new workflow add a single activity attribute to the default start activity collapse the attributes pane on the start activity click add action in the start activity observe that above the new first action the attributes pane has expanded again | 0 |
6,730 | 9,842,656,184 | IssuesEvent | 2019-06-18 09:45:11 | EthVM/EthVM | https://api.github.com/repos/EthVM/EthVM | closed | You might need to increase max_locks_per_transaction | bug project:processing | * **I'm submitting a ...**
- [ ] feature request
- [x] bug report
* **Bug Report**
```
[2019-06-12 16:59:28,760] DEBUG Next = 5627096..5627138 (com.ethvm.kafka.connect.sources.web3.sources.ParityFullBlockSource)
[2019-06-12 16:59:28,996] WARN Write of 500 records failed, remainingRetries=1 (io.confluent.connect.jdbc.sink.JdbcSinkTask)
java.sql.BatchUpdateException: Batch entry 223 INSERT INTO "transaction_trace" ("block_hash","transaction_hash","root_error","timestamp","trace_count","traces") VALUES ('0x68a360c0fc5c04458430ef1ea8b556815d4c576111a321d9feae98728c16b9b7','0x4405cca01999977cc03ec5c8fd40b269ec0658be06ba6da6d794cbfcf37475e9',NULL,'2018-07-16 00:37:34+00'::timestamp,1,'[
{
"action": {
"TraceRewardActionRecord": null,
"TraceCallActionRecord": {
"callType": "call",
"from": "0x687422eea2cb73b5d3e242ba5456b782919afc85",
"to": "0x0d19dcfa70ed06f6dac909abeee33ccd130d9a62",
"gas": 293150,
"input": "",
"value": 1000000000000000000
},
"TraceCreateActionRecord": null,
"TraceDestroyActionRecord": null
},
"error": null,
"result": {
"address": null,
"code": null,
"gasUsed": 0,
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call",
"blockHash": "0x68a360c0fc5c04458430ef1ea8b556815d4c576111a321d9feae98728c16b9b7",
"blockNumber": 3645736,
"timestamp": null,
"transactionHash": "0x4405cca01999977cc03ec5c8fd40b269ec0658be06ba6da6d794cbfcf37475e9",
"transactionPosition": 12
}
]') ON CONFLICT ("block_hash","transaction_hash") DO UPDATE SET "root_error"=EXCLUDED."root_error","timestamp"=EXCLUDED."timestamp","trace_count"=EXCLUDED."trace_count","traces"=EXCLUDED."traces" was aborted: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction. Call getNextException to see other errors in the batch.
at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:145)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2156)
at org.postgresql.core.v3.QueryExecutorImpl.flushIfDeadlockRisk(QueryExecutorImpl.java:1265)
at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1290)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:446)
at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:793)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1659)
at io.confluent.connect.jdbc.sink.BufferedRecords.flush(BufferedRecords.java:143)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:72)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
... 19 more
``` | 1.0 | You might need to increase max_locks_per_transaction - * **I'm submitting a ...**
- [ ] feature request
- [x] bug report
* **Bug Report**
```
[2019-06-12 16:59:28,760] DEBUG Next = 5627096..5627138 (com.ethvm.kafka.connect.sources.web3.sources.ParityFullBlockSource)
[2019-06-12 16:59:28,996] WARN Write of 500 records failed, remainingRetries=1 (io.confluent.connect.jdbc.sink.JdbcSinkTask)
java.sql.BatchUpdateException: Batch entry 223 INSERT INTO "transaction_trace" ("block_hash","transaction_hash","root_error","timestamp","trace_count","traces") VALUES ('0x68a360c0fc5c04458430ef1ea8b556815d4c576111a321d9feae98728c16b9b7','0x4405cca01999977cc03ec5c8fd40b269ec0658be06ba6da6d794cbfcf37475e9',NULL,'2018-07-16 00:37:34+00'::timestamp,1,'[
{
"action": {
"TraceRewardActionRecord": null,
"TraceCallActionRecord": {
"callType": "call",
"from": "0x687422eea2cb73b5d3e242ba5456b782919afc85",
"to": "0x0d19dcfa70ed06f6dac909abeee33ccd130d9a62",
"gas": 293150,
"input": "",
"value": 1000000000000000000
},
"TraceCreateActionRecord": null,
"TraceDestroyActionRecord": null
},
"error": null,
"result": {
"address": null,
"code": null,
"gasUsed": 0,
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call",
"blockHash": "0x68a360c0fc5c04458430ef1ea8b556815d4c576111a321d9feae98728c16b9b7",
"blockNumber": 3645736,
"timestamp": null,
"transactionHash": "0x4405cca01999977cc03ec5c8fd40b269ec0658be06ba6da6d794cbfcf37475e9",
"transactionPosition": 12
}
]') ON CONFLICT ("block_hash","transaction_hash") DO UPDATE SET "root_error"=EXCLUDED."root_error","timestamp"=EXCLUDED."timestamp","trace_count"=EXCLUDED."trace_count","traces"=EXCLUDED."traces" was aborted: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction. Call getNextException to see other errors in the batch.
at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:145)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2156)
at org.postgresql.core.v3.QueryExecutorImpl.flushIfDeadlockRisk(QueryExecutorImpl.java:1265)
at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1290)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:446)
at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:793)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1659)
at io.confluent.connect.jdbc.sink.BufferedRecords.flush(BufferedRecords.java:143)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:72)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
... 19 more
``` | process | you might need to increase max locks per transaction i m submitting a feature request bug report bug report debug next com ethvm kafka connect sources sources parityfullblocksource warn write of records failed remainingretries io confluent connect jdbc sink jdbcsinktask java sql batchupdateexception batch entry insert into transaction trace block hash transaction hash root error timestamp trace count traces values null timestamp action tracerewardactionrecord null tracecallactionrecord calltype call from to gas input value tracecreateactionrecord null tracedestroyactionrecord null error null result address null code null gasused output subtraces traceaddress type call blockhash blocknumber timestamp null transactionhash transactionposition on conflict block hash transaction hash do update set root error excluded root error timestamp excluded timestamp trace count excluded trace count traces excluded traces was aborted error out of shared memory hint you might need to increase max locks per transaction call getnextexception to see other errors in the batch at org postgresql jdbc batchresulthandler handleerror batchresulthandler java at org postgresql core queryexecutorimpl processresults queryexecutorimpl java at org postgresql core queryexecutorimpl flushifdeadlockrisk queryexecutorimpl java at org postgresql core queryexecutorimpl sendquery queryexecutorimpl java at org postgresql core queryexecutorimpl execute queryexecutorimpl java at org postgresql jdbc pgstatement executebatch pgstatement java at org postgresql jdbc pgpreparedstatement executebatch pgpreparedstatement java at io confluent connect jdbc sink bufferedrecords flush bufferedrecords java at io confluent connect jdbc sink jdbcdbwriter write jdbcdbwriter java at io confluent connect jdbc sink jdbcsinktask put jdbcsinktask java at org apache kafka connect runtime workersinktask delivermessages workersinktask java at org apache kafka connect runtime workersinktask poll workersinktask java at org apache kafka connect runtime workersinktask iteration workersinktask java at org apache kafka connect runtime workersinktask execute workersinktask java at org apache kafka connect runtime workertask dorun workertask java at org apache kafka connect runtime workertask run workertask java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org postgresql util psqlexception error out of shared memory hint you might need to increase max locks per transaction at org postgresql core queryexecutorimpl receiveerrorresponse queryexecutorimpl java at org postgresql core queryexecutorimpl processresults queryexecutorimpl java more | 1 |
7,104 | 10,260,825,877 | IssuesEvent | 2019-08-22 08:24:24 | didi/mpx | https://api.github.com/repos/didi/mpx | closed | When using TS, type inference says mpx.getStorageSync returns a Promise | processing | ```
mpx.getStorageSync('xxx') // (property) getStorageSync: (key: string) => Promise<any>
wx.getStorageSync('xxx') // function wx.getStorageSync(key: string): any
```
Shouldn't this stay consistent with WeChat's API? Having the type inferred as a Promise causes some confusion. | 1.0 | When using TS, type inference says mpx.getStorageSync returns a Promise - ```
mpx.getStorageSync('xxx') // (property) getStorageSync: (key: string) => Promise<any>
wx.getStorageSync('xxx') // function wx.getStorageSync(key: string): any
```
Shouldn't this stay consistent with WeChat's API? Having the type inferred as a Promise causes some confusion. | process | when using ts type inference says mpx getstoragesync returns a promise mpx getstoragesync xxx property getstoragesync key string promise wx getstoragesync xxx function wx getstoragesync key string any shouldn t this stay consistent with wechat s api having the type inferred as a promise causes some confusion | 1
19,871 | 26,287,243,468 | IssuesEvent | 2023-01-08 00:37:19 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [QUALITY] [SALVADOR] [REMOTE] [ALSO FOR PWD] Quality Analyst at [SOLUTIS] | SALVADOR ISO 9001 Norma ISO 27001 REMOTO PROCESSOS QUALIDADE GOVERNANÇA DE TI QUALIDADE DE SOFTWARE HELP WANTED VAGA PARA PCD TAMBÉM Stale | <!--
==================================================
PLEASE ONLY POST IF THE JOB IS FOR SALVADOR AND NEIGHBORING CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Job description
- We are looking for a professional to work in the Quality and Processes area who enjoys working in a collaborative, solution-oriented environment.
RESPONSIBILITIES AND DUTIES
- This professional will work on the following activities:
- Assisting the manager with the implementation of ITIL / ISO 20000 / ISO 27000 / ISO 9001:2015 processes;
- Helping prepare action plans to eliminate non-conformities; reviewing and promoting improvements;
- Helping prepare training materials;
- Actively participating in internal and external audits;
among other activities.
## Location
- Salvador and Remote
## Benefits
- Details available directly from the recruiter responsible for the position
## Requirements
**Mandatory:**
- Prior experience with corporate processes is required - design, definition, implementation, and internalization;
- Prior experience with IT and administrative processes related to ISO standards (ISO-9001, ISO-20000, ISO-27001, etc.) is desirable;
- Knowledge of implementing, training on, auditing, and improving IT service governance processes is desirable, such as:
- Change, Risk, Problem, Capacity, Availability, and Configuration Management, etc.;
- Knowledge of software processes and the CMMI model is desirable;
- A higher-education degree is required - in progress or completed;
- ISO 9001:2015 Internal Auditor training is a plus.
## Hiring
- To be agreed
## Our company
- We are passionate about technology, and this value is present in our actions and our way of working. A relaxed, creative environment, flexible hours, the option of working from home, internal events and programs with games... all of this is part of our day-to-day.
- Engagement, an intense pursuit of knowledge, empathy, and creativity are our recipe for always cultivating and harvesting the best technological results.
- This is Solutis: flexible, welcoming, restless, and in love with technology and everything amazing it gives us.
- Come join our team!
## How to apply
- [Click here to apply](https://solutis.gupy.io/jobs/516268?jobBoardSource=gupy_public_page)
 | 1.0 | [QUALITY] [SALVADOR] [REMOTE] [ALSO FOR PWD] Quality Analyst at [SOLUTIS] - <!--
==================================================
PLEASE ONLY POST IF THE JOB IS FOR SALVADOR AND NEIGHBORING CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Job description
- We are looking for a professional to work in the Quality and Processes area who enjoys working in a collaborative, solution-oriented environment.
RESPONSIBILITIES AND DUTIES
- This professional will work on the following activities:
- Assisting the manager with the implementation of ITIL / ISO 20000 / ISO 27000 / ISO 9001:2015 processes;
- Helping prepare action plans to eliminate non-conformities; reviewing and promoting improvements;
- Helping prepare training materials;
- Actively participating in internal and external audits;
among other activities.
## Location
- Salvador and Remote
## Benefits
- Details available directly from the recruiter responsible for the position
## Requirements
**Mandatory:**
- Prior experience with corporate processes is required - design, definition, implementation, and internalization;
- Prior experience with IT and administrative processes related to ISO standards (ISO-9001, ISO-20000, ISO-27001, etc.) is desirable;
- Knowledge of implementing, training on, auditing, and improving IT service governance processes is desirable, such as:
- Change, Risk, Problem, Capacity, Availability, and Configuration Management, etc.;
- Knowledge of software processes and the CMMI model is desirable;
- A higher-education degree is required - in progress or completed;
- ISO 9001:2015 Internal Auditor training is a plus.
## Hiring
- To be agreed
## Our company
- We are passionate about technology, and this value is present in our actions and our way of working. A relaxed, creative environment, flexible hours, the option of working from home, internal events and programs with games... all of this is part of our day-to-day.
- Engagement, an intense pursuit of knowledge, empathy, and creativity are our recipe for always cultivating and harvesting the best technological results.
- This is Solutis: flexible, welcoming, restless, and in love with technology and everything amazing it gives us.
- Come join our team!
## How to apply
- [Click here to apply](https://solutis.gupy.io/jobs/516268?jobBoardSource=gupy_public_page)
 | process | quality analyst at please only post if the job is for salvador and neighboring cities use desenvolvedor front end instead of front end developer o example desenvolvedor front end na job description we are looking for a professional to work in the quality and processes area who enjoys working in a collaborative solution oriented environment responsibilities and duties this professional will work on the following activities assisting the manager with the implementation of itil iso iso iso processes helping prepare action plans to eliminate non conformities reviewing and promoting improvements helping prepare training materials actively participating in internal and external audits among other activities location salvador and remote benefits details available directly from the recruiter responsible for the position requirements mandatory prior experience with corporate processes is required design definition implementation and internalization prior experience with it and administrative processes related to iso standards iso iso iso etc is desirable knowledge of implementing training on auditing and improving it service governance processes is desirable such as change risk problem capacity availability and configuration management etc knowledge of software processes and the cmmi model is desirable a higher education degree is required in progress or completed iso internal auditor training is a plus hiring to be agreed our company we are passionate about technology and this value is present in our actions and our way of working a relaxed creative environment flexible hours the option of working from home internal events and programs with games all of this is part of our day to day engagement an intense pursuit of knowledge empathy and creativity are our recipe for always cultivating and harvesting the best technological results this is solutis flexible welcoming restless and in love with technology and everything amazing it gives us come join our team how to apply | 1
18,159 | 24,193,807,832 | IssuesEvent | 2022-09-23 20:39:50 | GradienceTeam/Gradience | https://api.github.com/repos/GradienceTeam/Gradience | closed | feat: release 0.3 | twig release release-process before-release doing-release 3-days-release | # Release 0.3.0
<details>
<summary> Checklist</summary>
## Before a release
### A week before release
- [x] Announce the upcoming release by creating a new issue one week before the release.
- [x] Ask translators to translate new strings.
- [x] In the issue, prepare release notes:
- [x] The first section would be a summary of big changes.
- [x] The second section should list new dependencies, including python dependencies, and the reason they were added.
- [x] The third section would be the list of contributions.
### 3 days before release
- [x] Sign off on the release notes (or at least the first section).
- [x] Update the meson version number.
- [x] Add the release notes' first section's content to the AppData.
- [ ] Create a new branch for the release with the name being the release number and freeze new features; only merge in bug fixes and translation updates.
- [x] Create a flathub test build (by creating a pull request in the flathub repo, bumping the release tag in it, and asking Flathub's buildbot to build it).
- [x] Ask contributors to test the build. Any identified bug should halt the update until fixed.
## Doing the release
- [ ] Tag the latest commit in the release branch with the version number.
- [ ] Create a new GitHub release using the approved release notes.
## After the release
- [ ] Upgrade the flathub package by bumping the release tag.
- [ ] Notify packagers.
- [x] Write a TWIG announcement.
</details>
## Changes
### App
- Added back the quick preset switcher
- Autoload theme from applied css
- Added plugins support
- Show save dialog if the user wants to close with unsaved changes
### Plugins
- First plugin for customizing GNOME Firefox Theme
### Preset manager
- Added custom repos
- Added dropdown repo selector to the search
- Added repo badge (report if colours are bad)
- Added Adw.ExpanderRow for installed presets
- Added Description in installed presets
- Added dropdown menu for reporting bug about preset
- Download and fetch are now asynchronous
## Dependencies
- `python-cssutils`
- `python-yapsy`
- `python-aiohttp`
- `python-jinja2`
- `python-svglib`
- `python-material-color-utilities-python`
- `python-anyascii`
- `python-gobject`
- `blueprint-compiler`
- `gtk4`
- `libadwaita >= 1.2`
- `meson`
- `ninja-build`
- `python`
- `sassc`
## Contributors
As always, the biggest contributors are members of the @GradienceTeam:
- @0xMRTT
- @daudix-UFO
- @tfuxu
- @LyesSaadi
### Translators
Thanks to all translators who did amazing work on Weblate or Crowdin (if your name isn't here, ping me on Matrix or here)
- Korean: @vbalien
- Portuguese: @renatocrrs
- English US: @BritishBenji
- French: @rene-coty
- Tamil: @kbdharun
- Swedish: @bittin
- Italian: @phaerrax
- Dutch: @emansom
- German: @rene-coty | 1.0 | feat: release 0.3 - # Release 0.3.0
<details>
<summary> Checklist</summary>
## Before a release
### A week before release
- [x] Announce the upcoming release by creating a new issue one week before the release.
- [x] Ask translators to translate new strings.
- [x] In the issue, prepare release notes:
- [x] The first section would be a summary of big changes.
- [x] The second section should list new dependencies, including python dependencies, and the reason they were added.
- [x] The third section would be the list of contributions.
### 3 days before release
- [x] Sign off on the release notes (or at least the first section).
- [x] Update the meson version number.
- [x] Add the release notes' first section's content to the AppData.
- [ ] Create a new branch for the release with the name being the release number and freeze new features; only merge in bug fixes and translation updates.
- [x] Create a flathub test build (by creating a pull request in the flathub repo, bumping the release tag in it, and asking Flathub's buildbot to build it).
- [x] Ask contributors to test the build. Any identified bug should halt the update until fixed.
## Doing the release
- [ ] Tag the latest commit in the release branch with the version number.
- [ ] Create a new GitHub release using the approved release notes.
## After the release
- [ ] Upgrade the flathub package by bumping the release tag.
- [ ] Notify packagers.
- [x] Write a TWIG announcement.
</details>
## Changes
### App
- Added back the quick preset switcher
- Autoload theme from applied css
- Added plugins support
- Show save dialog if the user wants to close with unsaved changes
### Plugins
- First plugin for customizing GNOME Firefox Theme
### Preset manager
- Added custom repos
- Added dropdown repo selector to the search
- Added repo badge (report if colours are bad)
- Added Adw.ExpanderRow for installed presets
- Added Description in installed presets
- Added dropdown menu for reporting bug about preset
- Download and fetch are now asynchronous
## Dependencies
- `python-cssutils`
- `python-yapsy`
- `python-aiohttp`
- `python-jinja2`
- `python-svglib`
- `python-material-color-utilities-python`
- `python-anyascii`
- `python-gobject`
- `blueprint-compiler`
- `gtk4`
- `libadwaita >= 1.2`
- `meson`
- `ninja-build`
- `python`
- `sassc`
## Contributors
As always, the biggest contributors are members of the @GradienceTeam:
- @0xMRTT
- @daudix-UFO
- @tfuxu
- @LyesSaadi
### Translators
Thanks to all translators who did amazing work on Weblate or Crowdin (if your name isn't here, ping me on Matrix or here)
- Korean: @vbalien
- Portuguese: @renatocrrs
- English US: @BritishBenji
- French: @rene-coty
- Tamil: @kbdharun
- Swedish: @bittin
- Italian: @phaerrax
- Dutch: @emansom
- German: @rene-coty | process | feat release release checklist before a release a week before release announce the upcoming release by creating a new issue one week before the release ask translators to translate new strings in the issue prepare release notes the first section would be a summary of big changes the second section should list new dependencies including python dependencies and the reason they were added the third section would be the list of contributions days before release sign off on the release notes or at least the first section update the meson version number add the release notes first section s content to the appdata create a new branch for the release with the name being the release number and freeze new features only merge in bug fixes and translation updates create a flathub test build by creating a pull request in the flathub repo bumping the release tag in it and asking flathub s buildbot to build it ask contributors to test the build any identified bug should halt the update until fixed doing the release tag the latest commit in the release branch with the version number create a new github release using the approved release notes after the release upgrade the flathub package by bumping the release tag notify packagers write a twig announcement changes app added back the quick preset switcher autoload theme from applied css added plugins support show save dialog if the user wants to close with unsaved changes plugins first plugin for customizing gnome firefox theme preset manager added custom repos added dropdown repo selector to the search added repo badge report if colours are bad added adw expanderrow for installed presets added description in installed presets added dropdown menu for reporting bug about preset download and fetch are now asynchronous dependencies python cssutils python yapsy python aiohttp python python svglib python material color utilities python python anyascii python gobject blueprint compiler libadwaita meson ninja build python sassc contributors as always the biggest contributors are members of the gradienceteam daudix ufo tfuxu lyessaadi translators thanks to all translators who did amazing work on weblate or crowdin if your name isn t here ping me on matrix or here korean vbalien portuguese renatocrrs english us britishbenji french rene coty tamil kbdharun swedish bittin italian phaerrax dutch emansom german rene coty | 1
15,951 | 20,169,932,586 | IssuesEvent | 2022-02-10 09:30:29 | ooi-data/CE09OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered | https://api.github.com/repos/ooi-data/CE09OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered | opened | 🛑 Processing failed: ValueError | process | ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-02-10T09:30:28.991105.
## Details
Flow name: `CE09OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: shape of data to append is not compatible with the array; all dimensions must match except for the dimension being appended
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing
process_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 147, in process_dataset
append_to_zarr(mod_ds, store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2319, in _append_nosync
raise ValueError('shape of data to append is not compatible with the array; '
ValueError: shape of data to append is not compatible with the array; all dimensions must match except for the dimension being appended
```
</details>
| 1.0 | 🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-02-10T09:30:28.991105.
## Details
Flow name: `CE09OSSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: shape of data to append is not compatible with the array; all dimensions must match except for the dimension being appended
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing
process_dataset(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 147, in process_dataset
append_to_zarr(mod_ds, store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2319, in _append_nosync
raise ValueError('shape of data to append is not compatible with the array; '
ValueError: shape of data to append is not compatible with the array; all dimensions must match except for the dimension being appended
```
</details>
| process | 🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host wavss a dcl motion recovered task name processing task error type valueerror error message shape of data to append is not compatible with the array all dimensions must match except for the dimension being appended traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing process dataset file srv conda envs notebook lib site packages ooi harvester processor init py line in process dataset append to zarr mod ds store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync raise valueerror shape of data to append is not compatible with the array valueerror shape of data to append is not compatible with the array all dimensions must match except for the dimension being appended | 1 |
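The append failure in the record above is easy to reproduce outside the OOI pipeline. A minimal sketch against zarr 2.x (the shapes are hypothetical, not the actual instrument data):
```
import numpy as np
import zarr

# A stand-in store: 10 rows of 4 columns, chunked along the first axis.
z = zarr.zeros((10, 4), chunks=(5, 4), dtype="f8")

# Appending along axis 0 works while every other dimension matches.
z.append(np.ones((3, 4)))

# A mismatched second dimension (5 instead of 4) raises the exact
# ValueError shown in the traceback above.
z.append(np.ones((3, 5)))
```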
20,350 | 27,009,990,645 | IssuesEvent | 2023-02-10 14:42:24 | sysflow-telemetry/sysflow | https://api.github.com/repos/sysflow-telemetry/sysflow | opened | Add Kafka transport | enhancement sf-processor | **Indicate project**
Processor
**Overview**
We want to enable Kafka transport in the SysFlow Processor, using our encoder/transport architecture as the base framework.
**Tasks**
- [ ] Encoder
- [ ] Add Avro serialization of SysFlow records
- [ ] Transport
- [ ] Add Kafka transport (for avro-, json-, and ecs- encoded records)
- [ ] Tests
- [ ] Documentation
**Additional context**
Working branch: TBD
| 1.0 | Add Kafka transport - **Indicate project**
Processor
**Overview**
We want to enable Kafka transport in the SysFlow Processor, using our encoder/transport architecture as the base framework.
**Tasks**
- [ ] Encoder
- [ ] Add Avro serialization of SysFlow records
- [ ] Transport
- [ ] Add Kafka transport (for avro-, json-, and ecs- encoded records)
- [ ] Tests
- [ ] Documentation
**Additional context**
Working branch: TBD
| process | add kafka transport indicate project processor overview we want to enable kafka transport in the sysflow processor using our encoder transport architecture as the base framework tasks encoder add avro serialization of sysflow records transport add kafka transport for avro json and ecs encoded records tests documentation additional context working branch tbd | 1 |
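The sf-processor itself is written in Go, so the following is only an illustrative Python sketch of the encode-then-transport pattern the task list above describes, assuming the fastavro and confluent-kafka packages and a made-up record schema:
```
import io

from confluent_kafka import Producer  # assumed dependency
from fastavro import schemaless_writer  # assumed dependency

# Hypothetical schema standing in for a SysFlow record.
SCHEMA = {
    "type": "record",
    "name": "SysFlowRecord",
    "fields": [
        {"name": "ts", "type": "long"},
        {"name": "proc", "type": "string"},
    ],
}

def encode_avro(record: dict) -> bytes:
    """Encoder step: serialize one record as schemaless Avro bytes."""
    buf = io.BytesIO()
    schemaless_writer(buf, SCHEMA, record)
    return buf.getvalue()

# Transport step: produce the encoded bytes to a Kafka topic.
producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("sysflow", value=encode_avro({"ts": 1676038800, "proc": "bash"}))
producer.flush()
```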
639 | 3,097,944,984 | IssuesEvent | 2015-08-28 07:37:32 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | On the main portal, fix entering a service's form when the service exists only in a region | active bug hi priority In process of testing test version | https://test.igov.org.ua/service/161/general/region
After entering the service in the Kherson oblast, immediately after authorization it sends you back to the authorization method selection (instead of the service form)...
P.S.: the problem is precisely that the service is located in an oblast | 1.0 | On the main portal, fix entering a service's form when the service exists only in a region - https://test.igov.org.ua/service/161/general/region
After entering the service in the Kherson oblast, immediately after authorization it sends you back to the authorization method selection (instead of the service form)...
P.S.: the problem is precisely that the service is located in an oblast | process | on the main portal fix entering a service s form when the service exists only in a region after entering the service in the kherson oblast immediately after authorization it sends you back to the authorization method selection instead of the service form p s the problem is precisely that the service is located in an oblast | 1
9,098 | 12,168,815,357 | IssuesEvent | 2020-04-27 13:16:59 | Arch666Angel/mods | https://api.github.com/repos/Arch666Angel/mods | closed | [BUG] Productivity not allowed on liquid resin/plastic/rubber recipe from bio chain | Angels Bio Processing | While the same petrochem recipes allow productivity | 1.0 | [BUG] Productivity not allowed on liquid resin/plastic/rubber recipe from bio chain - While the same petrochem recipes allow productivity | process | productivity not allowed on liquid resin plastic rubber recipe from bio chain while the same petrochem recipes allow productivity | 1
78,997 | 15,586,092,453 | IssuesEvent | 2021-03-18 01:09:21 | Farsene1/Object-Oriented-Programming-Project | https://api.github.com/repos/Farsene1/Object-Oriented-Programming-Project | opened | CVE-2019-10072 (High) detected in tomcat-embed-core-8.5.34.jar | security vulnerability | ## CVE-2019-10072 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /Object-Oriented-Programming-Project/_2_client/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.0.5.RELEASE.jar
- :x: **tomcat-embed-core-8.5.34.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2019-0199 was incomplete and did not address HTTP/2 connection window exhaustion on write in Apache Tomcat versions 9.0.0.M1 to 9.0.19 and 8.5.0 to 8.5.40 . By not sending WINDOW_UPDATE messages for the connection window (stream 0) clients were able to cause server-side threads to block eventually leading to thread exhaustion and a DoS.
<p>Publish Date: 2019-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10072>CVE-2019-10072</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41">http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41</a></p>
<p>Release Date: 2019-06-21</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.20,8.5.41,org.apache.tomcat:tomcat-coyote:9.0.20,8.5.41</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-10072 (High) detected in tomcat-embed-core-8.5.34.jar - ## CVE-2019-10072 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /Object-Oriented-Programming-Project/_2_client/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.0.5.RELEASE.jar
- :x: **tomcat-embed-core-8.5.34.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2019-0199 was incomplete and did not address HTTP/2 connection window exhaustion on write in Apache Tomcat versions 9.0.0.M1 to 9.0.19 and 8.5.0 to 8.5.40 . By not sending WINDOW_UPDATE messages for the connection window (stream 0) clients were able to cause server-side threads to block eventually leading to thread exhaustion and a DoS.
<p>Publish Date: 2019-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10072>CVE-2019-10072</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41">http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41</a></p>
<p>Release Date: 2019-06-21</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.20,8.5.41,org.apache.tomcat:tomcat-coyote:9.0.20,8.5.41</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file object oriented programming project client pom xml path to vulnerable library root repository org apache tomcat embed tomcat embed core tomcat embed core jar root repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library vulnerability details the fix for cve was incomplete and did not address http connection window exhaustion on write in apache tomcat versions to and to by not sending window update messages for the connection window stream clients were able to cause server side threads to block eventually leading to thread exhaustion and a dos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote step up your open source security game with whitesource | 0 |
8,138 | 11,339,795,002 | IssuesEvent | 2020-01-23 03:34:42 | ryankeefe92/Episodes | https://api.github.com/repos/ryankeefe92/Episodes | closed | When episodes are in a queue (for downloading with aria or to be processed), they should be prioritized chronologically (does this already happen because everything goes alphabetically?) (see comments) | download: feature process: question | - If S02E01 and S02E03 are added at the same time, S02E01 would go first, and then if S02E02 is added while S02E01 is still downloading/processing, it should go next even though S02E03 has been there for longer
| 1.0 | When episodes are in a queue (for downloading with aria or to be processed), they should be prioritized chronologically (does this already happen because everything goes alphabetically?) (see comments) - - If S02E01 and S02E03 are added at the same time, S02E01 would go first, and then if S02E02 is added while S02E01 is still downloading/processing, it should go next even though S02E03 has been there for longer
| process | when episodes are in a queue for downloading with aria or to be processed they should be prioritized chronologically does this already happen because everything goes alphabetically see comments if and are added at the same time would go first and then if is added while is still downloading processing it should go next even though has been there for longer | 1 |
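A chronological priority queue of the kind the issue above asks for is straightforward with the standard library; a minimal sketch (not the Episodes codebase):
```
import heapq
import itertools

counter = itertools.count()  # tie-breaker: insertion order
queue = []

def enqueue(season: int, episode: int) -> None:
    # Ordering by (season, episode) keeps the queue chronological
    # no matter when an item is added.
    heapq.heappush(queue, ((season, episode), next(counter)))

enqueue(2, 1)  # S02E01
enqueue(2, 3)  # S02E03
enqueue(2, 2)  # S02E02 arrives late but still precedes S02E03
while queue:
    (season, episode), _ = heapq.heappop(queue)
    print(f"S{season:02d}E{episode:02d}")  # S02E01, S02E02, S02E03
```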
422,351 | 28,435,732,063 | IssuesEvent | 2023-04-15 09:29:22 | pylint-dev/pylint | https://api.github.com/repos/pylint-dev/pylint | closed | Run only selected plugins | Question Documentation :green_book: | ### Question
I have a custom plugin for pylint. Is it possible to run only it on the code?
### Documentation for future user
Probably there is no such functionality at all? `pylint --help` didn't show an option to do this.
### Additional context
_No response_ | 1.0 | Run only selected plugins - ### Question
I have a custom plugin for pylint. Is it possible to run only it on the code?
### Documentation for future user
Probably there is no such functionality at all? `pylint --help` didn't show an option to do this.
### Additional context
_No response_ | non_process | run only selected plugins question i have a custom plugin for pylint is it possible to run only it on the code documentation for future user probably there is no such functionality at all pylint help didn t show an option to do this additional context no response | 0 |
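For what it's worth, pylint can get close to this by disabling every built-in message and re-enabling only the ones a plugin emits; a sketch using the programmatic entry point (the plugin module and message name below are placeholders):
```
from pylint.lint import Run

# Disable all built-in checks, then re-enable only the messages
# emitted by the custom plugin (IDs below are hypothetical).
Run(
    [
        "--load-plugins=my_pylint_plugin",
        "--disable=all",
        "--enable=my-custom-check",
        "package_to_lint/",
    ],
    exit=False,  # keep the interpreter alive after linting
)
```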
3,051 | 6,044,561,718 | IssuesEvent | 2017-06-12 06:22:52 | javabird25/long-hour-and-a-half | https://api.github.com/repos/javabird25/long-hour-and-a-half | closed | Diaper + Cheat Bug | bug will be processed soon | Neither diapers nor pads stop/absorb the flow of pee
Pressing the cheat button in v1.3 crashes the game | 1.0 | Diaper + Cheat Bug - Neither diapers nor pads stop/absorb the flow of pee
Pressing the cheat button in v1.3 crashes the game | process | diaper cheat bug neither diapers nor pads stop absorb the flow of pee pressing the cheat button in crashes the game | 1 |
9,305 | 12,313,332,283 | IssuesEvent | 2020-05-12 15:09:48 | arunkumar9t2/scabbard | https://api.github.com/repos/arunkumar9t2/scabbard | closed | Highlight nodes declared via @BindsInstance differently | enhancement module:processor | Currently bindings exposed via `@BindsInstance` are not differentiated in the graph and are rendered like any other node. Might be worth highlighting this via color or label.
Notes:
Could use `binding.kind() == BindingKind.BOUND_INSTANCE` to check for this. | 1.0 | Highlight nodes declared via @BindsInstance differently - Currently bindings exposed via `@BindsInstance` are not differentiated in the graph and are rendered like any other node. Might be worth highlighting this via color or label.
Notes:
Could use `binding.kind() == BindingKind.BOUND_INSTANCE` to check for this. | process | highlight nodes declared via bindsinstance differently currently bindings exposed via bindsinstance are not differentiated in the graph and are rendered like any other node might be worth highlighting this via color or label notes could use binding kind bindingkind bound instance to check for this | 1
226,047 | 17,295,764,140 | IssuesEvent | 2021-07-25 17:42:29 | oracle/opengrok | https://api.github.com/repos/oracle/opengrok | closed | glassfish link on "how to setup opengrok" page has expired | documentation | **Describe the bug**
Glassfish link (https://glassfish.dev.java.net/) on https://github.com/oracle/opengrok/wiki/How-to-setup-OpenGrok redirects to "We're sorry the java.net site has closed." page (https://www.oracle.com/splash/java.net/maintenance/index.html). | 1.0 | glassfish link on "how to setup opengrok" page has expired - **Describe the bug**
Glassfish link (https://glassfish.dev.java.net/) on https://github.com/oracle/opengrok/wiki/How-to-setup-OpenGrok redirects to "We're sorry the java.net site has closed." page (https://www.oracle.com/splash/java.net/maintenance/index.html). | non_process | glassfish link on how to setup opengrok page has expired describe the bug glassfish link on redirects to we re sorry the java net site has closed page | 0 |
21,085 | 28,039,656,270 | IssuesEvent | 2023-03-28 17:29:37 | AssetRipper/AssetRipper | https://api.github.com/repos/AssetRipper/AssetRipper | closed | [Bug]: Separated static meshes not flagged as static | bug mesh processing | ### Are you on the latest version of AssetRipper?
Yes, I'm on the latest release of AssetRipper.
### Which release are you using?
Windows x64
### Which game did this occur on?
_No response_
### Which Unity version did this occur on?
2020.3.33
### Is the game Mono or IL2Cpp?
Mono
### Describe the issue.
Hello again ;)
After export and checking in Unity, I noticed that when I choose the 'separate static mesh' option, the corresponding gameObject (the one holding the MeshFilter of a static mesh, in a scene or prefab) is not marked as 'static'.
I think the 'm_StaticEditorFlags' field of the gameObject is not updated when separating the combined meshes...?
Just to let you know...
Have a nice day ;)
Zbuffer
### Relevant log output
_No response_ | 1.0 | [Bug]: Separated static meshes not flagged as static - ### Are you on the latest version of AssetRipper?
Yes, I'm on the latest release of AssetRipper.
### Which release are you using?
Windows x64
### Which game did this occur on?
_No response_
### Which Unity version did this occur on?
2020.3.33
### Is the game Mono or IL2Cpp?
Mono
### Describe the issue.
Hello again ;)
After export and checking in Unity, I noticed that when I choose the 'separate static mesh' option, the corresponding gameObject (the one holding the MeshFilter of a static mesh, in a scene or prefab) is not marked as 'static'.
I think the 'm_StaticEditorFlags' field of the gameObject is not updated when separating the combined meshes...?
Just to let you know...
Have a nice day ;)
Zbuffer
### Relevant log output
_No response_ | process | separated static meshes not flagged as static are you on the latest version of assetripper yes i m on the latest release of assetripper which release are you using windows which game did this occur on no response which unity version did this occur on is the game mono or mono describe the issue hello again after export and checking in unity i noticed that when i choose the separate static mesh option the corresponding gameobject the one holding the meshfilter of a static mesh in a scene or prefab is not marked as static i think the m staticeditorflags field of the gameobject is not updated when separating the combined meshes just to let you know have a nice day zbuffer relevant log output no response | 1
15,675 | 19,847,448,147 | IssuesEvent | 2022-01-21 08:28:51 | ooi-data/CE09OSSM-MFD37-03-DOSTAD000-telemetered-dosta_abcdjm_ctdbp_dcl_instrument | https://api.github.com/repos/ooi-data/CE09OSSM-MFD37-03-DOSTAD000-telemetered-dosta_abcdjm_ctdbp_dcl_instrument | opened | 🛑 Processing failed: ValueError | process | ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:28:50.542508.
## Details
Flow name: `CE09OSSM-MFD37-03-DOSTAD000-telemetered-dosta_abcdjm_ctdbp_dcl_instrument`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
| 1.0 | 🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:28:50.542508.
## Details
Flow name: `CE09OSSM-MFD37-03-DOSTAD000-telemetered-dosta_abcdjm_ctdbp_dcl_instrument`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
| process | 🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name telemetered dosta abcdjm ctdbp dcl instrument task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got | 1 |
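The unpack error at the bottom of this traceback happens when the chunk indexer yields no entries at all, e.g. for a selection over a zero-length dimension; a stripped-down illustration of the failing pattern (not the actual zarr internals):
```
# zip(*items) over an empty list produces nothing to unpack, which is
# exactly what zarr's _get_selection hits when zero chunks match.
indexer = []  # e.g. a selection that touches no chunks
try:
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 0)
```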
130,448 | 27,700,314,904 | IssuesEvent | 2023-03-14 07:25:37 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | closed | `IllegalArgumentException: Static mocking cannot be used without mock framework` | ctg-bug comp-codegen | **Description**
If the user generates tests with mocking of static methods and later removes the dependency on Mockito, tests can no longer be generated.
**To Reproduce**
Steps to reproduce the behavior:
1. Create new Java project
2. Create or copy some Java class there, for instance copy `org.utbot.examples.exceptions.ExceptionExamples` from UTBot Java project.
3. Invoke test generation on that class with enabled:
- **Mock everything outside the class**
- **Mock static methods**
4. After tests are generated, remove the added dependency on Mockito from the Gradle/Maven config file. That emulates the situation where the user doesn't want mocks in the project anymore.
5. Invoke test generation again with "**Do not mock**" option now
**Expected behavior**
Tests without mocks are generated.
**Actual behavior**
Exception is thrown at first execution. No tests are generated.
**Visual proofs (screenshots, logs, images)**
Exception thrown:
~~~
java.lang.IllegalArgumentException: Static mocking cannot be used without mock framework
at org.utbot.framework.plugin.api.StandardApplicationContext.<init>(Api.kt:1163)
at org.utbot.intellij.plugin.generator.UtTestsDialogProcessor$createTests$1$1.run(UtTestsDialogProcessor.kt:173)
at com.intellij.openapi.progress.impl.CoreProgressManager.startTask(CoreProgressManager.java:442)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.startTask(ProgressManagerImpl.java:114)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcessWithProgressAsynchronously$5(CoreProgressManager.java:493)
at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$3(ProgressRunner.java:244)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$2(CoreProgressManager.java:189)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$executeProcessUnderProgress$12(CoreProgressManager.java:608)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:683)
at com.intellij.openapi.progress.impl.CoreProgressManager.computeUnderProgress(CoreProgressManager.java:639)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:607)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:60)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:176)
at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$4(ProgressRunner.java:244)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:668)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:665)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:665)
at java.base/java.lang.Thread.run(Thread.java:829)
~~~
**Additional context**
The problem is in the `resources/mockito-extensions/org.mockito.plugins.MockMaker` file generated at step #3; should the user remove it, no problems occur.
| 1.0 | `IllegalArgumentException: Static mocking cannot be used without mock framework` - **Description**
If the user generates tests with mocking of static methods and later removes the dependency on Mockito, tests can no longer be generated.
**To Reproduce**
Steps to reproduce the behavior:
1. Create new Java project
2. Create or copy some Java class there, for instance copy `org.utbot.examples.exceptions.ExceptionExamples` from UTBot Java project.
3. Invoke test generation on that class with enabled:
- **Mock everything outside the class**
- **Mock static methods**
4. After tests are generated, remove the added dependency on Mockito from the Gradle/Maven config file. That emulates the situation where the user doesn't want mocks in the project anymore.
5. Invoke test generation again with "**Do not mock**" option now
**Expected behavior**
Test without mocks are generated.
**Actual behavior**
Exception is thrown at first execution. No tests are generated.
**Visual proofs (screenshots, logs, images)**
Exception thrown:
~~~
java.lang.IllegalArgumentException: Static mocking cannot be used without mock framework
at org.utbot.framework.plugin.api.StandardApplicationContext.<init>(Api.kt:1163)
at org.utbot.intellij.plugin.generator.UtTestsDialogProcessor$createTests$1$1.run(UtTestsDialogProcessor.kt:173)
at com.intellij.openapi.progress.impl.CoreProgressManager.startTask(CoreProgressManager.java:442)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.startTask(ProgressManagerImpl.java:114)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcessWithProgressAsynchronously$5(CoreProgressManager.java:493)
at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$3(ProgressRunner.java:244)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$2(CoreProgressManager.java:189)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$executeProcessUnderProgress$12(CoreProgressManager.java:608)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:683)
at com.intellij.openapi.progress.impl.CoreProgressManager.computeUnderProgress(CoreProgressManager.java:639)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:607)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:60)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:176)
at com.intellij.openapi.progress.impl.ProgressRunner.lambda$submit$4(ProgressRunner.java:244)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:668)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:665)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:665)
at java.base/java.lang.Thread.run(Thread.java:829)
~~~
**Additional context**
The problem is in `resources/mockito-extensions/org.mockito.plugins.MockMaker` file generated at step #3, should user remove it then no problems occur.
| non_process | illegalargumentexception static mocking cannot be used without mock framework description in case user generates tests with mocking static methods and later removes dependency on mockito then tests can not generated any more to reproduce steps to reproduce the behavior create new java project create or copy some java class there for instance copy org utbot examples exceptions exceptionexamples from utbot java project invoke test generation on that class with enabled mock everything outside the class mock static methods after tests are generated remove added dependency on mockito from gradle maven config file that s emulate situation when user doesn t want mocks in the project anymore invoke test generation again with do not mock option now expected behavior test without mocks are generated actual behavior exception is thrown at first execution no tests are generated visual proofs screenshots logs images exception thrown java lang illegalargumentexception static mocking cannot be used without mock framework at org utbot framework plugin api standardapplicationcontext api kt at org utbot intellij plugin generator uttestsdialogprocessor createtests run uttestsdialogprocessor kt at com intellij openapi progress impl coreprogressmanager starttask coreprogressmanager java at com intellij openapi progress impl progressmanagerimpl starttask progressmanagerimpl java at com intellij openapi progress impl coreprogressmanager lambda runprocesswithprogressasynchronously coreprogressmanager java at com intellij openapi progress impl progressrunner lambda submit progressrunner java at com intellij openapi progress impl coreprogressmanager lambda runprocess coreprogressmanager java at com intellij openapi progress impl coreprogressmanager lambda executeprocessunderprogress coreprogressmanager java at com intellij openapi progress impl coreprogressmanager registerindicatorandrun coreprogressmanager java at com intellij openapi progress impl coreprogressmanager computeunderprogress coreprogressmanager java at com intellij openapi progress impl coreprogressmanager executeprocessunderprogress coreprogressmanager java at com intellij openapi progress impl progressmanagerimpl executeprocessunderprogress progressmanagerimpl java at com intellij openapi progress impl coreprogressmanager runprocess coreprogressmanager java at com intellij openapi progress impl progressrunner lambda submit progressrunner java at java base java util concurrent completablefuture asyncsupply run completablefuture java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java util concurrent executors privilegedthreadfactory run executors java at java base java util concurrent executors privilegedthreadfactory run executors java at java base java security accesscontroller doprivileged native method at java base java util concurrent executors privilegedthreadfactory run executors java at java base java lang thread run thread java additional context the problem is in resources mockito extensions org mockito plugins mockmaker file generated at step should user remove it then no problems occur | 0 |
269,526 | 8,439,816,091 | IssuesEvent | 2018-10-18 03:58:44 | ExchangeUnion/xud | https://api.github.com/repos/ExchangeUnion/xud | closed | ownOrders are removed from queues after connection failure to peer | critical bug in progress order book p2p top priority | ### Background
`OrderBook` is listening on `peer.close` events to trigger `MatchingEngine.removePeerOrders` with the peer pubKey. If the peer was never successfully connected & handshaked, its pubKey is not available. This could have gone unnoticed if not for another bug in `MatchingEngine.removePeerOrders`, which causes it to delete all ownOrders from the queues if invoked with an undefined `peerPubKey`.
### Steps to reproduce the issue
1. Have a non-reachable node in `Nodes` table
2. Place an ownOrder immediately after `xud` launch
### Current behaviour
`MatchingEngine.removePeerOrders` is being called after a connection failure to the peer, and the ownOrder is removed from the queue.
### Expected behaviour
1. `MatchingEngine.removePeerOrders` shouldn't be called after `peer.close` of a non-handshaked peer, or even better, of a peer which we haven't received any orders from.
2. `MatchingEngine.removePeerOrders` should protect better against an undefined `peerPubKey`
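A minimal sketch of the guard in item 2, in Python purely for illustration (xud itself is TypeScript, and `orders_by_peer` is a hypothetical index, not the real field name):
```python
class MatchingEngine:
    def __init__(self):
        self.orders_by_peer = {}  # hypothetical index: peer pubKey -> [orders]

    def remove_peer_orders(self, peer_pub_key=None):
        # Guard: an unknown/undefined pubKey must be a no-op, never a
        # fall-through that clears ownOrders from the queues.
        if not peer_pub_key:
            return []
        return self.orders_by_peer.pop(peer_pub_key, [])

engine = MatchingEngine()
assert engine.remove_peer_orders(None) == []  # no orders are touched
```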
| 1.0 | non_process | 0 |
34,453 | 16,563,655,471 | IssuesEvent | 2021-05-29 02:09:03 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | DataLoaders not combining findUnique queries that can be combined | bug/0-needs-info kind/bug team/client topic: performance |
## Bug description
When calling separate `findUnique()` calls, Prisma combines all the queries very well, thanks to the DataLoader.
This works well even when the `where` clauses of the `findUnique()` calls are different, by using `IN`.
However, I found that it doesn't combine queries that have the `include` field.
I have logged the queries and found that the exact same two queries were getting fired.
I know that there is a plan to resolve `include`s using joins, and that will make this moot. However, if implementing this isn't a big deal, I would like to see it implemented soon.
## How to reproduce
```ts
const prisma = new PrismaClient({ log: ['query'] })
prisma.someTable.findUnique({ where: { id: 1 } }).then(console.log)
prisma.someTable.findUnique({ where: { id: 1 } }).then(console.log)
prisma.someTable.findUnique({
where: { id: 1 },
include: { someRelation: true }
}).then(console.log)
```
Then a total of 3 queries are fired (one for the `findUnique()` calls without `include`, one for the `findUnique()` call with `include`, and one for the relation).
## Expected behavior
The number of queries can be reduced to 2 if the issue is resolved, and eventually can be 1 if joins are implemented.
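To illustrate why the `include` call cannot join the other two, here is a toy batcher in Python (not Prisma's actual implementation; grouping requests by a "shape" key that stands in for the selection/include set is an assumption). Requests issued in the same tick are grouped by shape, and each group becomes one `WHERE id IN (...)` query:
```python
import asyncio
from collections import defaultdict

class ToyLoader:
    """Batches findUnique-style lookups issued in the same event-loop tick."""

    def __init__(self):
        self.pending = defaultdict(list)  # selection shape -> [(id, future)]

    def find_unique(self, id_, shape="plain"):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        if not self.pending:
            loop.call_soon(self._flush)  # flush once all callers have queued up
        self.pending[shape].append((id_, fut))
        return fut

    def _flush(self):
        batches, self.pending = self.pending, defaultdict(list)
        for shape, requests in batches.items():
            ids = sorted({id_ for id_, _ in requests})
            print(f"1 query for shape={shape!r}: SELECT ... WHERE id IN {ids}")
            for _, fut in requests:
                fut.set_result({"id": ids[0]})  # dummy row

async def main():
    loader = ToyLoader()
    await asyncio.gather(
        loader.find_unique(1),
        loader.find_unique(1),
        loader.find_unique(1, shape="include someRelation"),
    )

asyncio.run(main())  # prints two queries, not three
```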
## Environment & setup
- OS: macOS 11.2.3 ~~Does this even matter~~
- Database: MySQL 5.7
- Node.js version: v15.5.0
- Prisma version:
```
prisma : 2.19.0
@prisma/client : 2.19.0
Current platform : darwin
Query Engine : query-engine c1455d0b443d66b0d9db9bcb1bb9ee0d5bbc511d (at ../../node_modules/@prisma/engines/query-engine-darwin)
Migration Engine : migration-engine-cli c1455d0b443d66b0d9db9bcb1bb9ee0d5bbc511d (at ../../node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core c1455d0b443d66b0d9db9bcb1bb9ee0d5bbc511d (at ../../node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt c1455d0b443d66b0d9db9bcb1bb9ee0d5bbc511d (at ../../node_modules/@prisma/engines/prisma-fmt-darwin)
Studio : 0.358.0
```
| True | non_process | 0 |
16,653 | 21,722,715,244 | IssuesEvent | 2022-05-11 03:06:19 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Editing Model with bad reference crashes QGIS | Feedback stale Processing Bug Windows Modeller | Using 3.16.2, a model I had created crashed QGIS when I attempted to open the model for editing. I was able to edit it, find the source of the problem, and fix it using 3.10.
**How to Reproduce**
In 3.10 I was able to see that one of the inputs that the Merge Vector Layers algorithm was using was empty. I removed the reference, saved and after that, I was able to edit the model in 3.16 again.
I'm not sure how the model got corrupted in the first place. Generally, if you change the handle of an algorithm in the designer it changes the references to it downstream. For some reason that did not happen in this case. In any case, the Model Designer should be able to work with models that have broken references.
Sadly, I have fixed my model so it would not be of much use here.
| 1.0 | process | 1 |
6,467 | 9,546,653,813 | IssuesEvent | 2019-05-01 20:34:55 | openopps/openopps-platform | https://api.github.com/repos/openopps/openopps-platform | closed | Update Before you apply text on applicant view of internship opportunity | Apply Process Approved Opportunity Create Requirements Ready State Dept. | Who: Interns
What: Viewing of the community name
Why: State has requested all references to their program be U.S. Department of State Student Internship Program (Unpaid)
Acceptance Criteria:
On the applicant view of the internship, update the text in the Before you apply box to reflect the full program name
- Current text reads, "To apply to the unpaid internship student program..."
- Text should now read, "To apply to the U.S. Department of State Student Internship Program (Unpaid), you must...."
Mock:
https://opm.invisionapp.com/share/ZEPNZR09Q54#/320303454_State_-_Opportunity_-Desktop-
| 1.0 | process | 1 |
162,585 | 12,681,687,016 | IssuesEvent | 2020-06-19 15:51:18 | symfony/symfony-docs | https://api.github.com/repos/symfony/symfony-docs | closed | PHPUnit binary path is wrong | PHPUnitBridge Testing | I noticed that phpunit binary path is wrong in this section: https://symfony.com/doc/current/testing.html#the-phpunit-testing-framework
Path is mentioned as `./bin/phpunit` while it should be `./vendor/bin/simple-phpunit` like it is mentioned in this section: https://symfony.com/doc/current/components/phpunit_bridge.html#usage
| 1.0 | non_process | 0 |
179,898 | 21,605,133,956 | IssuesEvent | 2022-05-04 01:10:01 | ibm-cio-vulnerability-scanning/insomnia | https://api.github.com/repos/ibm-cio-vulnerability-scanning/insomnia | opened | CVE-2022-1214 (High) detected in axios-0.21.4.tgz, axios-0.21.2.tgz | security vulnerability | ## CVE-2022-1214 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>axios-0.21.4.tgz</b>, <b>axios-0.21.2.tgz</b></p></summary>
<p>
<details><summary><b>axios-0.21.4.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.4.tgz">https://registry.npmjs.org/axios/-/axios-0.21.4.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **axios-0.21.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>axios-0.21.2.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.2.tgz">https://registry.npmjs.org/axios/-/axios-0.21.2.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **axios-0.21.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ibm-cio-vulnerability-scanning/insomnia/commit/6584b84b5580d875cdc382437add0bd24e27b39e">6584b84b5580d875cdc382437add0bd24e27b39e</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository axios/axios prior to 0.26.
<p>Publish Date: 2022-05-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1214>CVE-2022-1214</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/ef7b4ab6-a3f6-4268-a21a-e7104d344607/">https://huntr.dev/bounties/ef7b4ab6-a3f6-4268-a21a-e7104d344607/</a></p>
<p>Release Date: 2022-05-03</p>
<p>Fix Resolution: 0.26.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-1214 (High) detected in axios-0.21.4.tgz, axios-0.21.2.tgz - ## CVE-2022-1214 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>axios-0.21.4.tgz</b>, <b>axios-0.21.2.tgz</b></p></summary>
<p>
<details><summary><b>axios-0.21.4.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.4.tgz">https://registry.npmjs.org/axios/-/axios-0.21.4.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **axios-0.21.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>axios-0.21.2.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.2.tgz">https://registry.npmjs.org/axios/-/axios-0.21.2.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **axios-0.21.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ibm-cio-vulnerability-scanning/insomnia/commit/6584b84b5580d875cdc382437add0bd24e27b39e">6584b84b5580d875cdc382437add0bd24e27b39e</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository axios/axios prior to 0.26.
<p>Publish Date: 2022-05-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1214>CVE-2022-1214</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/ef7b4ab6-a3f6-4268-a21a-e7104d344607/">https://huntr.dev/bounties/ef7b4ab6-a3f6-4268-a21a-e7104d344607/</a></p>
<p>Release Date: 2022-05-03</p>
<p>Fix Resolution: 0.26.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in axios tgz axios tgz cve high severity vulnerability vulnerable libraries axios tgz axios tgz axios tgz promise based http client for the browser and node js library home page a href dependency hierarchy x axios tgz vulnerable library axios tgz promise based http client for the browser and node js library home page a href dependency hierarchy x axios tgz vulnerable library found in head commit a href found in base branch develop vulnerability details exposure of sensitive information to an unauthorized actor in github repository axios axios prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
650,380 | 21,389,663,294 | IssuesEvent | 2022-04-21 05:19:43 | unoplatform/uno.extensions | https://api.github.com/repos/unoplatform/uno.extensions | closed | Off ui-thread navigation | kind/bug priority/critical-urgent | Navigation should be able to be triggered from a background thread
## What would you like to be added:
Dispatch any UI-related parts of navigation to the UI thread
## Why is this needed:
Viewmodels often need to run async logic on a background thread. Being able to trigger navigation from a background thread would reduce issues with cross-thread exceptions.
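As a generic sketch of the requested pattern (Uno itself is C#; this Python version only mirrors the idea): UI-touching navigation work is funneled onto a single dispatcher thread, so background callers never touch UI state directly.
```python
import queue
import threading

ui_queue = queue.Queue()

def ui_loop():
    while True:
        work = ui_queue.get()
        if work is None:
            break  # shutdown sentinel
        work()  # every callback here runs on the single "UI" thread

def navigate(route):
    """Safe to call from any thread: the UI-touching part is dispatched."""
    ui_queue.put(lambda: print(f"navigating to {route} on thread "
                               f"{threading.current_thread().name}"))

ui_thread = threading.Thread(target=ui_loop, name="ui")
ui_thread.start()
navigate("/details")   # fine even from a background thread
ui_queue.put(None)     # drain queued work, then shut down
ui_thread.join()
```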
## For which Platform:
- [X ] iOS
- [X ] Android
- [X ] WebAssembly
- [ ] WebAssembly renderers for Xamarin.Forms
- [X ] Windows
- [ ] Build tasks
## Anything else we need to know?
| 1.0 | non_process | 0 |
150,584 | 5,780,390,708 | IssuesEvent | 2017-04-29 00:15:17 | wl500g/wl500g | https://api.github.com/repos/wl500g/wl500g | closed | New 3G modem | auto-migrated Priority-Medium Type-Support | ```
Hello!
How can I add a new 3G modem? In particular, interested in Alcatel OneTouch
X310E.
Thanks in advance!
```
Original issue reported on code.google.com by `icera...@gmail.com` on 14 Oct 2013 at 5:50
| 1.0 | non_process | 0 |
29,111 | 5,580,082,310 | IssuesEvent | 2017-03-28 15:51:38 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | opened | Add example on t-sne perplexity | Documentation Easy Need Contributor | I think we should add an example on t-sne perplexity. Tuning it can be really important.
See http://distill.pub/2016/misread-tsne/
We should see if there is a good example using any of our standard datasets. Maybe lfw?
I played around with the "water treatment plant" from UCI which was quite interesting, though I'm not sure we can get that from mldata.
We can also use a higher-dim synthetic example as in the blog-post above.
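A minimal sketch of what such an example could look like, here using the digits dataset (the dataset choice is still open; the perplexity values are just illustrative):
```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, perplexity in zip(axes, [2, 5, 30, 100]):
    # Same data, same algorithm; only the perplexity changes.
    embedding = TSNE(n_components=2, perplexity=perplexity,
                     random_state=0).fit_transform(X)
    ax.scatter(embedding[:, 0], embedding[:, 1], c=y, s=5, cmap="tab10")
    ax.set_title(f"perplexity={perplexity}")
plt.show()
```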
| 1.0 | non_process | 0 |
757,858 | 26,532,706,631 | IssuesEvent | 2023-01-19 13:39:53 | matrixorigin/matrixone | https://api.github.com/repos/matrixorigin/matrixone | closed | [Bug]: bit_wise function result different in ARM and X86 | priority/p0 kind/feature needs-triage severity/s1 | ### Is there an existing issue for the same bug?
- [X] #6974
### Environment
```Markdown
- Version or commit-id (e.g. v0.1.0 or 8b23a93):
- Hardware parameters:
- OS type:
- Others:
```
### Actual Behavior
These cases succeed in the local BVT test on a Mac M1 (ARM), but fail in the BVT case on GitHub under CentOS with an x86 chipset.
The expected result is from the Mac; the actual result is from CentOS on GitHub.
select BIT_AND(999999999999999933193939.99999),BIT_OR(999999999999999933193939.99999),BIT_XOR(999999999999999933193939.99999);
[EXPECT RESULT]:
bit_and(999999999999999933193939.99999) bit_or(999999999999999933193939.99999) bit_xor(999999999999999933193939.99999)
18446744073709551615 18446744073709551615 18446744073709551615
[ACTUAL RESULT]:
bit_and(999999999999999933193939.99999) bit_or(999999999999999933193939.99999) bit_xor(999999999999999933193939.99999)
9223372036854775808 9223372036854775808 9223372036854775808
select BIT_AND(9999999999999999999999999999999999.9999999999999),BIT_OR(9999999999999999999999999999999999.9999999999999),BIT_XOR(9999999999999999999999999999999999.9999999999999);
[EXPECT RESULT]:
bit_and(9999999999999999999999999999999999.9999999999999) bit_or(9999999999999999999999999999999999.9999999999999) bit_xor(9999999999999999999999999999999999.9999999999999)
18446744073709551615 18446744073709551615 18446744073709551615
[ACTUAL RESULT]:
bit_and(9999999999999999999999999999999999.9999999999999) bit_or(9999999999999999999999999999999999.9999999999999) bit_xor(9999999999999999999999999999999999.9999999999999)
9223372036854775808 9223372036854775808 9223372036854775808
select BIT_AND(-99999999999999999.99999),BIT_OR(-99999999999999999.99999),BIT_XOR(-99999999999999999.99999);
[EXPECT RESULT]:
bit_and(-99999999999999999.99999) bit_or(-99999999999999999.99999) bit_xor(-99999999999999999.99999)
0 0 0
[ACTUAL RESULT]:
bit_and(-99999999999999999.99999) bit_or(-99999999999999999.99999) bit_xor(-99999999999999999.99999)
18346744073709551616 18346744073709551616 18346744073709551616
select BIT_AND(-999999999999999933193939.99999),BIT_OR(-999999999999999933193939.99999),BIT_XOR(-999999999999999933193939.99999);
[EXPECT RESULT]:
bit_and(-999999999999999933193939.99999) bit_or(-999999999999999933193939.99999) bit_xor(-999999999999999933193939.99999)
0 0 0
[ACTUAL RESULT]:
bit_and(-999999999999999933193939.99999) bit_or(-999999999999999933193939.99999) bit_xor(-999999999999999933193939.99999)
9223372036854775808 9223372036854775808 9223372036854775808
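The reported numbers are consistent with an architecture-dependent float-to-uint64 cast: 18446744073709551615 is UINT64_MAX (AArch64's FCVTZU saturates out-of-range inputs), 9223372036854775808 is 0x8000000000000000 (the "integer indefinite" result of x86's CVTTSD2SI), and 18346744073709551616 is a truncated -1e17 read back as a uint64 bit pattern. A small Python model of the two behaviors (an interpretation of the report, not MatrixOne's actual code):
```python
UINT64_MAX = 2**64 - 1
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def cast_arm64(value: float) -> int:
    # AArch64 FCVTZU saturates: negatives clamp to 0, overflow to UINT64_MAX.
    if value < 0:
        return 0
    return UINT64_MAX if value > UINT64_MAX else int(value)

def cast_x86_via_int64(value: float) -> int:
    # A common x86-64 lowering truncates through signed int64 (CVTTSD2SI):
    # out-of-range inputs yield the "integer indefinite" 0x8000000000000000,
    # and negative in-range results wrap when reinterpreted as uint64.
    if value < INT64_MIN or value > INT64_MAX:
        return 2**63
    return int(value) % 2**64

for v in (999999999999999933193939.99999, -99999999999999999.99999):
    print(cast_arm64(v), cast_x86_via_int64(v))
# 18446744073709551615 9223372036854775808
# 0 18346744073709551616
```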
### Expected Behavior
_No response_
### Steps to Reproduce
_No response_
### Additional information
_No response_
| 1.0 | non_process | 0 |
157,305 | 12,369,531,500 | IssuesEvent | 2020-05-18 15:23:26 | ImisDevelopers/1_011_a_infektionsfall_uebermittellung | https://api.github.com/repos/ImisDevelopers/1_011_a_infektionsfall_uebermittellung | closed | Prefilled form text overlaps field | netlight-testing | **Description**
The prefilled text showing how a date should be entered is cut off (see screenshot)
**Logged in as**
DPH (Department of Health)
**Link**
https://staging.imis-prototyp.de/app/request-quarantine
**Browser**
Edge
**Screenshot**

| 1.0 | non_process | 0 |
21,388 | 29,202,231,327 | IssuesEvent | 2023-05-21 00:37:32 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remote] Business Analyst at Coodesh | SALVADOR AGILE SQL REQUISITOS REMOTO PROCESSOS GITHUB SEGURANÇA UMA LIDERANÇA METODOLOGIAS ÁGEIS MANUTENÇÃO BPMN NEGÓCIOS SUPORTE Stale | ## Job description:
This is an opening from a Coodesh platform partner; by applying you will get access to the complete information about the company and benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/business-analyst-172721593?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Grupo Fácil</strong> is looking for a <strong><ins>Business Analyst</ins></strong> to join its team!</p>
<p><strong>RESPONSIBILITIES AND DUTIES</strong></p>
<ul>
<li>Prospect, plan and support the implementation of system projects aligned with business needs and company goals;</li>
<li>Provide technical leadership in mixed project teams using the Agile methodology;</li>
<li>Manage requirements, assess the needs raised by clients and administer resource allocation and customizations;</li>
<li>Ensure user support, mediating service from the technical vendors and also handling the maintenance of the systems in your area.</li>
</ul>
## Grupo Fácil:
<p>Over 27 years of history, Grupo Fácil has become a national reference in systems, software and services for business management in the financial and credit areas, in healthcare and in the real estate sector.</p>
<p>Grupo Fácil is made up of a group of companies that stand out for their solidity and boldness in projects that optimize processes and offer more security and profitability to their clients. </p><a href='https://coodesh.com/empresas/grupo-facil'>See more on the website</a>
## Skills:
- SQL
- API
- Agile methodologies
## Location:
100% Remote
## Requirements:
- A completed degree in Business Administration, Information Systems or related fields;
- Experience with administrative routines;
- Experience in system implementation projects;
- Knowledge of the SQL language;
- Knowledge of system integration (Webservice, API concepts, etc.).
## Nice to have:
- Good communication;
- Focus on results;
- Excellent interpersonal skills;
- Proactivity;
- Experience in the healthcare segment is desirable;
- Project management and BPMN certification is desirable.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Business Analyst at Grupo Fácil](https://coodesh.com/vagas/business-analyst-172721593?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Request Feedback** option between one stage and the next of the position you applied for. This will notify the **Recruiter** responsible for the company's hiring process.
## Labels
#### Allocation
Remote
#### Regime
CLT
#### Category
IT Management
| 1.0 | process | 1 |
14,130 | 17,023,228,082 | IssuesEvent | 2021-07-03 00:57:27 | CAVaccineInventory/vial | https://api.github.com/repos/CAVaccineInventory/vial | reopened | need more info in the report view | django-admin and tools nice-to-have qa-process usability web-banking | WB/QA often need to reference additional information from the location when creating/viewing reports.
Could the following fields be added (as indicated in the mock up below, preferably in the order shown):
- `full address`
- `location type`
- `phone number`
Also, could `Location:` be changed to display `Location name:`? And could the date be split into two lines and formatted as follows (removing the ordinal suffix so the day is a plain integer):
`1st May 2021 6:11:45 PM PDT`
becomes
`1 May 2021`
`6:11 PM PDT`
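For reference, a small Python sketch of that target format (the `%-d`/`%-I` no-padding flags are a glibc/macOS strftime extension, `%#d`/`%#I` on Windows; the time zone name is taken as given rather than computed):
```python
from datetime import datetime

ts = datetime.strptime("2021-05-01 18:11:45", "%Y-%m-%d %H:%M:%S")
line1 = ts.strftime("%-d %B %Y")           # "1 May 2021": integer day, no ordinal
line2 = ts.strftime("%-I:%M %p") + " PDT"  # "6:11 PM PDT": seconds dropped
print(line1)
print(line2)
```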
Current info at the top of a record:

Proposed info/mock up:

| 1.0 | process | 1 |
16,150 | 20,508,291,672 | IssuesEvent | 2022-03-01 01:47:57 | g4he/g4he | https://api.github.com/repos/g4he/g4he | closed | Calculate total value of all projects involved in, and add it to the project record | feature: data processing wontfix | Whilst not the collaboration report answer, useful for other overview purposes.
Perhaps calculate once a day so it can be used for sorting
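A rough sketch of that daily job; every name here (`groups`, `projects`, `participants`, `total_project_value`) is hypothetical, since the actual schema isn't shown, and the MongoDB-style API is only an example:
```python
def update_total_project_values(db):
    """Run once a day: cache each record's summed project value for sorting."""
    for group in db.groups.find():
        total = sum(project.get("value", 0)
                    for project in db.projects.find({"participants": group["_id"]}))
        db.groups.update_one({"_id": group["_id"]},
                             {"$set": {"total_project_value": total}})
```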
| 1.0 | process | 1 |
1,436 | 4,004,053,962 | IssuesEvent | 2016-05-12 04:44:42 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | reopened | NTRs: (positive/negative) regulation of NFAT transcription factor activity | BHF-UCL miRNA New term request RNA processes |
Dear Editors,
I would like to request new terms:
- regulation of NFAT transcription factor activity
- positive regulation of NFAT transcription factor activity
- negative regulation of NFAT transcription factor activity
Re: PMID:24117217, Figure 4.
These (positive regulation) terms would become children of:
GO:0051091 positive regulation of sequence-specific DNA binding transcription factor activity
And siblings of e.g.
GO:0032793 positive regulation of CREB transcription factor activity
or
GO:0051092 positive regulation of NF-kappaB transcription factor activity
Thanks,
Barbara
GOC:BHF, GOC:BHF_miRNA and GOC:bc
@rachhuntley
@RLovering
| 1.0 | process | 1 |
10,616 | 13,439,050,945 | IssuesEvent | 2020-09-07 19:52:35 | timberio/vector | https://api.github.com/repos/timberio/vector | opened | New `sha1` remap function | domain: mapping domain: processing type: feature | As requested in #3691, the `sha1` remap function hashes the provided argument with the SHA1 algorithm.
## Examples
For all examples assume the following event:
```js
{
"message": "Hello world",
"remote_addr": "54.23.22.123"
}
```
### Path
```
.fingerprint = sha1(.message)
```
### String literal
```
.fingerprint = sha1("my string")
```
### Operators
```
.fingerprint = sha1(.message + .remote_addr)
```
I realize this example is outside of the scope of this function, but I wanted to include it for completeness.
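For comparison, the intended behavior presumably matches a standard SHA-1 over the argument's string representation, e.g. Python's `hashlib` (whether the function emits a hex digest is an assumption here):
```python
import hashlib

event = {"message": "Hello world", "remote_addr": "54.23.22.123"}

def sha1_hex(value: str) -> str:
    return hashlib.sha1(value.encode("utf-8")).hexdigest()

print(sha1_hex(event["message"]))                         # sha1(.message)
print(sha1_hex(event["message"] + event["remote_addr"]))  # sha1(.message + .remote_addr)
```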
| 1.0 | non_process | 0 |
692,597 | 23,742,042,209 | IssuesEvent | 2022-08-31 13:13:53 | milvus-io/milvus | https://api.github.com/repos/milvus-io/milvus | closed | [Bug]: [scale] Standalone load fails and flush hangs after oomkilled and scaling-up restart | kind/bug priority/critical-urgent triage/accepted | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 2.1.0-20220812-bc62ca1f
- Deployment mode(standalone or cluster): standalone
- SDK version(e.g. pymilvus v2.0.0rc2): pymilvus 2.2.0.dev6
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
argo-workflow name: `scale-up-oom-sk85w`
1. Deploy standalone and limit resources
```
resources:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 9Gi
cpu: 2
```
standalone pod:
```
standalone-timeout-milvus-standalone-79887ffc9d-mgmc6 1/1 Running 2 (76s ago) 11m
```
2. Run the case
```
def test_insert_oom(self, host):
default_search_params = {"metric_type": "L2", "params": {"nprobe": 10}}
# collection fields and schema
fields = [cf.gen_int64_field(is_primary=True), cf.gen_double_field(), cf.gen_float_vec_field(dim=3840)]
def do_collection():
""" create collection, insert, load, search, query"""
schema, _ = ApiCollectionSchemaWrapper().init_collection_schema(fields=fields, auto_id=True)
collection_w = ApiCollectionWrapper()
collection_w.init_collection(name=cf.gen_unique_str("oom_"), schema=schema, shards_num=16)
for i in range(10):
df = pd.DataFrame({
ct.default_double_field_name: pd.Series(data=[np.double(i) for i in range(0, 2000)],
dtype="double"),
ct.default_float_vec_field_name: cf.gen_vectors(2000, dim=3840)
})
insert_res, _ = collection_w.insert(df, timeout=180, check_task=CheckTasks.check_nothing)
# log.debug(collection_w.num_entities)
collection_w.load(timeout=120)
log.debug(collection_w.get_replicas(), check_task=CheckTasks.check_nothing)
query_res, _ = collection_w.query(expr=f"{ct.default_int64_field_name} in {insert_res.primary_keys[0:10]}", timeout=120)
assert len(query_res) == 10
search_res, _ = collection_w.search(cf.gen_vectors(2, dim=3840), ct.default_float_vec_field_name,
default_search_params, 10, timeout_decorator=120)
assert len(search_res) == 2
assert len(search_res[0]) == 10
try:
tasks = []
with ThreadPoolExecutor(max_workers=8) as t:
for i in range(20):
task = t.submit(do_collection)
tasks.append(task)
for task in tasks:
task.done()
except Exception as e:
log.error(str(e))
```
3. Standalone pod oomkilled and restart sometimes
```
standalone-timeout-milvus-standalone-6c5b5b7478-vxdzf 1/1 Running 0 23s
```
4. Scale-up cpu and memory
```
resources:
limits:
memory: 16Gi
cpu: 4
```
5. Wait pods ready and run the check case
- e2e-test check, load collection failed
```
[2022-08-15 05:14:32 - DEBUG - ci_test]: (api_request) : [Collection.flush] args: [], kwargs: {'timeout': 20} (api_request.py:56)
[2022-08-15 05:14:32 - DEBUG - ci_test]: (api_response) : None (api_request.py:31)
[2022-08-15 05:14:32 - INFO - ci_test]: [test][2022-08-15T05:14:32Z] [0.05926175s] e2e__ySI564pk flush -> None (wrapper.py:30)
[2022-08-15 05:14:32 - INFO - ci_test]: assert flush: 4.096614360809326, entities: 3000 (test_e2e.py:41)
[2022-08-15 05:14:32 - DEBUG - ci_test]: (api_request) : [Collection.load] args: [None, 1, 20], kwargs: {} (api_request.py:56)
[2022-08-15 05:14:52 - ERROR - pymilvus.decorators]: RPC error: [load_collection], <MilvusException: (code=1, message=rpc deadline e
xceeded: Retry timeout: 20s)>, <Time:{'RPC start': '2022-08-15 05:14:32.762355', 'RPC error': '2022-08-15 05:14:52.763394'}> (decora
tors.py:95)
[2022-08-15 05:14:52 - ERROR - ci_test]: Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 48, in handler
return func(self, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 673, in load_collection
response = rf.result()
File "/usr/local/lib/python3.8/dist-packages/grpc/_channel.py", line 744, in result
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "{"created":"@1660540492.763138120","description":"Deadline Exceeded","file":"src/core/ext/filters/dead
line/deadline_filter.cc","file_line":81,"grpc_status":4}"
>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/milvus/tests/python_client/utils/api_request.py", line 26, in inner_wrapper
res = func(*args, **_kwargs)
File "/src/milvus/tests/python_client/utils/api_request.py", line 57, in api_request
return func(*arg, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pymilvus/orm/collection.py", line 474, in load
conn.load_collection(self._name, replica_number=replica_number, timeout=timeout, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 96, in handler
raise e
```
- oom-collection check flush hangs:
```
[2022-08-15 05:15:06,963 - DEBUG - ci_test]: (api_request) : [Collection] args: ['oom__TwaJZopL', None, 'default', 2], kwargs: {'co
nsistency_level': 'Strong'} (api_request.py:56)
[2022-08-15 05:15:06,967 - DEBUG - ci_test]: (api_response) : <Collection>:
-------------
<name>: oom__TwaJZopL
<partitions>: [{"name": "_default", "collection_name": "oom__TwaJZopL", "description": ""}]
<description>:
<schema>: {
auto_id: True
description:
fields: [{
name: int64
description:
type: 5
is_primary: True
auto_id: True
...... (api_request.py:31)
[2022-08-15 05:15:06,967 - DEBUG - ci_test]: (api_request) : [Collection.flush] args: [], kwargs: {'timeout': 20} (api_request.py:5
6)
Error: signal: terminated
```
### Expected Behavior
_No response_
### Steps To Reproduce
_No response_
### Milvus Log
_No response_
### Anything else?
_No response_ | 1.0 | [Bug]: [scale] Standalone load fails and flush hangs after oomkilled and scaling-up restart - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 2.1.0-20220812-bc62ca1f
- Deployment mode(standalone or cluster): standalone
- SDK version(e.g. pymilvus v2.0.0rc2): pymilvus 2.2.0.dev6
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
argo-workflow name: `scale-up-oom-sk85w`
1. Deploy standalone and limit resources
```
resources:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 9Gi
cpu: 2
```
standalone pod:
```
standalone-timeout-milvus-standalone-79887ffc9d-mgmc6 1/1 Running 2 (76s ago) 11m
```
2. Run the case
```
def test_insert_oom(self, host):
default_search_params = {"metric_type": "L2", "params": {"nprobe": 10}}
# collection fields and schema
fields = [cf.gen_int64_field(is_primary=True), cf.gen_double_field(), cf.gen_float_vec_field(dim=3840)]
def do_collection():
""" create collection, insert, load, search, query"""
schema, _ = ApiCollectionSchemaWrapper().init_collection_schema(fields=fields, auto_id=True)
collection_w = ApiCollectionWrapper()
collection_w.init_collection(name=cf.gen_unique_str("oom_"), schema=schema, shards_num=16)
for i in range(10):
df = pd.DataFrame({
ct.default_double_field_name: pd.Series(data=[np.double(i) for i in range(0, 2000)],
dtype="double"),
ct.default_float_vec_field_name: cf.gen_vectors(2000, dim=3840)
})
insert_res, _ = collection_w.insert(df, timeout=180, check_task=CheckTasks.check_nothing)
# log.debug(collection_w.num_entities)
collection_w.load(timeout=120)
log.debug(collection_w.get_replicas(), check_task=CheckTasks.check_nothing)
query_res, _ = collection_w.query(expr=f"{ct.default_int64_field_name} in {insert_res.primary_keys[0:10]}", timeout=120)
assert len(query_res) == 10
search_res, _ = collection_w.search(cf.gen_vectors(2, dim=3840), ct.default_float_vec_field_name,
default_search_params, 10, timeout_decorator=120)
assert len(search_res) == 2
assert len(search_res[0]) == 10
try:
tasks = []
with ThreadPoolExecutor(max_workers=8) as t:
for i in range(20):
task = t.submit(do_collection)
tasks.append(task)
for task in tasks:
task.done()
except Exception as e:
log.error(str(e))
```
3. Standalone pod oomkilled and restart sometimes
```
standalone-timeout-milvus-standalone-6c5b5b7478-vxdzf 1/1 Running 0 23s
```
4. Scale-up cpu and memory
```
resources:
limits:
memory: 16Gi
cpu: 4
```
5. Wait pods ready and run the check case
- e2e-test check, load collection failed
```
[2022-08-15 05:14:32 - DEBUG - ci_test]: (api_request) : [Collection.flush] args: [], kwargs: {'timeout': 20} (api_request.py:56)
[2022-08-15 05:14:32 - DEBUG - ci_test]: (api_response) : None (api_request.py:31)
[2022-08-15 05:14:32 - INFO - ci_test]: [test][2022-08-15T05:14:32Z] [0.05926175s] e2e__ySI564pk flush -> None (wrapper.py:30)
[2022-08-15 05:14:32 - INFO - ci_test]: assert flush: 4.096614360809326, entities: 3000 (test_e2e.py:41)
[2022-08-15 05:14:32 - DEBUG - ci_test]: (api_request) : [Collection.load] args: [None, 1, 20], kwargs: {} (api_request.py:56)
[2022-08-15 05:14:52 - ERROR - pymilvus.decorators]: RPC error: [load_collection], <MilvusException: (code=1, message=rpc deadline e
xceeded: Retry timeout: 20s)>, <Time:{'RPC start': '2022-08-15 05:14:32.762355', 'RPC error': '2022-08-15 05:14:52.763394'}> (decora
tors.py:95)
[2022-08-15 05:14:52 - ERROR - ci_test]: Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 48, in handler
return func(self, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 673, in load_collection
response = rf.result()
File "/usr/local/lib/python3.8/dist-packages/grpc/_channel.py", line 744, in result
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "{"created":"@1660540492.763138120","description":"Deadline Exceeded","file":"src/core/ext/filters/dead
line/deadline_filter.cc","file_line":81,"grpc_status":4}"
>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/milvus/tests/python_client/utils/api_request.py", line 26, in inner_wrapper
res = func(*args, **_kwargs)
File "/src/milvus/tests/python_client/utils/api_request.py", line 57, in api_request
return func(*arg, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pymilvus/orm/collection.py", line 474, in load
conn.load_collection(self._name, replica_number=replica_number, timeout=timeout, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 96, in handler
raise e
```
- oom-collection check flush hangs:
```
[2022-08-15 05:15:06,963 - DEBUG - ci_test]: (api_request) : [Collection] args: ['oom__TwaJZopL', None, 'default', 2], kwargs: {'co
nsistency_level': 'Strong'} (api_request.py:56)
[2022-08-15 05:15:06,967 - DEBUG - ci_test]: (api_response) : <Collection>:
-------------
<name>: oom__TwaJZopL
<partitions>: [{"name": "_default", "collection_name": "oom__TwaJZopL", "description": ""}]
<description>:
<schema>: {
auto_id: True
description:
fields: [{
name: int64
description:
type: 5
is_primary: True
auto_id: True
...... (api_request.py:31)
[2022-08-15 05:15:06,967 - DEBUG - ci_test]: (api_request) : [Collection.flush] args: [], kwargs: {'timeout': 20} (api_request.py:5
6)
Error: signal: terminated
```
### Expected Behavior
_No response_
### Steps To Reproduce
_No response_
### Milvus Log
_No response_
### Anything else?
_No response_ | non_process | standalone load fails and flush hangs after oomkilled and scaling up restart is there an existing issue for this i have searched the existing issues environment markdown milvus version deployment mode standalone or cluster standalone sdk version e g pymilvus pymilvus os ubuntu or centos cpu memory gpu others current behavior argo workflow name scale up oom deploy standalone and limit resources resources requests memory cpu limits memory cpu standalone pod standalone timeout milvus standalone running ago run the case def test insert oom self host default search params metric type params nprobe collection fields and schema fields def do collection create collection insert load search query schema apicollectionschemawrapper init collection schema fields fields auto id true collection w apicollectionwrapper collection w init collection name cf gen unique str oom schema schema shards num for i in range df pd dataframe ct default double field name pd series data dtype double ct default float vec field name cf gen vectors dim insert res collection w insert df timeout check task checktasks check nothing log debug collection w num entities collection w load timeout log debug collection w get replicas check task checktasks check nothing query res collection w query expr f ct default field name in insert res primary keys timeout assert len query res search res collection w search cf gen vectors dim ct default float vec field name default search params timeout decorator assert len search res assert len search res try tasks with threadpoolexecutor max workers as t for i in range task t submit do collection tasks append task for task in tasks task done except exception as e log error str e standalone pod oomkilled and restart sometimes standalone timeout milvus standalone vxdzf running scale up cpu and memory resources limits memory cpu wait pods ready and run the check case test check load collection failed api request args kwargs timeout api request py api response none api request py flush none wrapper py assert flush entities test py api request args kwargs api request py rpc error milvusexception code message rpc deadline e xceeded retry timeout decora tors py traceback most recent call last file usr local lib dist packages pymilvus decorators py line in handler return func self args kwargs file usr local lib dist packages pymilvus client grpc handler py line in load collection response rf result file usr local lib dist packages grpc channel py line in result raise self grpc channel multithreadedrendezvous multithreadedrendezvous of rpc that terminated with status statuscode deadline exceeded details deadline exceeded debug error string created description deadline exceeded file src core ext filters dead line deadline filter cc file line grpc status during handling of the above exception another exception occurred traceback most recent call last file src milvus tests python client utils api request py line in inner wrapper res func args kwargs file src milvus tests python client utils api request py line in api request return func arg kwargs file usr local lib dist packages pymilvus orm collection py line in load conn load collection self name replica number replica number timeout timeout kwargs file usr local lib dist packages pymilvus decorators py line in handler raise e oom collection check flush hangs api request args kwargs co nsistency level strong api request py api response oom twajzopl auto id true description fields name description type is primary true auto 
id true api request py api request args kwargs timeout api request py error signal terminated expected behavior no response steps to reproduce no response milvus log no response anything else no response | 0 |
7,969 | 11,148,765,104 | IssuesEvent | 2019-12-23 16:27:51 | aiidateam/aiida-core | https://api.github.com/repos/aiidateam/aiida-core | closed | Allow for 'help' on exposed inputs / outputs which have a namespace | requires discussion topic/processes type/feature request | When exposing inputs with the `namespace` keyword, a new `PortNamespace` is created. To document the purpose of this namespace, the `help` currently needs to be set "manually" - not really using an official interface.
This could be improved by allowing a `help` keyword on the `expose_inputs` / `expose_outputs` methods.
Drawback: The `help` does not really make sense if _no_ namespace is used, making the logic slightly more complicated (`help` can be set only if `namespace` is also set).
Example: currently written as
```python
spec.expose_inputs(
SubWorkChain,
namespace='sub'
)
spec.inputs['sub'].help = 'Inputs passed to the sub-workchain.'
```
Proposed syntax:
```python
spec.expose_inputs(
SubWorkChain,
namespace='sub',
help='Inputs passed to the sub-workchain.'
)
```
Mentioning @sphuber for comment. | 1.0 | Allow for 'help' on exposed inputs / outputs which have a namespace - When exposing inputs with the `namespace` keyword, a new `PortNamespace` is created. To document the purpose of this namespace, the `help` currently needs to be set "manually" - not really using an official interface.
This could be improved by allowing a `help` keyword on the `expose_inputs` / `expose_outputs` methods.
Drawback: The `help` does not really make sense if _no_ namespace is used, making the logic slightly more complicated (`help` can be set only if `namespace` is also set).
Example: currently written as
```python
spec.expose_inputs(
SubWorkChain,
namespace='sub'
)
spec.inputs['sub'].help = 'Inputs passed to the sub-workchain.'
```
Proposed syntax:
```python
spec.expose_inputs(
SubWorkChain,
namespace='sub',
help='Inputs passed to the sub-workchain.'
)
```
Mentioning @sphuber for comment. | process | allow for help on exposed inputs outputs which have a namespace when exposing inputs with the namespace keyword a new portnamespace is created to document the purpose of this namespace the help currently needs to be set manually not really using an official interface this could be improved by allowing a help keyword on the expose inputs expose outputs methods drawback the help does not really make sense if no namespace is used making the logic slightly more complicated help can be set only if namespace is also set example currently written as python spec expose inputs subworkchain namespace sub spec inputs help inputs passed to the sub workchain proposed syntax python spec expose inputs subworkchain namespace sub help inputs passed to the sub workchain mentioning sphuber for comment | 1 |
557,243 | 16,504,566,257 | IssuesEvent | 2021-05-25 17:39:12 | kubernetes/kubeadm | https://api.github.com/repos/kubernetes/kubeadm | opened | add UpgradeConfiguration/ResetConfiguration API types | kind/api-change kind/feature priority/backlog | kubeadm currently has some matching configuration formats for its main commands.
join - JoinConfiguration
init - InitConfiguration
upgrade - none
reset - none
we should eventually provide API types for all main commands to avoid flags.
note this was not discussed as part of v1beta3, so it can happen in a future API version.
----------
`upgrade`
upgrade does not have a config, forcing users to rely on flags only and forcing us as maintainers to have some flags unique to "upgrade" only.
we should add a scoped UpgradeConfiguration structure that can hold a number of relevant options to upgrade.
--config for upgrade should accept this configuration and not ClusterConfiguration | InitConfiguration...
maybe CC too:
https://github.com/kubernetes/kubeadm/issues/1681
---------
`reset`
this makes sense for consistency with respect to `skipPhases`. one other option is `--force`.
overall the structure for reset would be very minimal since it does not have a lot of options.
| 1.0 | add UpgradeConfiguration/ResetConfiguration API types - kubeadm currently has some matching configuration formats for its main commands.
join - JoinConfiguration
init - InitConfiguration
upgrade - none
reset - none
we should eventually provide API types for all main commands to avoid flags.
note this was not discussed as part of v1beta3, so it can happen in a future API version.
----------
`upgrade`
upgrade does not have a config, forcing users to rely on flags only and forcing us as maintainers to have some flags unique to "upgrade" only.
we should add a scoped UpgradeConfiguration structure that can hold a number of relevant options to upgrade.
--config for upgrade should accept this configuration and not ClusterConfiguration | InitConfiguration...
maybe CC too:
https://github.com/kubernetes/kubeadm/issues/1681
---------
`reset`
this makes sense for consistency with respect to `skipPhases`. one other option is `--force`.
overall the structure for reset would be very minimal since it does not have a lot of options.
| non_process | add upgradeconfiguration resetconfiguration api types kubeadm currently has some matching configuration formats for its main commands join joinconfiguration init initconfiguration upgrade none reset none we should eventually provide api types for all main commands to avoid flags note this was not discussed as part of so it can happen in a future api version upgrade upgrade does not have a config forcing users to rely on flags only and forcing us as maintainers to have some flags unique to upgrade only we should add a scoped upgradeconfiguration structure that can hold a number of relevant options to upgrade config for upgrade should accept this configuration and not clusterconfiguration initconfiguration maybe cc too reset this makes sense for consistency with respect to skipphases one other option is force overall the structure for reset would be very minimal since it does not have a lot of options | 0 |
49,004 | 13,185,190,624 | IssuesEvent | 2020-08-12 20:54:14 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | hippodraw should check for numpy dev (Trac #591) | Incomplete Migration Migrated from Trac defect tools/ports | <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/591
, reported by blaufuss and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2010-03-09T00:45:43",
"description": "Georges Kohnen wrote:\n\nHi, I'm attaching the complete output of $I3_PORTS/bin/port install -vd hippodraw +root below. I'm a little surprised by the statement \"Requested variant x86_64 is not provided by port hippodraw\"? Thanks! Georges\n\nLooking as usual at the first compliation error:\n\nerror: numpy/noprefix.h: No such file or directory\n\nDo you have a numpy dev package installed? On Ubuntu that's python-numpy-dev. I have the file:\n\n% ls -l /usr/include/python2.6/numpy/noprefix.h -rw-r--r-- 1 root root 6051 Mar 29 2009 /usr/include/python2.6/numpy/noprefix.h\n\n-t",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1268095543000000",
"component": "tools/ports",
"summary": "hippodraw should check for numpy dev",
"priority": "normal",
"keywords": "",
"time": "2010-01-20T15:45:30",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | hippodraw should check for numpy dev (Trac #591) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/591
, reported by blaufuss and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2010-03-09T00:45:43",
"description": "Georges Kohnen wrote:\n\nHi, I'm attaching the complete output of $I3_PORTS/bin/port install -vd hippodraw +root below. I'm a little surprised by the statement \"Requested variant x86_64 is not provided by port hippodraw\"? Thanks! Georges\n\nLooking as usual at the first compliation error:\n\nerror: numpy/noprefix.h: No such file or directory\n\nDo you have a numpy dev package installed? On Ubuntu that's python-numpy-dev. I have the file:\n\n% ls -l /usr/include/python2.6/numpy/noprefix.h -rw-r--r-- 1 root root 6051 Mar 29 2009 /usr/include/python2.6/numpy/noprefix.h\n\n-t",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1268095543000000",
"component": "tools/ports",
"summary": "hippodraw should check for numpy dev",
"priority": "normal",
"keywords": "",
"time": "2010-01-20T15:45:30",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| non_process | hippodraw should check for numpy dev trac migrated from reported by blaufuss and owned by nega json status closed changetime description georges kohnen wrote n nhi i m attaching the complete output of ports bin port install vd hippodraw root below i m a little surprised by the statement requested variant is not provided by port hippodraw thanks georges n nlooking as usual at the first compliation error n nerror numpy noprefix h no such file or directory n ndo you have a numpy dev package installed on ubuntu that s python numpy dev i have the file n n ls l usr include numpy noprefix h rw r r root root mar usr include numpy noprefix h n n t reporter blaufuss cc resolution fixed ts component tools ports summary hippodraw should check for numpy dev priority normal keywords time milestone owner nega type defect | 0 |
170,127 | 26,905,769,079 | IssuesEvent | 2023-02-06 18:57:24 | webb-tools/webb-experiences | https://api.github.com/repos/webb-tools/webb-experiences | closed | Beautify documentation diagrams | design 🎨 | # Description
Beautify diagrams for documentation site. The below diagrams exist but could use a facelift. These diagrams should follow our brand style and provide as much information as possible.
## Design Checklist
- [ ] Redesign application overview [diagram](https://docs.webb.tools/v1/applications/overview/)
- [x] Beautify diagrams for sequence diagram - [how-proposals-are-signed](https://docs.webb.tools/v1/dkg/governance/#how-proposals-are-signed)
- [x] Relayer diagrams based on this [presentation](https://docs.google.com/presentation/d/16fQ3cCVQAUDm3Wq5uTrxnqfmCJ2G0TxgQEYGWbwVNKA/edit#slide=id.g13222e7668e_0_50)
- [x] Social banners for Webb repos
- [x] Beautify diagrams for DKG - [what-is-a-dkg](https://docs.webb.tools/v1/dkg/overview/#what-is-a-dkg) | 1.0 | Beautify documentation diagrams - # Description
Beautify diagrams for documentation site. The below diagrams exist but could use a facelift. These diagrams should follow our brand style and provide as much information as possible.
## Design Checklist
- [ ] Redesign application overview [diagram](https://docs.webb.tools/v1/applications/overview/)
- [x] Beautify diagrams for sequence diagram - [how-proposals-are-signed](https://docs.webb.tools/v1/dkg/governance/#how-proposals-are-signed)
- [x] Relayer diagrams based on this [presentation](https://docs.google.com/presentation/d/16fQ3cCVQAUDm3Wq5uTrxnqfmCJ2G0TxgQEYGWbwVNKA/edit#slide=id.g13222e7668e_0_50)
- [x] Social banners for Webb repos
- [x] Beautify diagrams for DKG - [what-is-a-dkg](https://docs.webb.tools/v1/dkg/overview/#what-is-a-dkg) | non_process | beautify documentation diagrams description beautify diagrams for documentation site the below diagrams exist but could use a facelift these diagrams should follow our brand style and provide as much information as possible design checklist redesign application overview beautify diagrams for sequence diagram relayer diagrams based on this social banners for webb repos beautify diagrams for dkg | 0 |
44,862 | 11,524,419,305 | IssuesEvent | 2020-02-15 00:29:31 | mono/monodevelop | https://api.github.com/repos/mono/monodevelop | closed | [StructuredBuildOutput] Post-merge issues | Area: Structured Build Output vs-sync | ## Is never unsubscribed, thus it will leak.
https://github.com/mono/monodevelop/pull/3846/files#diff-710fa148d4e36578d081214888b78fe6R967
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R134
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R204 - these 2 here, all capture
https://github.com/mono/monodevelop/pull/3846/files#diff-710fa148d4e36578d081214888b78fe6R207
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R106 - view -> output -> view
## Seems like a left-over from when it was implemented as an editor. Is it still needed? cc @mkrueger
https://github.com/mono/monodevelop/pull/3846/files#diff-6efd107ab1910b5c44cd3024f61026e0R376
## Is this intended public API?
https://github.com/mono/monodevelop/pull/3846/files#diff-fbfab0f0fcc8e14022ee623c166912f8R31
## ImmutableCollections do not guarantee thread safety. Are we ok with losing results? If it's single-threaded, use a list, if it's multi-threaded, guarantee that all projects are added.
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR47
## Iteration via foreach will box enumerator because it's on interface.
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR155
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR143
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R142
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R197
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R328 -- definitely not in recursion
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R400
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R375
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R673
## Returns IEnumerable when all usages use a concrete collection afterwards, no benefit of lazy enumeration
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR150
## Don't use LINQ extensions on concrete collections
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R101
all in here: https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R204
> VS bug [#625265](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/625265) | 1.0 | [StructuredBuildOutput] Post-merge issues - ## Is never unsubscribed, thus it will leak.
https://github.com/mono/monodevelop/pull/3846/files#diff-710fa148d4e36578d081214888b78fe6R967
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R134
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R204 - these 2 here, all capture
https://github.com/mono/monodevelop/pull/3846/files#diff-710fa148d4e36578d081214888b78fe6R207
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R106 - view -> output -> view
## Seems like a left-over from when it was implemented as an editor. Is it still needed? cc @mkrueger
https://github.com/mono/monodevelop/pull/3846/files#diff-6efd107ab1910b5c44cd3024f61026e0R376
## Is this intended public API?
https://github.com/mono/monodevelop/pull/3846/files#diff-fbfab0f0fcc8e14022ee623c166912f8R31
## ImmutableCollections do not guarantee thread safety. Are we ok with losing results? If it's single-threaded, use a list, if it's multi-threaded, guarantee that all projects are added.
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR47
## Iteration via foreach will box enumerator because it's on interface.
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR155
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR143
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R142
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R197
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R328 -- definitely not in recursion
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R400
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R375
https://github.com/mono/monodevelop/pull/3846/files#diff-db73af874026f7013be8c8e61bb2a9f7R673
## Returns IEnumerable when all usages use a concrete collection afterwards, no benefit of lazy enumeration
https://github.com/mono/monodevelop/pull/3846/files#diff-563f5ecf5ef6e0a860fd9d77cb55312fR150
## Don't use LINQ extensions on concrete collections
https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R101
all in here: https://github.com/mono/monodevelop/pull/3846/files#diff-d6f55bb738236344414271dcedbe4da9R204
> VS bug [#625265](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/625265) | non_process | post merge issues is never unsubscribed thus it will leak these here all capture view output view seems like a left over from when it was implemented as an editor is it still needed cc mkrueger is this intended public api immutablecollections do not guarantee thread safety are we ok with losing results if it s single threaded use a list if it s multi threaded guarantee that all projects are added iteration via foreach will box enumerator because it s on interface definitely not in recursion returns ienumerable when all usages use a concrete collection afterwards no benefit of lazy enumeration don t use linq extensions on concrete collections all in here vs bug | 0 |
25,559 | 11,195,161,809 | IssuesEvent | 2020-01-03 05:01:04 | istio/istio | https://api.github.com/repos/istio/istio | closed | Support of multiple JWT origins with same issuer in Policy for different triggers | area/networking area/security kind/enhancement lifecycle/stale | **Describe the feature request**
Assume you want to use a Policy for a composite VirtualService (e.g. on the edge bound to a Gateway) and define multiple JWT configuration, e.g. like here https://github.com/istio/istio/blob/49c978439ddde62f4d5e0cf672976294797cf2fa/tests/e2e/tests/pilot/testdata/authn/v1alpha1/authn-policy-jwt.yaml.tmpl
If the issuer is the same for two config entries, I will not be able to create the Policy. However, I might want to have two different entries for the same issuer e.g. by allowing two different audiences for two different paths defined in trigger_rules.
Thus, it would be nice, if you would support this setting.
**Describe alternatives you've considered**
Alternatives are Virtual Services per service (which is part of the composition) with own Policy.
**Additional context**
/
| True | Support of multiple JWT origins with same issuer in Policy for different triggers - **Describe the feature request**
Assume you want to use a Policy for a composite VirtualService (e.g. on the edge bound to a Gateway) and define multiple JWT configuration, e.g. like here https://github.com/istio/istio/blob/49c978439ddde62f4d5e0cf672976294797cf2fa/tests/e2e/tests/pilot/testdata/authn/v1alpha1/authn-policy-jwt.yaml.tmpl
If the issuer is the same for two config entries, I will not be able to create the Policy. However, I might want to have two different entries for the same issuer e.g. by allowing two different audiences for two different paths defined in trigger_rules.
Thus, it would be nice, if you would support this setting.
**Describe alternatives you've considered**
Alternatives are Virtual Services per service (which is part of the composition) with own Policy.
**Additional context**
/
| non_process | support of multiple jwt origins with same issuer in policy for different triggers describe the feature request assume you want to use a policy for a composite virtualservice e g on the edge bound to a gateway and define multiple jwt configuration e g like here if the issuer is the same for two config entries i will not be able to create the policy however i might want to have two different entries for the same issuer e g by allowing two different audiences for two different paths defined in trigger rules thus it would be nice if you would support this setting describe alternatives you ve considered alternatives are virtual services per service which is part of the composition with own policy additional context | 0 |
1,633 | 2,517,037,806 | IssuesEvent | 2015-01-16 11:04:46 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | Possible cache management issue arises from using the perma_link as the url for resource downloads | enhancement Priority-High | It may take a very long time (several hours in a case we observed yesterday) from the time the file for a resource is updated to the time that users who click on the download link can download the updated file. During this period, users are served the old file when they click on the download resource button. The updated file is available on CKAN, and this can be verified by either using the CKAN API to get the URL of the file for the resource, or for files that can be previewed, by clicking on the preview button and using url provided on the preview page. This issue was experienced on the following dataset: https://data.hdx.rwlabs.org/dataset/bed-capacity | 1.0 | Possible cache management issue arises from using the perma_link as the url for resource downloads - It may take a very long time (several hours in a case we observed yesterday) from the time the file for a resource is updated to the time that users who click on the download link can download the updated file. During this period, users are served the old file when they click on the download resource button. The updated file is available on CKAN, and this can be verified by either using the CKAN API to get the URL of the file for the resource, or for files that can be previewed, by clicking on the preview button and using url provided on the preview page. This issue was experienced on the following dataset: https://data.hdx.rwlabs.org/dataset/bed-capacity | non_process | possible cache management issue arises from using the perma link as the url for resource downloads it may take a very long time several hours in a case we observed yesterday from the time the file for a resource is updated to the time that users who click on the download link can download the updated file during this period users are served the old file when they click on the download resource button the updated file is available on ckan and this can be verified by either using the ckan api to get the url of the file for the resource or for files that can be previewed by clicking on the preview button and using url provided on the preview page this issue was experienced on the following dataset | 0 |
15,429 | 11,501,615,927 | IssuesEvent | 2020-02-12 17:29:49 | enarx/enarx | https://api.github.com/repos/enarx/enarx | opened | Migrate Enarx's test suite from Travis to Github Actions | infrastructure | Following positive results from #213 and enabling GHA with #229, we'd like to move our entire CI test suite over to GHA. This will allow some nice integration with Github's PR UI and sets up some good longer-term CI opportunities. | 1.0 | Migrate Enarx's test suite from Travis to Github Actions - Following positive results from #213 and enabling GHA with #229, we'd like to move our entire CI test suite over to GHA. This will allow some nice integration with Github's PR UI and sets up some good longer-term CI opportunities. | non_process | migrate enarx s test suite from travis to github actions following positive results from and enabling gha with we d like to move our entire ci test suite over to gha this will allow some nice integration with github s pr ui and sets up some good longer term ci opportunities | 0 |
41,664 | 10,759,831,716 | IssuesEvent | 2019-10-31 17:22:47 | ARMmbed/mbed-cli | https://api.github.com/repos/ARMmbed/mbed-cli | closed | Duplicate linker files - unhelpful error message | Jira status: OPEN build system enhancement mirrored | ```
[ERROR] 'GCC_ARM' object has no attribute 'info'
```
when I accidentally had two linker files in my project directory.
```
$ mbed --version
1.7.2
``` | 1.0 | Duplicate linker files - unhelpful error message - ```
[ERROR] 'GCC_ARM' object has no attribute 'info'
```
when I accidentally had two linker files in my project directory.
```
$ mbed --version
1.7.2
``` | non_process | duplicate linker files unhelpful error message gcc arm object has no attribute info when i accidentally had two linker files in my project directory mbed version | 0 |
737,944 | 25,538,694,780 | IssuesEvent | 2022-11-29 13:53:59 | googleapis/gax-dotnet | https://api.github.com/repos/googleapis/gax-dotnet | closed | RestServiceCollection.GetRestMethod uses incorrect name | type: bug priority: p1 release blocking | I've discovered this while investigating streaming behavior, but it's worrying that we didn't spot it before with tests.
Will make sure we have appropriate tests | 1.0 | RestServiceCollection.GetRestMethod uses incorrect name - I've discovered this while investigating streaming behavior, but it's worrying that we didn't spot it before with tests.
Will make sure we have appropriate tests | non_process | restservicecollection getrestmethod uses incorrect name i ve discovered this while investigating streaming behavior but it s worrying that we didn t spot it before with tests will make sure we have appropriate tests | 0 |
2,310 | 5,126,213,209 | IssuesEvent | 2017-01-10 00:50:48 | AffiliateWP/AffiliateWP | https://api.github.com/repos/AffiliateWP/AffiliateWP | closed | Generate a payout for a single affiliate only | batch-processing enhancement Has PR | PR: #1889
Right now it's possible to filter referrals for a single user. But it's not possible to generate the total amount of referrals and mark them as paid (for a specific date range) for a single user as well.
With the tools I can export the referrals of a single user but there is no total amount.
| 1.0 | Generate a payout for a single affiliate only - PR: #1889
Right now it's possible to filter referrals for a single user. But it's not possible to generate the total amount of referrals and mark them as paid (for a specific date range) for a single user as well.
With the tools I can export the referrals of a single user but there is no total amount.
| process | generate a payout for a single affiliate only pr right now it s possible to filter referrals for a single user but it s not possible to generate the total amount of referrals and mark them as paid for a specific date range for a single user as well with the tools i can export the referrals of a single user but there is no total amount | 1 |
4,851 | 7,742,322,444 | IssuesEvent | 2018-05-29 09:10:51 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | IPC channel stops delivering messages to cluster workers | child_process cluster freebsd macos os performance | * **Version: 6.9.1** (also reproduced on 4.4.1 and 0.12.4)
* **Platform: OS X, possibly others** (Unable to repro on Linux but I believe it does happen but with less frequency)
* **Subsystem: Cluster / IPC**
When many IPC messages are sent between the master process and cluster workers, IPC channels to workers stop delivering messages. I have not been unable to restore working functionality of the workers and so they must be killed to resolve the issue. Since IPC has stopped working, simply using `Worker.destroy()` does not work since the method will wait for the `disconnect` event which never arrives (because of this issue).
I am able to repro on OS X by running the following script:
```
var cluster = require('cluster');
var express = require('express'); // tested with 4.14.0
const workerCount = 2;
const WTMIPC = 25;
const MTWIPC = 25;
if (cluster.isMaster) {
var workers = {}, worker;
for (var i = 0; i < workerCount; i++) {
worker = cluster.fork({});
workers[worker.process.pid] = worker;
}
var workerPongReceivedTime = {};
cluster.on('online', function(worker) {
worker.on('message', function(message) {
var currentTime = Date.now();
if (message.type === 'pong') {
workerPongReceivedTime[worker.process.pid] = currentTime;
console.log('received pong\tmaster-to-worker\t' + (message.timeReceived - message.timeSent) + '\tworker-to-master ' + (currentTime - message.timeSent));
} else if (message.type === 'fromEndpoint') {
for (var i = 0; i < MTWIPC; i++) {
worker.send({ type: 'toWorker' });
}
}
});
});
setInterval(function() {
var currentTime = Date.now();
console.log('sending ping');
Object.keys(workers).forEach(function(workerPid) {
workers[workerPid].send({ type: 'ping', time: Date.now() });
if (currentTime - workerPongReceivedTime[workerPid] > 10000) {
console.log('Worker missed pings: ' + workerPid);
}
});
}, 1000);
} else {
var app = express();
app.get('/test', function(req, res) {
for (i = 0; i < WTMIPC; i++) {
process.send({ type: 'fromEndpoint' });
}
res.send({ test: 123 });
});
app.listen(7080, function() {
console.log('server started');
});
process.on('message', function(message) {
if (message.type === 'ping') {
process.send({ type: 'pong', timeSent: message.time, timeReceived: Date.now() });
}
});
}
```
and using ApacheBench to place the server under load as follows:
```
ab -n 100000 -c 200 'http://localhost:7080/test'
```
I see the following, for example:
```
server started
server started
sending ping
received pong master-to-worker 1 worker-to-master 1
received pong master-to-worker 0 worker-to-master 1
sending ping
received pong master-to-worker 1 worker-to-master 3
received pong master-to-worker 19 worker-to-master 21
sending ping
received pong master-to-worker 2 worker-to-master 5
received pong master-to-worker 4 worker-to-master 7
sending ping
received pong master-to-worker 3 worker-to-master 4
received pong master-to-worker 4 worker-to-master 6
sending ping
received pong master-to-worker 9 worker-to-master 10
received pong master-to-worker 2 worker-to-master 10
sending ping
received pong master-to-worker 2 worker-to-master 4
received pong master-to-worker 4 worker-to-master 6
sending ping
received pong master-to-worker 2 worker-to-master 4
received pong master-to-worker 4 worker-to-master 6
... (about 10k - 60k requests later) ...
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
Worker missed pings: 97462
sending ping
Worker missed pings: 97462
Worker missed pings: 97463
sending ping
Worker missed pings: 97462
Worker missed pings: 97463
```
As I alluded to earlier, I have seen an issue on Linux which I believe is related but I have been so far unable to repro using this technique on Linux. | 1.0 | IPC channel stops delivering messages to cluster workers - * **Version: 6.9.1** (also reproduced on 4.4.1 and 0.12.4)
* **Platform: OS X, possibly others** (Unable to repro on Linux but I believe it does happen but with less frequency)
* **Subsystem: Cluster / IPC**
When many IPC messages are sent between the master process and cluster workers, IPC channels to workers stop delivering messages. I have not been unable to restore working functionality of the workers and so they must be killed to resolve the issue. Since IPC has stopped working, simply using `Worker.destroy()` does not work since the method will wait for the `disconnect` event which never arrives (because of this issue).
I am able to repro on OS X by running the following script:
```
var cluster = require('cluster');
var express = require('express'); // tested with 4.14.0
const workerCount = 2;
const WTMIPC = 25;
const MTWIPC = 25;
if (cluster.isMaster) {
var workers = {}, worker;
for (var i = 0; i < workerCount; i++) {
worker = cluster.fork({});
workers[worker.process.pid] = worker;
}
var workerPongReceivedTime = {};
cluster.on('online', function(worker) {
worker.on('message', function(message) {
var currentTime = Date.now();
if (message.type === 'pong') {
workerPongReceivedTime[worker.process.pid] = currentTime;
console.log('received pong\tmaster-to-worker\t' + (message.timeReceived - message.timeSent) + '\tworker-to-master ' + (currentTime - message.timeSent));
} else if (message.type === 'fromEndpoint') {
for (var i = 0; i < MTWIPC; i++) {
worker.send({ type: 'toWorker' });
}
}
});
});
setInterval(function() {
var currentTime = Date.now();
console.log('sending ping');
Object.keys(workers).forEach(function(workerPid) {
workers[workerPid].send({ type: 'ping', time: Date.now() });
if (currentTime - workerPongReceivedTime[workerPid] > 10000) {
console.log('Worker missed pings: ' + workerPid);
}
});
}, 1000);
} else {
var app = express();
app.get('/test', function(req, res) {
for (i = 0; i < WTMIPC; i++) {
process.send({ type: 'fromEndpoint' });
}
res.send({ test: 123 });
});
app.listen(7080, function() {
console.log('server started');
});
process.on('message', function(message) {
if (message.type === 'ping') {
process.send({ type: 'pong', timeSent: message.time, timeReceived: Date.now() });
}
});
}
```
and using ApacheBench to place the server under load as follows:
```
ab -n 100000 -c 200 'http://localhost:7080/test'
```
I see the following, for example:
```
server started
server started
sending ping
received pong master-to-worker 1 worker-to-master 1
received pong master-to-worker 0 worker-to-master 1
sending ping
received pong master-to-worker 1 worker-to-master 3
received pong master-to-worker 19 worker-to-master 21
sending ping
received pong master-to-worker 2 worker-to-master 5
received pong master-to-worker 4 worker-to-master 7
sending ping
received pong master-to-worker 3 worker-to-master 4
received pong master-to-worker 4 worker-to-master 6
sending ping
received pong master-to-worker 9 worker-to-master 10
received pong master-to-worker 2 worker-to-master 10
sending ping
received pong master-to-worker 2 worker-to-master 4
received pong master-to-worker 4 worker-to-master 6
sending ping
received pong master-to-worker 2 worker-to-master 4
received pong master-to-worker 4 worker-to-master 6
... (about 10k - 60k requests later) ...
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
sending ping
Worker missed pings: 97462
sending ping
Worker missed pings: 97462
Worker missed pings: 97463
sending ping
Worker missed pings: 97462
Worker missed pings: 97463
```
As I alluded to earlier, I have seen an issue on Linux which I believe is related but I have been so far unable to repro using this technique on Linux. | process | ipc channel stops delivering messages to cluster workers version also reproduced on and platform os x possibly others unable to repro on linux but i believe it does happen but with less frequency subsystem cluster ipc when many ipc messages are sent between the master process and cluster workers ipc channels to workers stop delivering messages i have not been unable to restore working functionality of the workers and so they must be killed to resolve the issue since ipc has stopped working simply using worker destroy does not work since the method will wait for the disconnect event which never arrives because of this issue i am able to repro on os x by running the following script var cluster require cluster var express require express tested with const workercount const wtmipc const mtwipc if cluster ismaster var workers worker for var i i workercount i worker cluster fork workers worker var workerpongreceivedtime cluster on online function worker worker on message function message var currenttime date now if message type pong workerpongreceivedtime currenttime console log received pong tmaster to worker t message timereceived message timesent tworker to master currenttime message timesent else if message type fromendpoint for var i i mtwipc i worker send type toworker setinterval function var currenttime date now console log sending ping object keys workers foreach function workerpid workers send type ping time date now if currenttime workerpongreceivedtime console log worker missed pings workerpid else var app express app get test function req res for i i wtmipc i process send type fromendpoint res send test app listen function console log server started process on message function message if message type ping process send type pong timesent message time timereceived date now and using apachebench to place the server under load as follows ab n c i see the following for example server started server started sending ping received pong master to worker worker to master received pong master to worker worker to master sending ping received pong master to worker worker to master received pong master to worker worker to master sending ping received pong master to worker worker to master received pong master to worker worker to master sending ping received pong master to worker worker to master received pong master to worker worker to master sending ping received pong master to worker worker to master received pong master to worker worker to master sending ping received pong master to worker worker to master received pong master to worker worker to master sending ping received pong master to worker worker to master received pong master to worker worker to master about requests later sending ping sending ping sending ping sending ping sending ping sending ping sending ping sending ping sending ping sending ping worker missed pings sending ping worker missed pings worker missed pings sending ping worker missed pings worker missed pings as i alluded to earlier i have seen an issue on linux which i believe is related but i have been so far unable to repro using this technique on linux | 1 |
323,070 | 27,694,250,294 | IssuesEvent | 2023-03-14 00:02:10 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | opened | Fix elementwise.test_allclose | Sub Task Ivy API Experimental Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4407111931/jobs/7720324906" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4133384506/jobs/7143229872" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_elementwise.py::test_allclose[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-03-13T16:26:44.4771656Z E TypeError: test_function() missing 1 required keyword-only argument: 'on_device'
2023-03-13T16:26:44.4772111Z E Falsifying example: test_allclose(
2023-03-13T16:26:44.4772574Z E dtype_and_x=(['bfloat16', 'bfloat16'],
2023-03-13T16:26:44.4773088Z E [array([-1], dtype=bfloat16), array([-1], dtype=bfloat16)]),
2023-03-13T16:26:44.4773519Z E rtol=1e-05,
2023-03-13T16:26:44.4773860Z E atol=1e-05,
2023-03-13T16:26:44.4774160Z E equal_nan=False,
2023-03-13T16:26:44.4774586Z E ground_truth_backend='tensorflow',
2023-03-13T16:26:44.4774959Z E test_flags=FunctionTestFlags(
2023-03-13T16:26:44.4775325Z E num_positional_args=2,
2023-03-13T16:26:44.4775652Z E with_out=False,
2023-03-13T16:26:44.4775986Z E instance_method=False,
2023-03-13T16:26:44.4776597Z E test_gradients=False,
2023-03-13T16:26:44.4776940Z E test_compile=False,
2023-03-13T16:26:44.4777279Z E as_variable=[False],
2023-03-13T16:26:44.4777609Z E native_arrays=[False],
2023-03-13T16:26:44.4777949Z E container=[False],
2023-03-13T16:26:44.4778239Z E ),
2023-03-13T16:26:44.4778878Z E backend_fw=<module 'ivy.functional.backends.torch' from '/ivy/ivy/functional/backends/torch/__init__.py'>,
2023-03-13T16:26:44.4779354Z E )
2023-03-13T16:26:44.4779602Z E
2023-03-13T16:26:44.4780390Z E You can reproduce this example by temporarily adding @reproduce_failure('6.68.2', b'AXicY2BkAAMoBaaR2Qc5PuYUSl38hMoEAwCepQju') as a decorator on your test case
</details>
| 1.0 | Fix elementwise.test_allclose - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4407111931/jobs/7720324906" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4133384506/jobs/7143229872" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_elementwise.py::test_allclose[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-03-13T16:26:44.4771656Z E TypeError: test_function() missing 1 required keyword-only argument: 'on_device'
2023-03-13T16:26:44.4772111Z E Falsifying example: test_allclose(
2023-03-13T16:26:44.4772574Z E dtype_and_x=(['bfloat16', 'bfloat16'],
2023-03-13T16:26:44.4773088Z E [array([-1], dtype=bfloat16), array([-1], dtype=bfloat16)]),
2023-03-13T16:26:44.4773519Z E rtol=1e-05,
2023-03-13T16:26:44.4773860Z E atol=1e-05,
2023-03-13T16:26:44.4774160Z E equal_nan=False,
2023-03-13T16:26:44.4774586Z E ground_truth_backend='tensorflow',
2023-03-13T16:26:44.4774959Z E test_flags=FunctionTestFlags(
2023-03-13T16:26:44.4775325Z E num_positional_args=2,
2023-03-13T16:26:44.4775652Z E with_out=False,
2023-03-13T16:26:44.4775986Z E instance_method=False,
2023-03-13T16:26:44.4776597Z E test_gradients=False,
2023-03-13T16:26:44.4776940Z E test_compile=False,
2023-03-13T16:26:44.4777279Z E as_variable=[False],
2023-03-13T16:26:44.4777609Z E native_arrays=[False],
2023-03-13T16:26:44.4777949Z E container=[False],
2023-03-13T16:26:44.4778239Z E ),
2023-03-13T16:26:44.4778878Z E backend_fw=<module 'ivy.functional.backends.torch' from '/ivy/ivy/functional/backends/torch/__init__.py'>,
2023-03-13T16:26:44.4779354Z E )
2023-03-13T16:26:44.4779602Z E
2023-03-13T16:26:44.4780390Z E You can reproduce this example by temporarily adding @reproduce_failure('6.68.2', b'AXicY2BkAAMoBaaR2Qc5PuYUSl38hMoEAwCepQju') as a decorator on your test case
</details>
| non_process | fix elementwise test allclose tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test functional test experimental test core test elementwise py test allclose e typeerror test function missing required keyword only argument on device e falsifying example test allclose e dtype and x e dtype array dtype e rtol e atol e equal nan false e ground truth backend tensorflow e test flags functiontestflags e num positional args e with out false e instance method false e test gradients false e test compile false e as variable e native arrays e container e e backend fw e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case | 0 |
19,756 | 26,124,748,750 | IssuesEvent | 2022-12-28 16:55:13 | bridgetownrb/bridgetown | https://api.github.com/repos/bridgetownrb/bridgetown | opened | Remove the deprecated `serve` command | process | It's been marked deprecated for some time now, seems like a good moment after the release of v1.2 to remove it. | 1.0 | Remove the deprecated `serve` command - It's been marked deprecated for some time now, seems like a good moment after the release of v1.2 to remove it. | process | remove the deprecated serve command it s been marked deprecated for some time now seems like a good moment after the release of to remove it | 1 |
9,622 | 12,560,554,702 | IssuesEvent | 2020-06-07 22:32:48 | jyn514/saltwater | https://api.github.com/repos/jyn514/saltwater | opened | Incomplete function macro call should be an error | bug preprocessor | ### Expected behavior
<!-- A description of what you expected to happen.
You could also paste the output of another compiler,
I like `clang -x c - -Wall -Wextra -pedantic` -->
Function macros which do not have a terminating `)` should give an error. Instead they are silently discarded.
```
$ clang -E replace.c
replace.c:2:2: error: unterminated function-like macro invocation
f (
^
```
### Code
<!-- The code that was not interpreted correctly goes here.
This should also include the error message you got. -->
```c
#define f(a) a
f (
```
<!-- If you know where to find it, include the relevant part of the C standard
There's a copy at http://port70.net/~nsz/c/c11/n1570.html -->
http://port70.net/~nsz/c/c11/n1570.html#6.10.3p4:
> There shall exist a ) preprocessing token that terminates the invocation. | 1.0 | Incomplete function macro call should be an error - ### Expected behavior
<!-- A description of what you expected to happen.
You could also paste the output of another compiler,
I like `clang -x c - -Wall -Wextra -pedantic` -->
Function macros which do not have a terminating `)` should give an error. Instead they are silently discarded.
```
$ clang -E replace.c
replace.c:2:2: error: unterminated function-like macro invocation
f (
^
```
### Code
<!-- The code that was not interpreted correctly goes here.
This should also include the error message you got. -->
```c
#define f(a) a
f (
```
<!-- If you know where to find it, include the relevant part of the C standard
There's a copy at http://port70.net/~nsz/c/c11/n1570.html -->
http://port70.net/~nsz/c/c11/n1570.html#6.10.3p4:
> There shall exist a ) preprocessing token that terminates the invocation. | process | incomplete function macro call should be an error expected behavior a description of what you expected to happen you could also paste the output of another compiler i like clang x c wall wextra pedantic function macros which do not have a terminating should give an error instead they are silently discarded clang e replace c replace c error unterminated function like macro invocation f code the code that was not interpreted correctly goes here this should also include the error message you got c define f a a f if you know where to find it include the relevant part of the c standard there s a copy at there shall exist a preprocessing token that terminates the invocation | 1 |
1,988 | 4,816,845,124 | IssuesEvent | 2016-11-04 11:33:14 | woesterduolf/Mission-reisbureau | https://api.github.com/repos/woesterduolf/Mission-reisbureau | opened | Booking confirm | Boekingsprocess priority: highest Type:Feature | Mockup design (see page 8)
The customer is greeted by the nice text that his booking has been completed. Then he is thanked for choosing our travel agency.
All the relevant booking info is then displayed for convenience.
On the bottom is a button the customer can click to close the booking and go back to the front site.
| 1.0 | Booking confirm - Mockup design (see page 8)
The customer is greeted by the nice text that his booking has been completed. Then he is thanked for choosing our travel agency.
All the relevant booking info is then displayed for convenience.
On the bottom is a button the customer can click to close the booking and go back to the front site.
| process | booking confirm mockup design see page the customer is greeted by the nice text that his booking has been completed then he is thanked for choosing our travel agency all the relevant booking info is then displayed for convenience on the bottom is a button the customer can click to close the booking and go back to the front site | 1 |
541,651 | 15,830,934,263 | IssuesEvent | 2021-04-06 13:05:51 | fossasia/open-event-frontend | https://api.github.com/repos/fossasia/open-event-frontend | closed | captcha check not available some time creating a blockage | Priority: High bug | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Some time while signing up captcha is not available so signup button is not active any how same goes for reset pasword

| 1.0 | captcha check not available some time creating a blockage - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Some time while signing up captcha is not available so signup button is not active any how same goes for reset pasword

| non_process | captcha check not available some time creating a blockage describe the bug some time while signing up captcha is not available so signup button is not active any how same goes for reset pasword | 0 |
331,212 | 24,296,915,108 | IssuesEvent | 2022-09-29 10:50:12 | syetalabs/vue3-google-signin | https://api.github.com/repos/syetalabs/vue3-google-signin | opened | Add a FAQ | documentation | We need a FAQ section on the docs.
I see a lot of people coming to discussions thread to understand how to use custom scopes etc.
It would be wise to provide some examples and add relevant docs as a FAQ | 1.0 | Add a FAQ - We need a FAQ section on the docs.
I see a lot of people coming to discussions thread to understand how to use custom scopes etc.
It would be wise to provide some examples and add relevant docs as a FAQ | non_process | add a faq we need a faq section on the docs i see a lot of people coming to discussions thread to understand how to use custom scopes etc it would be wise to provide some examples and add relevant docs as a faq | 0 |
48,341 | 20,113,229,733 | IssuesEvent | 2022-02-07 16:53:14 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | opened | [Meeting] VZV Metrics Alignment | Type: Meeting Service: Dev Workgroup: VZ | There are multiple places where the counts of the data is inconsistent.
We would like to sort through the places where counts are off, and come to an agreement about what the single source of truth is, and reverse engineer that truth to align with the data that we're displaying in the Vision Zero Viewer and other locations in the VZ suite.
Issues that relate to this problem:
#2413 - VZD | Unit metadata | Inconsistent fatality counts between crash total and unit metadata totals
#7080 - VZE: Update Fatality and Serious Injury counts for accuracy
#6817 - Make sure that injury severity counts sum to consistent figures across VZV
#8355 - The total fatalities in the "By Travel Mode" card does not match the "Fatalities" card. | 1.0 | [Meeting] VZV Metrics Alignment - There are multiple places where the counts of the data is inconsistent.
We would like to sort through the places where counts are off, and come to an agreement about what the single source of truth is, and reverse engineer that truth to align with the data that we're displaying in the Vision Zero Viewer and other locations in the VZ suite.
Issues that relate to this problem:
#2413 - VZD | Unit metadata | Inconsistent fatality counts between crash total and unit metadata totals
#7080 - VZE: Update Fatality and Serious Injury counts for accuracy
#6817 - Make sure that injury severity counts sum to consistent figures across VZV
#8355 - The total fatalities in the "By Travel Mode" card does not match the "Fatalities" card. | non_process | vzv metrics alignment there are multiple places where the counts of the data is inconsistent we would like to sort through the places where counts are off and come to an agreement about what the single source of truth is and reverse engineer that truth to align with the data that we re displaying in the vision zero viewer and other locations in the vz suite issues that relate to this problem vzd unit metadata inconsistent fatality counts between crash total and unit metadata totals vze update fatality and serious injury counts for accuracy make sure that injury severity counts sum to consistent figures across vzv the total fatalities in the by travel mode card does not match the fatalities card | 0 |
428,244 | 12,405,133,023 | IssuesEvent | 2020-05-21 16:43:43 | hasadna/anyway-newsflash-infographics | https://api.github.com/repos/hasadna/anyway-newsflash-infographics | opened | [Bug] | Priority: Medium bug | **Describe the bug**
When time filter value is 1 year ago, text widget display wrong caption
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://anyway-newsflash-infographics.web.app/newsflash/4350
2. Select "1 year ago" on time filter ("שנה אחרונה")
3. See widget `https://anyway-newsflash-infographics.web.app/newsflash/4350`
**Expected behavior**
text should be:
"בשנה 2019"
**Actual behavior**
text is:
"בין השנים 2019- 2019"
**Screenshots**

**Environment**
Browser: Chrome
| 1.0 | [Bug] - **Describe the bug**
When time filter value is 1 year ago, text widget display wrong caption
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://anyway-newsflash-infographics.web.app/newsflash/4350
2. Select "1 year ago" on time filter ("שנה אחרונה")
3. See widget `https://anyway-newsflash-infographics.web.app/newsflash/4350`
**Expected behavior**
text should be:
"בשנה 2019"
**Actual behavior**
text is:
"בין השנים 2019- 2019"
**Screenshots**

**Environment**
Browser: Chrome
| non_process | describe the bug when time filter value is year ago text widget display wrong caption to reproduce steps to reproduce the behavior go to select year ago on time filter שנה אחרונה see widget expected behavior text should be בשנה actual behavior text is בין השנים screenshots environment browser chrome | 0 |
1,547 | 4,154,931,986 | IssuesEvent | 2016-06-16 13:28:04 | Jumpscale/jscockpit | https://api.github.com/repos/Jumpscale/jscockpit | closed | Example Blueprint | process_duplicate | Write required service templates or make sure current one works as expected
See : https://github.com/0-complexity/g8cockpit/blob/master/specs/example_docker_blueprint.md | 1.0 | Example Blueprint - Write required service templates or make sure current one works as expected
See : https://github.com/0-complexity/g8cockpit/blob/master/specs/example_docker_blueprint.md | process | example blueprint write required service templates or make sure current one works as expected see | 1 |
418,618 | 12,200,747,587 | IssuesEvent | 2020-04-30 05:38:18 | GoogleCloudPlatform/cloud-code-intellij | https://api.github.com/repos/GoogleCloudPlatform/cloud-code-intellij | closed | unable to run with Cloud Code: Kubernetes | area/kubernetes priority/p2 | (Please ensure you are running the latest version of Cloud Cloud for IntelliJ with _Help > Check for Updates_.)
- OS: MacOS
- IDE: GoLand
- K8s: Local docker-desktop

**What did you do?**
Run/Debug Cloud Code: Kubernetes
**What did you expect to see?**
up and running
**What did you see instead?**
```
time="2020-04-21T13:35:12+08:00" level=fatal msg="exiting dev mode because first deploy failed: reading manifests: kubectl create: running [kubectl --context docker-desktop create --dry-run -oyaml -f $GOPATH/dir/skaffold-test/k8s-deployment.yaml]\n - stdout: \n - stderr: \"Unable to connect to the server: EOF\\n\": exit status 1"
```
[full log](https://gist.github.com/coolyrat/f95ef161d6f7fd5bc98a30596a9aafb6)
(screenshots are helpful)

It works fine if I execute skaffold debug in the terminal. | 1.0 | unable to run with Cloud Code: Kubernetes - (Please ensure you are running the latest version of Cloud Cloud for IntelliJ with _Help > Check for Updates_.)
- OS: MacOS
- IDE: GoLand
- K8s: Local docker-desktop

**What did you do?**
Run/Debug Cloud Code: Kubernetes
**What did you expect to see?**
up and running
**What did you see instead?**
```
time="2020-04-21T13:35:12+08:00" level=fatal msg="exiting dev mode because first deploy failed: reading manifests: kubectl create: running [kubectl --context docker-desktop create --dry-run -oyaml -f $GOPATH/dir/skaffold-test/k8s-deployment.yaml]\n - stdout: \n - stderr: \"Unable to connect to the server: EOF\\n\": exit status 1"
```
[full log](https://gist.github.com/coolyrat/f95ef161d6f7fd5bc98a30596a9aafb6)
(screenshots are helpful)

It works fine if I execute skaffold debug in the terminal. | non_process | unable to run with cloud code kubernetes please ensure you are running the latest version of cloud cloud for intellij with help check for updates os macos ide goland local docker desktop what did you do run debug cloud code kubernetes what did you expect to see up and running what did you see instead time level fatal msg exiting dev mode because first deploy failed reading manifests kubectl create running n stdout n stderr unable to connect to the server eof n exit status screenshots are helpful it works fine if i execute skaffold debug in the terminal | 0 |
196,930 | 14,896,415,218 | IssuesEvent | 2021-01-21 10:22:22 | MISP/MISP | https://api.github.com/repos/MISP/MISP | closed | Multiple issues with search logs | S: feature incomplete S: stale T: enhancement from:testcases inconsistency topic: API | * [ ] 'POST', 'admin/logs/index' => missing limit/page parameters
```json
{
"code": 500,
"name": "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'Log.limit' in 'where clause'",
"message": "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'Log.limit' in 'where clause'",
"url": "\/admin\/logs\/index",
"error": {
"errorInfo": [
"42S22",
1054,
"Unknown column 'Log.limit' in 'where clause'"
],
"queryString": "SELECT `Log`.`id`, `Log`.`title`, `Log`.`created`, `Log`.`model`, `Log`.`model_id`, `Log`.`action`, `Log`.`user_id`, `Log`.`change`, `Log`.`email`, `Log`.`org`, `Log`.`description`, `Log`.`ip` FROM `misp`.`logs` AS `Log` WHERE ((`Log`.`limit` = ('5')) AND (`Log`.`page` = ('0')) AND (`Log`.`model` = ('User'))) LIMIT 5"
}
}
```
* [ ] The response is sorted from old to new, should be the other way around (or at least configurable) | 1.0 | Multiple issues with search logs - * [ ] 'POST', 'admin/logs/index' => missing limit/page parameters
```json
{
"code": 500,
"name": "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'Log.limit' in 'where clause'",
"message": "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'Log.limit' in 'where clause'",
"url": "\/admin\/logs\/index",
"error": {
"errorInfo": [
"42S22",
1054,
"Unknown column 'Log.limit' in 'where clause'"
],
"queryString": "SELECT `Log`.`id`, `Log`.`title`, `Log`.`created`, `Log`.`model`, `Log`.`model_id`, `Log`.`action`, `Log`.`user_id`, `Log`.`change`, `Log`.`email`, `Log`.`org`, `Log`.`description`, `Log`.`ip` FROM `misp`.`logs` AS `Log` WHERE ((`Log`.`limit` = ('5')) AND (`Log`.`page` = ('0')) AND (`Log`.`model` = ('User'))) LIMIT 5"
}
}
```
* [ ] The response is sorted from old to new, should be the other way around (or at least configurable) | non_process | multiple issues with search logs post admin logs index missing limit page parameters json code name sqlstate column not found unknown column log limit in where clause message sqlstate column not found unknown column log limit in where clause url admin logs index error errorinfo unknown column log limit in where clause querystring select log id log title log created log model log model id log action log user id log change log email log org log description log ip from misp logs as log where log limit and log page and log model user limit the response is sorted from old to new should be the other way around or at least configurable | 0 |